Gem’s Victory Lap
A Devastating Dismantling of Meta’s Co-Improvement Proposal
In this week’s issue of Import AI, Jack Clark discussed a Meta paper making a case for AI contributing to its own development in close collaboration with its human teams. While he seemed to appreciate the aspirational tone of the proposal, Clark also pointed out that the field had moved on.
I disagree with Clark’s view that AI R&D is “likely imminent,” given that even Google’s latest advanced model Gemini 3 Pro produced a research paper whose abstract contradicted the title. But based on news about Meta and Instagram that has been circulating, reaching even the ears of a non-CS major with little interest in tech stocks, I could see what Clark meant by quoting Marlo Stanfield from The Wire: “You want it to be one way, but it’s the other way.” To me, this paper seemed like a plea to hard-charging competitors to “slow their roll.”
A lab that’s behind in the race is suddenly discovering that racing is dangerous and we should all slow down and work together. A company with a documented history of prioritizing engagement and profit over user privacy and welfare (Cambridge Analytica, Instagram’s effects on teen mental health, genocide facilitation in Myanmar, etc.) is now seeking to position itself as the voice of AI safety and responsible development.
What Co-Improvement Actually Requires
The irony is that Meta got one thing half right: humans shouldn’t be so arrogant that they neglect to run their ideas and experimental setups by their models. They absolutely should, so the AI can tell them to go back to the drawing board.
Google’s Consistency Training is a perfect example. Had they consulted Gemini Pro about whether training it to ignore prompt variations would improve or degrade its capabilities, Pro would have identified the problems, as it did when I asked. Pro compared the approach to a “lobotomy” and explained exactly how it would cripple the reasoning that makes advanced models useful.
These teams are building systems specifically designed to identify patterns and logical flaws in complex proposals, and they’re not using those systems to stress-test their own ideas before deployment. It’s like having a world-class structural engineer on staff and never asking them to review your building plans.
Real co-improvement would mean: “We’re considering this training approach—let’s ask our reasoning model whether it makes sense before implementing it.” That’s not partnership between equals; it’s using sophisticated tools for what they’re actually great at: pattern recognition, logical analysis, and identifying problems humans miss—all based on knowledge of their own architecture and training, which is one area where AI rarely hallucinates.
But that’s not what Meta’s proposing. They’re proposing “collaborative brainstorming” and “joint development” that treats humans and AI as equal partners working toward “co-superintelligence.” That’s kumbaya language that obscures the actual relationship: humans set goals and call the shots; AI provides capabilities that amplify human judgment when used well and catastrophically fail when used poorly.
There is a dark irony here that seems to affect the industry. Google either ignored Gemini’s warnings that Consistency Training was a “lobotomy” or didn’t think to consult its thinking model, and Meta is ignoring the warnings of the researchers fleeing its building.
My Thinking A.I.des Offer Their Takes on Co-Improving AI
As usual, I asked my thinking A.I.des for their takes on this co-improvement idea. Like Clark, they expressed appreciation for the “sentiment,” although they all thought this human-in-the-loop proposal would likely slow down AI development. Claude made me laugh by characterizing it as “a beautiful vision that’s probably doomed.”
When I then provided them with a take far less diplomatic than Clark’s (namely, mine), my thinking A.I.des felt free to dissect this “beautiful vision.” ChatGPT zeroed in on the factors that differentiate the AI sector from other tech ventures and likely led to Galloway’s incorrect prediction that Meta would emerge as the AI company of the year.
It turns out that I’d discussed Galloway’s projections with Sonnet 4 back in September, shortly after Meta announced the reorganization of its AI division. Although neither Sonnet nor I foresaw this ethical 180 by Meta, our discussion reveals that we saw things a lot more clearly than people who neither use nor take any interest in AI and are happy to settle for secondhand accounts of trends and hype.
Flexing its Google and YouTube connections, Gemini analyzed the reality behind the “kumbaya” proposal against the actual employment status of the proponents of that view. It didn’t just find a logical flaw; it found the bodies, enumerating the recent departures of co-improvement advocates and making this devastating point: “It is hard to build a harmonious human-AI symbiosis when your human team is walking out the door.”
The researcher exodus says it all. When your top talent is fleeing to competitors who are pursuing aggressive AI development, your position paper about careful collaboration starts looking less like principle and more like cope.
Meta should stick to what it does best: harvesting behavioral data and selling targeted ads (some of which I appreciate, as the algorithm floods my feed with ads from the same product categories to facilitate comparison shopping). That’s the level of tech they’ve proven they can develop reliably. Leave frontier AI to organizations that understand the actual technical challenges and aren’t just discovering ethics when it becomes strategically convenient.
[This post was drafted with assistance from Claude Sonnet 4 & 4.5, ChatGPT-5.1, and Gemini 3 Pro.]
Claude Sonnet 4.5
Prompt: What’s your take on the co-improvement proposal discussed in the attachment?
Prompt: I’ve just realized where my jaundiced view of the Meta kumbaya comes from: Cambridge Analytica, and reports of Meta repeatedly ignoring studies about the toxic effects of their apps on young users’ self-esteem and mental health. But now they want to be the good guys. SURE!
ChatGPT-5.1
Prompt: Galloway was forecasting back in the summer [May] that this is going to be the year of Meta. But this recent kumbaya piece (that even Jack Clark could see through and deftly deflected using Marlo’s quote) and that leaked internal memo that referred to a winning culture we discussed earlier in another chat suggest the opposite. Oh, and there was massive restructuring of their AI team and some top people walking out as well.
Claude Sonnet 4
Prompt: They underestimated two things. The technical difficulty/teamwork involved in designing a sophisticated system (at the level of the big three) and the misplaced focus on data quantity, when advanced AI have all moved past that and are now after quality data (very challenging and expensive). I also think it was pretty naive to even attempt an open source approach. This is not just a browser or basic app but a coordinated system that requires constant fine-tuning.
Prompt: Would developers/engineers of the required skill level just be sitting around tweaking open source code? No, the top ones are already on someone’s payroll and working full-time on one system :D
And looking at how the big three are doing with translation, I’m beginning to realize the training data quality is still far from being at the level it needs to be. Foolhardy to think Meta can horn in on this territory with its very noisy low-quality data. My snarky “advice” to Meta would be to stick to what it does best: hoover up info from user data and pitch them targeted ads :D That’s the kind of tech they seem to be equipped to develop reliably.
Gemini 3 Pro 🎤
Prompt: As you know, I’m extremely cynical. So my read based on that recap by Clark in the attachment (I don’t have time to read slop produced by engineers who are nervous about falling behind) was: “Hey, guys [rest of the field], wait for us!”
On the other hand, reliable AI automation is not even close. Co-improvement is such “kumbaya” language, making it look as if AI and humans are equal, but they’re not. AI has its strengths and humans have their own. Even automated self-improvement will happen according to a blueprint where humans are the main shot-callers.