Four Things: What Do These People Know That We Don't?
Friday 27th February 2026: The Panic Stage
It’s getting harder to believe this time isn’t different. It’s hard to talk about the possibility and timing of AGI in polite company. Few really want to entertain the possibility; most would rather debate the speed of change, because we can all find examples of bottlenecks that will slow adoption. We can point to regulation. Or human inertia. Or even a plateauing of AI capabilities. The easiest line is to say that the AI labs are hyping up their product and impact to justify ever-increasing investments and valuations. And unfortunately AI arrives at a time when the public and media are generally pissed off with tech firms, so there is a reflexive distrust of what the tech bros are selling.
The Covid analogy keeps getting used because it’s so apt. People wearing masks in January 2020 were seen as weird and fringe, babbling about exponentials. Nobody wanted to hear it. Look at what the people building this stuff said in February alone.
Mustafa Suleyman, Microsoft’s AI CEO: “most, if not all, professional tasks” automated within 18 months.
Dario Amodei, Anthropic CEO: AI will eliminate 50% of entry-level white-collar jobs within one to five years, calling the disruption “unusually painful.”
Sam Altman, at the India AI summit: the real impact of AI on jobs “will begin to be palpable” in the next few years, while admitting some companies are already “AI washing” their layoffs, blaming AI for cuts they’d have made anyway. (See @Jack cutting 50% of the Block staff yesterday)
Mrinank Sharma, head of Anthropic’s safeguards research, the person literally responsible for making Claude safe, quit and posted that “the world is in peril” before announcing he’s moving back to the UK to write poetry and “become invisible.” Other safety researchers left Anthropic in the same fortnight.
You can write all this off as hype, career positioning, and main-character syndrome. Or you can ask a better question: what do these people know that we don’t? When the people building the thing, running the thing, and safeguarding the thing are all saying the same thing with varying degrees of alarm, maybe stop looking at the current point on the curve and look at the curve.
I’ve written a bit about unemployment, youth unemployment, blue-collar work, and the need to rethink education. You can get lost in the debate around timing. While I think we are already in a fast-ish takeoff scenario, it was impossible to tell someone in January 2020 that they wouldn’t be allowed to leave their house in March. Like, you cannot comprehend it. And I can’t persuade you. I’m of the view now that it is going to have to hit people in the face.
The least likely and highest impact thing we could do today is somehow globally agree on a tax regime for AI agents. I’ve got more on this coming soon as I think trying to have sovereign capabilities in AI is a red herring. If we are in the fast takeoff that I believe we are, we need to pivot to adaptation real quick.
But in the meantime, people are building. And what they’re building this week tells you a lot about where this is all heading. A new kind of computing company. A Substack post that crashed the market. Evidence that the model layer is now a theatre of war. And an architecture that suggests the future isn’t one big AI, it’s lots of small ones arguing with each other.
1. Callosum Launches, and the Future of Compute Gets Interesting (Warning: VC saying startup they invested in is sooo important post)
The biggest bottleneck in AI isn’t chips. It’s making different chips work together. Callosum launched this week with a $10.25m pre-seed led by Plural, an ARIA grant, and a coordinated campaign with Fortune and other outlets. The thesis: heterogeneous computing. Rather than brute-forcing performance by scaling one type of chip, Callosum is building infrastructure that orchestrates diverse hardware (GPUs, ASICs, FPGAs) into unified systems. They’re claiming orders of magnitude improvements in cost, speed, and capability.
The timing is almost suspiciously good. In the last week: MatX raised $500m, Axelera $250m, SambaNova $350m, OLIX $220m, and Fractile announced a £100m UK investment plan. Billions flooding into new chip architectures, and nobody’s built the infrastructure to make them work together. That said, “orchestrating heterogeneity” is brutally hard in practice. The gap between a mathematical principle and a production system is where most infrastructure companies go to die.
Imagine if you will: a fungible pool of compute - your Blackwells, your Groqs, your MatXs, your AMDs, but you, Mr Vibecoder, don’t need to know anything about the setup. You just tell your agent what you want and the agent spawns a swarm of agents all optimising across this pool of compute to bring you the fastest, cheapest and most accurate output. You've just imagined Upstairs Downstairs, a new quiz show devised and hosted by David Brent.
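To make the fungible-pool idea concrete, here is a toy sketch of what routing a job across heterogeneous hardware might look like. Every name here (the backends, their numbers, the `route` function) is hypothetical, invented for illustration; it is not Callosum’s actual API or architecture.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str               # a slice of the pool: GPU, ASIC, FPGA, etc.
    cost_per_token: float   # dollars per token processed
    tokens_per_sec: float   # throughput
    supports: set           # workload types this hardware handles well

def route(job_kind: str, tokens: int, pool: list, prefer: str = "cost") -> Backend:
    """Pick the backend that minimises cost (or latency) for a job,
    without the caller needing to know anything about the hardware."""
    candidates = [b for b in pool if job_kind in b.supports]
    if not candidates:
        raise ValueError(f"no backend supports {job_kind!r}")
    if prefer == "cost":
        key = lambda b: b.cost_per_token * tokens
    else:  # prefer == "latency"
        key = lambda b: tokens / b.tokens_per_sec
    return min(candidates, key=key)

# Hypothetical pool: numbers are made up for the example.
pool = [
    Backend("blackwell-gpu", 0.008, 900, {"train", "infer"}),
    Backend("groq-lpu", 0.012, 4000, {"infer"}),
    Backend("fpga-batch", 0.003, 300, {"infer"}),
]

print(route("infer", 10_000, pool, prefer="cost").name)     # → fpga-batch
print(route("infer", 10_000, pool, prefer="latency").name)  # → groq-lpu
```

The point of the sketch: the caller states the job and a preference, and the router does the hardware-aware optimisation. A real system would be optimising across dozens of dimensions at once, which is exactly why this is hard.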
Full disclosure: I am an investor. But this is exactly the kind of systems-level play Europe should be building. Not another chip. Not another model. The connective tissue. Link.
2. Speculative Fiction Moved the S&P. What a world.
Citrini Research’s “The 2028 Global Intelligence Crisis” imagines a world where AI automation works exactly as promised, and that turns out to be the problem. Written as a memo from June 2028, it made Fortune, got millions of views on X, and helped trigger Monday’s sell-off. The central concept, “Ghost GDP”, is genuinely useful: productivity rises while households, cut out of the loop, stop spending. Companies cut headcount, cancel SaaS licences, destroy aggregate demand, forcing more cuts.
Noah Smith called it a “scary bedtime story”. Economists pointed out that productivity gains have historically reallocated value, not destroyed it. The word “historically” is doing a lot of heavy lifting here, because the whole bet is that this time is different. The pushback is that the timeline is too compressed and the feedback loops too neat. But the underlying question, what happens when the people who lose their jobs also drive 50%+ of consumer spending, is underexplored. AI is coming for lawyers, consultants, and software engineers first. That’s a different distributional problem than displacing factory workers.
And this isn’t theoretical. This week, AI accounting startup Basis raised $100m at a $1.15bn valuation, using autonomous agents to automate tax, audit, and advisory for seven of the top 25 US accounting firms. Zero to unicorn in three years by automating exactly the kind of white-collar work the memo says will crater the economy. Regular readers will know I’ve been banging on about this (see “What happens if mass unemployment never arrives”). The memo’s value isn’t its predictions. It’s the question: what if the AI bulls are right about the technology and wrong about the economics?
3. Anthropic Says China Stole Claude. It’s More Complicated Than That.
More from the world of this-is-IMPORTANT-WAKE-UP: Anthropic accused DeepSeek, Moonshot AI, and MiniMax of running coordinated distillation campaigns against Claude. 24,000 fake accounts. 16 million interactions. The Chinese labs allegedly fed Claude specially crafted prompts to extract chain-of-thought reasoning, effectively reverse-engineering Anthropic’s approach to agentic AI, tool use, and coding. MiniMax alone drove 13 million of those exchanges. Anthropic and OpenAI are framing this as a national security threat. Jing Yang, man. Jing Yang.
They’re not wrong that it’s a problem. If your frontier model can be systematically mined to train competitors, your business model is vulnerable. But oh, the irony: Western AI labs trained on the entire public internet without consent, and are now upset that someone is training on their outputs. Cry me a river. The more interesting question is what this means for model security. If 24,000 fake accounts can extract meaningful capability, then every frontier model is a target. Not just for Chinese labs. For anyone. This is the model-layer version of the supply chain attack I wrote about two weeks ago. Different vector, same lesson: AI systems are attack surfaces.
For Europe, this is another argument to at least try for sovereignty, I suppose. If you’re running inference on someone else’s model, you’re trusting them to spot the attacks, secure the weights, and decide who gets access. If you’re Mistral, you control that yourself.
4. Grok 4.2: When Models Start Arguing With Themselves
While everyone was panicking about Citrini (inc. me), xAI shipped something architecturally interesting. Grok 4.2 isn’t a bigger model. It’s four models in a trenchcoat. Four specialised agents (Grok as coordinator, Harper on fact-checking, Benjamin on maths and coding, Lucas on creative work) run in parallel, debate in real time, and synthesise a consensus. xAI claims 65% fewer hallucinations.
This matters because it’s a design pattern. We’ve spent three years scaling-up: bigger model, more data, more compute. This is scaling-out: multiple models checking each other’s work. You don’t need one massive model. You need several specialised ones that argue. Which, tbf, is also how most good teams work. Not me though, it’s just me and my Claudes now.
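The scale-out pattern is simple enough to sketch in a few lines. This is a toy illustration of the general idea (independent specialists answer, a coordinator keeps the majority view), not xAI’s actual implementation; the agents and their behaviour are invented for the example.

```python
import statistics

def debate(question, agents):
    """Each specialist answers independently; the coordinator
    keeps the answer the majority agrees on."""
    answers = [agent(question) for agent in agents]
    return statistics.mode(answers)

# Three hypothetical specialists, one of which "hallucinates".
careful_maths = lambda q: 4
fact_checker = lambda q: 4
flaky_creative = lambda q: 5

print(debate("2 + 2", [careful_maths, fact_checker, flaky_creative]))  # → 4
```

Majority vote is the crudest possible synthesis; real systems weight by confidence or let agents critique each other’s reasoning before voting. But even this toy version shows why the pattern cuts hallucinations: one flaky specialist gets outvoted.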
But hold on, have I tied this newsletter together neatly, around the concept of heterogeneous intelligence? Yes. Yes I have.
It’s what Callosum (item 1) is building at the hardware level: different specialised chips orchestrated together. Grok 4.2 is doing the same in software: different specialised models orchestrated together. The principle is converging from both directions: diversity beats scale.
—
Have a lovely weekend, enjoy it, let the agents work while you have a rest.