When Anthropic announced last month that it would restrict Claude Mythos to roughly 40 organisations, the reaction split along predictable lines. Defenders called it responsible. Critics called it paternalistic. A few pointed out that independent researchers had quickly replicated some of Mythos’s capabilities using existing tools, suggesting the dangers had been overstated. Both camps missed the more important point.
Anthropic did something genuinely difficult: it made a considered, public judgement about the risks of its own technology and acted on it, accepting the criticism that would follow. It shared access with critical infrastructure operators so they could patch vulnerabilities before malicious actors found them. It was transparent about its reasoning. That’s not a common corporate reflex, particularly when the instinct to ship is strong and the competitive pressure to do so is stronger. It deserves credit, not a pile-on.
But what the Mythos episode also reveals is how much weight is being placed on individual companies to make calls that really need an institutional framework behind them. The dual-use problem at the heart of this debate – the same capability that finds vulnerabilities can exploit them – isn’t new. Cybersecurity has wrestled with it for decades. What’s new is the scale and speed at which AI amplifies it, and the absence of any agreed mechanism for deciding who gets access to what, and when.
Anthropic shouldn’t have to be the sole arbiter of those questions. No company should – not because they can’t be trusted, but because decisions of this consequence shouldn’t rest on any single actor’s judgement, however careful. The legitimacy problem is structural, not personal.
I’ve spent two decades investing at the infrastructure layer – in technology, life sciences, the systems that underpin things most people take for granted. The pattern is always the same: the technology moves faster than the institutions designed to govern it, and the gap gets filled, by default, by whoever built the thing. That arrangement works until it doesn’t, and by the time it doesn’t, the consequences are hard to reverse.
The White House is now considering a formal review process for new AI models. The instinct is right. What matters is whether the resulting structure has genuine technical independence and enough speed to be useful – a review process that takes longer than a product cycle is no process at all. There’s also a real risk that it becomes, in practice, a consultation with the people who have the most to gain from light-touch oversight.
Companies like Anthropic are doing their best in the absence of something better. The task now is to build that something better – the independent, technically credible, fast-moving institutional layer that can make these calls with proper legitimacy. The next Mythos won’t wait for it to arrive.