“Too Dangerous to Release” — Or Just Too Expensive? The Real Reason Anthropic Is Hiding Its Most Powerful AI

In the first week of April 2026, Anthropic quietly made history — and then deliberately kept most people from accessing it.

The company launched Project Glasswing, a gated security research program built around a new frontier model called Claude Mythos Preview. Unlike virtually every other major AI release in recent memory, Anthropic didn’t post a blog, open a waitlist, or invite developers to start building. Instead, it hand-picked roughly 40 organizations — mostly enterprise cybersecurity firms, cloud providers, and critical infrastructure operators — and told the rest of the world: not yet, maybe not ever, and here is why.

The explanation was dramatic. According to Anthropic’s own materials for Glasswing, Mythos had crossed a threshold that no prior commercial AI model had reached: autonomous, real-time discovery and exploitation of zero-day vulnerabilities at scale. Releasing such a model broadly without safeguards, Anthropic argued, could destabilize the digital infrastructure the modern world depends on. The gated rollout, the company said, was designed to give defenders a head start before the offensive capabilities became widely accessible.

It was a compelling narrative. It was also one that not everyone found entirely convincing.

Over the weeks that followed, a parallel story began to emerge — one that pointed not to principled restraint, but to more prosaic constraints: compute costs, infrastructure capacity, and the brutal economics of running a frontier AI model at scale. The question of which story is true — or whether both are — has significant implications for how we understand Anthropic’s strategy, the AI industry’s relationship with safety rhetoric, and the future of frontier model access.

The Security Case: Evidence and Argument

The strongest evidence for security as the primary driver of restricted access comes directly from Anthropic itself, which makes it either the most credible source or the most interested party — depending on your prior.

The Glasswing announcement states explicitly: “Without the necessary safeguards, these powerful cyber capabilities could be used to exploit critical software.” The Frontier Red Team research makes the case in technical terms, arguing that restricted initial release is designed to “buy time for defenders” before equivalent capabilities proliferate through other channels — whether from other AI labs, state actors, or eventual model leakage.

The Compute Constraint Case: A Parallel Story

While Anthropic was announcing Glasswing, a separate set of events was unfolding that told a very different story about the company’s operational reality.

On April 6, 2026 — the day before the Glasswing launch — Anthropic announced a major compute expansion partnership with Google and Broadcom, described as involving multiple gigawatts of TPU capacity. The language in the announcement was telling: the capacity was needed, the company said, “to power our frontier Claude models and help us serve extraordinary demand.” Reuters’ contemporaneous reporting put the figure at approximately 3.5 gigawatts. The critical detail: this capacity was expected to come online starting in 2027, not 2026.

Four days later, on April 10, Reuters reported that Anthropic had struck a separate deal with CoreWeave to bring additional computing capacity online “later this year.” The CoreWeave deal looked like exactly what it was: a company filling a near-term gap while waiting for long-cycle infrastructure to materialize. And on April 9, Reuters also reported that Anthropic was exploring designing its own AI chips — a move explicitly tied to “a broader shortage of AI chips” and estimated to cost roughly $500 million.

These three stories, arriving within days of each other, painted a picture of a company under acute compute pressure across multiple time horizons: burning money to lease capacity now, negotiating for large-scale capacity years away, and beginning to consider the capital-intensive option of owning its own chip supply chain.

Conclusion

That is the honest answer to the question "why isn't Mythos available?": not simply because it is too powerful for the public, and not simply because Anthropic lacks the servers. Both of those things are partly true, they interact in ways that make a gated rollout the sensible operational choice, and the company has strong incentives to tell you only the more flattering half of that story.

Understanding both parts is the beginning of taking AI deployment decisions seriously.
