AI: Jobs, Power & Money
10 Apr

Anthropic withholds Mythos from public release

16:54 UTC

Anthropic's most capable model scored 83.1% on vulnerability reproduction but will not be released publicly, going instead to twelve partners through a $100 million restricted programme.

Politics · Developing
Key takeaway

Anthropic set the precedent for withholding a frontier model from public release over systemic risk.

Anthropic released Claude Mythos Preview exclusively to twelve partner organisations through Project Glasswing on 8 April 2026, backed by $100 million in model usage credits [1]. The model autonomously identified thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old OpenBSD flaw that had survived five million automated tests. On the CyberGym benchmark it scored 83.1% on vulnerability reproduction, compared with 66.6% for Anthropic's previous top model.

Anthropic has explicitly stated it will not release Mythos to the public. The twelve Glasswing partners include AWS, Apple, Google, Microsoft, CrowdStrike, Palo Alto Networks, and JPMorgan. Goldman Sachs, another partner, published displacement research the same week showing AI substituting for 25,000 jobs per month, placing these institutions on both sides of the AI capability and labour displacement story.

A Tom's Hardware review challenged the marketing: the "thousands" claim rested on only 198 manual reviews, and many flagged flaws were in outdated software [2]. The Bessent-Powell emergency meeting suggests federal regulators took the risk seriously regardless.

Deep Analysis

In plain English

Every major AI model so far has been made available to the public, either free or via subscription. Anthropic has broken that pattern. Claude Mythos Preview is restricted to twelve partner organisations, including tech giants and financial institutions. The model can automatically find previously unknown security flaws in software at a scale never before seen from an AI system. Anthropic's position is that the capability is powerful enough that releasing it widely would create unacceptable risk: the same capability that makes the model useful to defenders could be used by attackers. So it is being distributed under a controlled programme called Project Glasswing, with $100 million in subsidised usage credits. A technical review by Tom's Hardware found that some of the specific claims were overstated, but the underlying capability gap the model represents is real.

Root Causes

Anthropic's capability assessment rests on the CyberGym benchmark jump from 66.6% to 83.1%, a 16.5-percentage-point improvement in autonomous vulnerability reproduction. Anthropic's own research on 'observed exposure' (ID:1402) shows computer programmers face 75% task coverage from Claude-class models; Mythos's security capabilities are the first case where that coverage figure carries operational implications beyond individual productivity.

The restriction decision reflects Anthropic's founding premise that AI safety and capability development must remain linked. Project Glasswing's $100 million credit allocation is structured as a subsidy for defensive deployment, not a commercial launch, which is itself novel for a frontier model.

What could happen next?
  • Precedent

    The first frontier AI model explicitly withheld from public release establishes capability-gating as a legitimate deployment option for safety-constrained AI systems.

    Medium term · 0.85
  • Risk

    The twelve Glasswing partners, which include both defensive (CrowdStrike, Palo Alto) and dual-use (AWS, Google, Microsoft) organisations, may deploy Mythos capabilities in ways beyond Anthropic's stated defensive intent.

    Short term · 0.62
  • Opportunity

    Security professionals who can interpret and direct AI-identified vulnerability data at scale represent a new premium-tier role, created by the model even as it automates routine scanning.

    Medium term · 0.71
First Reported In

Update #5 · The model they won't release

Anthropic · 10 Apr 2026
Different Perspectives
Oxford Economics
Concluded AI's role in recent layoffs is 'overstated,' finding companies are not replacing workers with AI at scale. Identified slowing growth, weak demand, and cost pressure as the actual drivers.
Ambrish Shah, Systematix Group
Warned AI coding tools will erode Indian IT firms' labour-arbitrage growth model by reducing enterprise dependency on large vendor teams.
South Korean government
Enacted the world's second comprehensive AI law, choosing an innovation-first framework over prescriptive employment protections — a deliberate contrast to the EU's regulatory approach.
Corporate executives executing AI-driven cuts
Frame workforce reductions as existential necessity. Crypto.com CEO Kris Marszalek and Block CEO Jack Dorsey both described AI adoption as a survival imperative, with equity markets reinforcing the message through immediate share-price gains.
Chinese government (Wang Xiaoping)
Positions AI as a job-creation engine to absorb 12.7 million annual graduates and offset 300 million retirements, directly contradicting domestic economist Cai Fang's warning that AI job destruction precedes creation.
Klarna and companies reversing AI cuts
Klarna's public reversal — rehiring the human agents it replaced with AI after customer satisfaction collapsed — validates Gartner's prediction that half of AI-driven service cuts will be undone by 2027.