AI: Jobs, Power & Money
2 May

Anthropic withholds Mythos from public release

2 min read
15:17 UTC

Anthropic's most capable model scored 83.1% on vulnerability reproduction but will not be released publicly, going instead to twelve partners through a $100 million restricted programme.

Economic · Developing
Key takeaway

Anthropic set the precedent for withholding a frontier model from public release over systemic risk.

Anthropic released Claude Mythos Preview exclusively to twelve partner organisations through Project Glasswing on 8 April 2026, backed by $100 million in model usage credits 1. The model autonomously identified thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old OpenBSD flaw that had survived five million automated tests. On the CyberGym benchmark it scored 83.1% on vulnerability reproduction, compared with 66.6% for Anthropic's previous top model.

Anthropic has explicitly stated it will not release Mythos to the public. The twelve Glasswing partners include AWS, Apple, Google, Microsoft, CrowdStrike, Palo Alto Networks, and JPMorgan. Goldman Sachs, another partner, published displacement research the same week showing AI substitutes 25,000 jobs per month, placing these institutions on both sides of the AI capability and labour displacement story.

A Tom's Hardware review challenged the marketing: the "thousands" claim rested on only 198 manual reviews, and many flagged flaws were in outdated software 2. The Bessent-Powell emergency meeting suggests federal regulators took the risk seriously regardless.

Deep Analysis

In plain English

Every major AI model so far has been made available to the public, either free or via subscription. Anthropic has broken that pattern. Claude Mythos Preview is restricted to twelve partner organisations, including tech giants and financial institutions. The model can automatically find previously unknown security flaws in software at a scale never before seen from an AI system. Anthropic's position is that the capability is powerful enough that releasing it widely would create unacceptable risk: the same capability that makes the model useful to defenders could be turned against them by attackers. So it is being distributed under a controlled programme called Project Glasswing, with $100 million in subsidised usage credits. A technical review by Tom's Hardware found that some of the specific claims were overstated, but the underlying capability gap the model represents is real.

Root Causes

Anthropic's capability assessment rests on the CyberGym benchmark jump from 66.6% to 83.1%, a 16.5-percentage-point improvement in autonomous vulnerability reproduction. Anthropic's own research on 'observed exposure' (ID:1402) shows computer programmers face 75% task coverage from Claude-class models; Mythos's security capabilities represent the first instance where the coverage figure has operational implications beyond individual productivity.

The restriction decision reflects Anthropic's founding premise that AI safety and capability development must remain linked. Project Glasswing's $100 million credit allocation is structured as a subsidy for defensive deployment, not a commercial launch, which is itself novel for a frontier model.

What could happen next?
  • Precedent

    The first frontier AI model explicitly withheld from public release establishes capability-gating as a legitimate deployment option for safety-constrained AI systems.

    Medium term · 0.85
  • Risk

    The twelve Glasswing partners, which include both defensive (CrowdStrike, Palo Alto) and dual-use (AWS, Google, Microsoft) organisations, may deploy Mythos capabilities in ways beyond Anthropic's stated defensive intent.

    Short term · 0.62
  • Opportunity

    Security professionals who can interpret and direct AI-identified vulnerability data at scale represent a new premium-tier role the model creates even as it automates routine scanning.

    Medium term · 0.71
First Reported In

Update #5 · The model they won't release

Anthropic· 10 Apr 2026
Different Perspectives
UK financial regulators (BoE FPC / FCA)
The Bank of England's April FPC directive on agentic AI in payments was scoped around one frontier model; AISI confirmed a second model cleared the same 32-step threshold on 1 May. The supervisory architecture is one model behind the capability it was built to contain.
Indian IT sector workers (TCS, Infosys, Wipro)
TCS posted its first annual revenue decline in the modern era, Infosys shed 8,400 workers in a quarter, and Wipro hit its zero-fresher hiring target. Western Big Tech's AI automation is cannibalising the offshored-services model that employs roughly five million Indian IT workers.
Chinese workers (Hangzhou and Beijing plaintiffs)
Workers Zhou and Liu won cases that established a two-court doctrinal chain: AI adoption is the employer's deliberate strategy, placing the cost of displacement on the employer rather than the worker. Any Chinese employee facing AI-driven dismissal now has a citable legal route that American, British, and European counterparts do not.
Chinese government, courts, and domestic employers
The Hangzhou rulings were released on Workers' Day eve alongside the Ministry of Human Resources' recognition of 42 new AI occupations. Domestic firms now face mandatory retraining obligations; the Orgvue estimate of 8-14 months added to displacement timelines will feature in employer compliance briefings throughout 2026.
EU regulators and European Parliament
The second Digital Omnibus trilogue collapsed without agreement on 28 April; the third is scheduled for 13 May with the binding employer AI-literacy obligation still contested. Brussels is arguing over a non-binding encouragement clause while Beijing's courts have already bound employers.
US legislators (Warner, Rounds, Hawley, Sanders)
Warner and Rounds produced the Economy of the Future Commission Act, the most concrete federal vehicle still moving, endorsed by the companies it would notionally regulate. The Sanders-AOC moratorium was killed by Democratic senators; the Hawley-Warner disclosure bill remains in committee with no floor date.