AI: Jobs, Power & Money
16 April

AISI confirms Mythos 20-hour attack chain

3 min read
13:29 UTC

The UK AI Security Institute's independent evaluation of Claude Mythos Preview found no single-task superiority over rival models, but confirmed a genuine autonomous capability: a 32-step attack chain equivalent to 20 hours of trained-human work.

Economy · Developing
Key takeaway

AISI confirmed Mythos can run 20 hours of trained-human work autonomously, the capability that most directly substitutes for salaried labour.

The UK AI Security Institute (AISI) published an independent evaluation of Anthropic's Claude Mythos Preview on 15 April 2026. On isolated capture-the-flag (CTF) tasks, Mythos scored above 85%, but rival frontier models (GPT-5.4, Claude Opus 4.6 and Codex 5.3) fell within 5 to 10 percentage points, so Mythos showed no single-task superiority. In AISI's 32-step "The Last Ones" benchmark, however, Mythos autonomously completed a sequence the Institute estimates would take a trained human roughly 20 hours, with no human prompting between steps.

AISI is the UK government body established to evaluate the safety of frontier AI models; its evaluation is the first external assessment of Mythos since Anthropic distributed restricted access to twelve founding partners under Project Glasswing on 8 April. Anthropic's marketing had emphasised thousands of zero-day vulnerabilities discovered by the model; Tom's Hardware reported on 9 April that those claims rested on only 198 manual reviews. AISI's CTF findings partly vindicate that critique: Mythos is not dramatically more capable than competitors at short, bounded tasks.

The attack-chaining result is the capability that matters. Sustained autonomous execution over 32 steps and roughly 20 hours is the operational profile a trained human analyst, paralegal or junior engineer currently provides inside a bank, law firm or software team. It is also the profile that the emergency convening of Wall Street CEOs at Treasury on 8 April, called by Scott Bessent and Jerome Powell, was asked to assess. Treasury and the Fed convened promptly on a capability that federal agencies could not themselves verify; AISI's 20-hour-human-equivalent figure is the first external confirmation that the convening was warranted on substance.

For the workforce implication, the relevant dimension is not Mythos's cybersecurity reach but its ability to replace trained-human throughput at chain-of-task scale. That capability is what JPMorgan CEO Jamie Dimon described in February when he told the bank's investor meeting that AI has led to internal redeployment, covered elsewhere in this update. Every original Glasswing partner, and the additional five named in Anthropic's 7 April system card, will have to integrate the attack-chain profile into internal risk frameworks during live deployment.

The evaluation was accessed via a third-party summary from Results Sense rather than AISI's primary publication, so specific scores should be verified against the Institute's direct release when it becomes available. The methodology point, however, is solidly established: Mythos's material advantage is durability, not speed, and durability is the AI capability that most directly substitutes for salaried human labour.

Deep Analysis

In plain English

A UK government body called the AI Security Institute tested Anthropic's most advanced AI model, Mythos, and found that it can independently complete a complex cybersecurity attack across 32 separate steps, work that would take a trained human about 20 hours. This confirms a capability distinct from the headline claims: chaining together a full 32-step attack sequence autonomously, rather than finding a single flaw. This matters for jobs because the same autonomous multi-step capability that can conduct a security attack can also conduct many complex knowledge-work tasks without human oversight.

Root Causes

The attack-chaining capability that AISI confirmed is structurally distinct from any prior evaluation framework because it is an emergent property of model scale rather than a designed feature.

Existing regulatory frameworks (including the EU AI Act's high-risk classification system and the US Executive Order 14110 reporting requirements) were designed around discrete capabilities such as facial recognition accuracy and loan decision bias. They have no measurement category for 'sustained multi-step autonomous execution' as a risk dimension.

The ASL abandonment in Anthropic's own system card (event index 6) formalises this: capability thresholds cannot capture emergent attack-chaining because the capability arises from combining individually non-dangerous steps. This is the same structural challenge that makes nuclear non-proliferation frameworks inadequate for dual-use biotechnology: the dangerous capability is not in any single component.

First Reported In

Update #6 · Three federal surveys, one 34-to-1 gap

UK AI Security Institute (via Results Sense) · 16 Apr 2026
Causes and effects
This Event
AISI confirms Mythos 20-hour attack chain
The first external confirmation that the Treasury-Fed emergency convening on 8 April was warranted on capability grounds rather than on Anthropic's marketing. Attack chaining is the capability most directly relevant to autonomous task completion, and therefore to white-collar workforce displacement.
Different Perspectives
Oxford Economics
Concluded AI's role in recent layoffs is 'overstated,' finding companies are not replacing workers with AI at scale. Identified slowing growth, weak demand, and cost pressure as the actual drivers.
Ambrish Shah, Systematix Group
Warned AI coding tools will erode Indian IT firms' labour-arbitrage growth model by reducing enterprise dependency on large vendor teams.
South Korean government
Enacted the world's second comprehensive AI law, choosing an innovation-first framework over prescriptive employment protections — a deliberate contrast to the EU's regulatory approach.
Corporate executives executing AI-driven cuts
Frame workforce reductions as existential necessity. Crypto.com CEO Kris Marszalek and Block CEO Jack Dorsey both described AI adoption as a survival imperative, with equity markets reinforcing the message through immediate share-price gains.
Chinese government (Wang Xiaoping)
Positions AI as a job-creation engine to absorb 12.7 million annual graduates and offset 300 million retirements, directly contradicting domestic economist Cai Fang's warning that AI job destruction precedes creation.
Klarna and companies reversing AI cuts
Klarna's public reversal — rehiring the human agents it replaced with AI after customer satisfaction collapsed — validates Gartner's prediction that half of AI-driven service cuts will be undone by 2027.