AI: Jobs, Power & Money
2 May

BoE flags agentic AI systemic risk

15:17 UTC

The Bank of England's Financial Policy Committee has asked the Bank and the Financial Conduct Authority to carry out further work on agentic AI in payments and financial markets, noting that three-quarters of UK financial firms now deploy AI.

Economic · Developing
Key takeaway

UK regulators classified agentic AI risk as systemic; US equivalents convened informally with no published follow-up.

The Bank of England's Financial Policy Committee (FPC), in the record of its April 2026 meeting published on 10 April, directed the Bank of England and the Financial Conduct Authority (FCA) to undertake further work on agentic AI use in payments and financial markets. The FPC noted that 75% of UK financial firms now deploy AI and assessed that the systemic risk from agentic deployment is "likely to increase rapidly". The Commons Treasury Committee has separately called for AI-specific stress tests and clearer FCA guidance by the end of 2026.

Agentic AI is the class of system able to take sustained autonomous action across multiple steps, the operational profile AISI separately evaluated on Mythos. The FPC's directive treats that capability as a financial stability concern rather than a product feature, and requires the FCA to develop supervisory tools calibrated to it. The OBR had already modelled a worst-case scenario of additional UK unemployment from AI displacement, and the Bank of England had committed to stress-test an AI shock; the FPC's April record moves the work from modelled worst case to formal supervisory mandate.

The directive sits alongside the Bessent-Powell emergency convening of 8 April as evidence that AI capability risk has entered financial-stability frameworks on both sides of the Atlantic. The UK is proceeding through institutional channels with a published record and a dated follow-up deliverable; the US convening was ad hoc, with no published readout and no scheduled agency response. Whether Mythos or its successors appear on the formal agenda of the next Financial Stability Oversight Council meeting is the nearest-term test of whether the US is running the same process informally.

For UK financial firms, the likely near-term consequence is a formal data request from the FCA, covering which agentic AI systems are in production, what oversight is in place, and how model failures propagate through payments flows. That supervisory layer does not exist in the US, where the agencies best-placed to build it are the same ones the Hawley-Warner coalition has spent six weeks asking to count AI displacement.

Deep Analysis

In plain English

The Bank of England's Financial Policy Committee (the body that monitors risks to the UK's financial system) has told the Bank and the financial regulator to do more work on the risks of AI acting autonomously in payments and financial markets. Three-quarters of UK financial firms now use AI in some form. The concern is that if these systems all behave similarly in a crisis, they could amplify a market shock faster than regulators or humans can respond.

Root Causes

The 75% deployment figure reflects adoption driven by cost rather than oversight: AI tools reduce operational headcount in compliance, document processing and customer service without requiring material new regulatory approval.

The FPC's concern is that this incremental adoption has created aggregate systemic exposure that no individual firm's risk framework captures: each firm's AI deployment looks small relative to its total operations, while the aggregate across 75% of the sector represents a correlated failure mode.

The assessment that systemic risk from agentic deployment is "likely to increase rapidly" reflects the shift from narrow AI tools to systems capable of executing multi-step decisions in payments and market-making: exactly the capability profile AISI confirmed in Mythos. Once agentic systems move from document processing to settlement and execution, correlated failures can propagate faster than any existing circuit-breaker mechanism calibrated for human decision timescales.

First Reported In

Update #6 · Three federal surveys, one 34-to-1 gap

US Treasury · 16 Apr 2026
Causes and effects
This Event
BoE flags agentic AI systemic risk
The UK has placed AI capability risk inside a formal financial stability framework. The US equivalent, Bessent and Powell's 8 April convening, remains ad hoc with no published follow-up.
Different Perspectives
UK financial regulators (BoE FPC / FCA)
The Bank of England's April FPC directive on agentic AI in payments was scoped around one frontier model; AISI confirmed a second model cleared the same 32-step threshold on 1 May. The supervisory architecture is one model behind the capability it was built to contain.
Indian IT sector workers (TCS, Infosys, Wipro)
TCS posted its first annual revenue decline in the modern era, Infosys shed 8,400 workers in a quarter, and Wipro hit its zero-fresher target. Western Big Tech's AI automation is cannibalising the offshored-services model that employs roughly five million Indian IT workers.
Chinese workers (Hangzhou and Beijing plaintiffs)
Workers Zhou and Liu won cases that established a two-court doctrinal chain: AI adoption is the employer's deliberate strategy, placing the cost of displacement on the employer rather than the worker. Any Chinese employee facing AI-driven dismissal now has a citable legal route that American, British, and European counterparts do not.
Chinese government, courts, and domestic employers
The Hangzhou rulings were released on Workers' Day eve alongside the Ministry of Human Resources' recognition of 42 new AI occupations. Domestic firms now face mandatory retraining obligations; the Orgvue estimate of 8-14 months added to displacement timelines will feature in employer compliance briefings throughout 2026.
EU regulators and European Parliament
The second Digital Omnibus trilogue collapsed without agreement on 28 April; the third is scheduled for 13 May with the binding employer AI-literacy obligation still contested. Brussels is arguing over a non-binding encouragement clause while Beijing's courts have already bound employers.
US legislators (Warner, Rounds, Hawley, Sanders)
Warner and Rounds produced the Economy of the Future Commission Act, the most concrete federal vehicle still moving, endorsed by the companies it would notionally regulate. The Sanders-AOC moratorium was killed by Democratic senators; the Hawley-Warner disclosure bill remains in committee with no floor date.