AI: Jobs, Power & Money
16APR

BoE flags agentic AI systemic risk

3 min read · 13:29 UTC

The Bank of England Financial Policy Committee asked the BoE and the Financial Conduct Authority for further work on agentic AI in payments and financial markets, noting that three-quarters of UK financial firms now deploy AI.

Economy · Developing
Key takeaway

UK regulators classified agentic AI risk as systemic; US equivalents convened informally with no published follow-up.

The Bank of England's Financial Policy Committee (FPC), in the record of its April 2026 meeting published on 10 April, directed the Bank of England and the Financial Conduct Authority (FCA) to undertake further work on agentic AI use in payments and financial markets. The FPC noted that 75% of UK financial firms now deploy AI and assessed that the systemic risk from agentic deployment is "likely to increase rapidly". The Treasury Committee has separately called for AI-specific stress tests and clearer FCA guidance by the end of 2026.

Agentic AI is the class of system able to take sustained autonomous action across multiple steps, the operational profile AISI separately evaluated on Mythos. The FPC's directive treats that capability as a financial stability concern rather than a product feature, and requires the FCA to develop supervisory tools calibrated to it. The OBR had already modelled a worst-case scenario of additional UK unemployment from AI displacement, with the Bank of England committed to stress-test an AI shock; the FPC's April record moves the work from modelled worst case to formal supervisory mandate.

The directive sits alongside the Bessent-Powell emergency convening of 8 April as evidence that AI capability risk has entered financial-stability frameworks on both sides of the Atlantic. The UK is proceeding through institutional channels with a published record and a dated follow-up deliverable; the US convening was ad hoc, with no published readout and no scheduled agency response. Whether Mythos or its successors appear on the formal agenda of the next Financial Stability Oversight Council meeting is the nearest-term test of whether the US is running the same process informally.

For UK financial firms, the likely near-term consequence is a formal data request from the FCA, covering which agentic AI systems are in production, what oversight is in place, and how model failures propagate through payments flows. That supervisory layer does not exist in the US, where the agencies best-placed to build it are the same ones the Hawley-Warner coalition has spent six weeks asking to count AI displacement.

Deep Analysis

In plain English

The Bank of England's Financial Policy Committee (the body that monitors risks to the UK's financial system) has told the Bank and the financial regulator to do more work on the risks of AI acting autonomously in payments and financial markets. Three-quarters of UK financial firms now use AI in some form. The concern is that if these systems all behave similarly in a crisis, they could amplify a market shock faster than regulators or humans can respond.

Root Causes

The 75% deployment figure reflects adoption driven by cost rather than oversight: AI tools reduce operational headcount in compliance, document processing and customer service without requiring material new regulatory approval.

The FPC's concern is that this incremental adoption has created aggregate systemic exposure that no individual firm's risk framework captures: each firm's AI deployment looks small relative to its total operations, while the aggregate across 75% of the sector represents a correlated failure mode.

The assessment that systemic risk from agentic deployment is "likely to increase rapidly" reflects the shift from narrow AI tools to systems capable of executing multi-step decisions in payments and market-making: exactly the capability profile AISI confirmed in Mythos (event index 4). Once agentic systems move from document processing to settlement and execution, the speed at which correlated failures can propagate exceeds any existing circuit-breaker mechanism calibrated to human decision timescales.

First Reported In

Update #6 · Three federal surveys, one 34-to-1 gap

US Treasury · 16 Apr 2026
Different Perspectives
Oxford Economics
Concluded AI's role in recent layoffs is "overstated", finding companies are not replacing workers with AI at scale. Identified slowing growth, weak demand, and cost pressure as the actual drivers.
Ambrish Shah, Systematix Group
Warned AI coding tools will erode Indian IT firms' labour-arbitrage growth model by reducing enterprise dependency on large vendor teams.
South Korean government
Enacted the world's second comprehensive AI law, choosing an innovation-first framework over prescriptive employment protections — a deliberate contrast to the EU's regulatory approach.
Corporate executives executing AI-driven cuts
Frame workforce reductions as existential necessity. Crypto.com CEO Kris Marszalek and Block CEO Jack Dorsey both described AI adoption as a survival imperative, with equity markets reinforcing the message through immediate share-price gains.
Chinese government (Wang Xiaoping)
Positions AI as a job-creation engine to absorb 12.7 million annual graduates and offset 300 million retirements, directly contradicting domestic economist Cai Fang's warning that AI job destruction precedes creation.
Klarna and companies reversing AI cuts
Klarna's public reversal — rehiring the human agents it replaced with AI after customer satisfaction collapsed — validates Gartner's prediction that half of AI-driven service cuts will be undone by 2027.