AI: Jobs, Power & Money
2 May

Four governments, four AI jobs answers

15:17 UTC

The EU mandates pre-deployment conformity assessments. South Korea bets on innovation-first self-governance. The US has a bipartisan reporting bill and a California notice requirement. Four models, no convergence.

Economic · Assessed
Key takeaway

The global regulatory response to AI displacement is fragmenting into four incompatible models — creating compliance costs for responsible firms and arbitrage opportunities for those willing to operate wherever rules are lightest.

Legislators on three continents are writing rules for AI and employment. None of them agree on what the rules should do.

The EU AI Act's high-risk employment provisions take effect in August 2026 [4]. Any company deploying AI in recruitment, performance monitoring, promotion, or termination decisions must conduct a conformity assessment before deployment, maintain documented risk management systems, ensure human oversight, and monitor for discriminatory outcomes. Penalties reach €35 million or 7% of global annual turnover. The framework treats employment AI as a regulated product — analogous to medical devices — subject to pre-market authorisation.

South Korea's AI Basic Act, effective since 22 January, takes the opposite bet. It creates an AI Committee under the Prime Minister's office and establishes transparency principles but imposes no conformity assessments, no mandatory risk documentation, and no pre-deployment oversight. Seoul calculated that EU-style compliance costs would disadvantage Samsung, Naver, and Kakao against Chinese competitors. South Korea ranks among the top five countries for AI patent filings. Its youth unemployment hovers around 7–8%.

The United States has no comprehensive federal framework. Senators Mark Warner and Josh Hawley introduced the AI-Related Job Impacts Clarity Act (S.3108), requiring companies and federal agencies to report AI-related layoffs to the Department of Labor [1]. The bill addresses the measurement vacuum documented by Challenger — only 8% of early-2026 cuts were formally attributed to AI [2].

California introduced SB 951, the Worker Technological Displacement Act: 90 days' advance notice before AI-driven mass layoffs and a state database to track displacement. Block's single-day workforce elimination is precisely the kind of action SB 951 would require three months' notice for. No US jurisdiction currently tracks AI-related job losses systematically.

A regulatory fault line is forming. The EU demands pre-deployment assessment. South Korea relies on post-deployment self-governance. China regulates by application category. The United States has a patchwork of state bills and one bipartisan federal reporting requirement. For multinationals deploying AI across all four jurisdictions, compliance now requires navigating four philosophical approaches to the same technology.

Deep Analysis

In plain English

Two US senators — one Democrat from Virginia, one Republican from Missouri — have jointly proposed a law requiring large companies and federal agencies to report to the government when they cut jobs because of AI. Right now, companies can lay off thousands of workers and describe it as 'restructuring' without specifying AI as the cause. This bill would create a national record of AI-attributed job losses. It would not directly help displaced workers — no mandatory notice, no retraining, no compensation. But it would make the AI washing problem harder to sustain at scale and could provide the data foundation for stronger legislation in future congressional sessions.

Synthesis

The Warner-Hawley alliance is analytically significant beyond the bill's narrow content. Warner's Northern Virginia constituency spans major federal contractors — simultaneously AI-investment beneficiaries and workforces exposed to AI-driven restructuring. Hawley's populist-nationalist brand has converged on opposition to concentrated tech power. Their coalition signals that AI labour displacement is developing the cross-ideological salience that is a prerequisite for the stronger, durable legislation that labour advocates and academic researchers argue will ultimately be necessary.

Root Causes

The bill addresses a specific informational asymmetry: the AI washing problem is currently unverifiable at scale because no mandatory causal attribution requirement exists in layoff reporting. The Department of Labor's existing WARN Act filings capture the fact of mass layoffs but not their stated cause. S.3108 targets this data gap — itself a structural cause of policy paralysis, as legislators cannot design targeted interventions without causal data on which jobs are actually displaced by AI versus conventional restructuring.

Escalation

Bipartisan introduction does not guarantee passage — the bill faces Senate HELP Committee dynamics that are genuinely uncertain, and technology-sector lobbying will resist mandatory disclosure requirements. If March payrolls data produces a second consecutive negative print, political urgency for passage rises materially and the window for stronger provisions may open.

What could happen next?
  • Precedent

    Bipartisan sponsorship signals AI labour displacement has achieved cross-ideological political salience sufficient to sustain legislative attention across election cycles.

    Short term · Assessed
  • Opportunity

    The causal attribution data S.3108 would generate could become the evidentiary foundation for stronger AI labour legislation — job guarantees, retraining mandates, or AI taxation — in subsequent sessions.

    Medium term · Suggested
  • Risk

    A reporting mandate without enforcement or remediation provisions may normalise AI-driven displacement by producing official counts without policy response, legitimising rather than limiting the practice.

    Medium term · Suggested
  • Meaning

    The bill's limited scope — reporting only, no notice or remediation — reflects the current political ceiling for AI labour legislation in the United States.

    Immediate · Assessed
First Reported In

Update #1 · Meta cuts 20% while Big Tech spends $650bn

Fortune · 17 Mar 2026
Causes and effects
This Event
Four governments, four AI jobs answers
For the first time, four major jurisdictions are simultaneously legislating AI employment rules from fundamentally different premises. The EU treats AI as a regulated product. South Korea treats it as an economic growth engine. The US treats it as a reporting problem. China treats it case by case. The divergence shapes where companies locate AI operations and which workers receive protection.
Different Perspectives
UK financial regulators (BoE FPC / FCA)
The Bank of England's April FPC directive on agentic AI in payments was scoped around one frontier model; AISI confirmed a second model cleared the same 32-step threshold on 1 May. The supervisory architecture is one model behind the capability it was built to contain.
Indian IT sector workers (TCS, Infosys, Wipro)
TCS posted its first annual revenue decline in the modern era, Infosys shed 8,400 workers in a quarter, and Wipro hit its zero-fresher target. Western Big Tech's AI automation is cannibalising the offshored-services model that employs roughly five million Indian IT workers.
Chinese workers (Hangzhou and Beijing plaintiffs)
Workers Zhou and Liu won cases that established a two-court doctrinal chain: AI adoption is the employer's deliberate strategy, placing the cost of displacement on the employer rather than the worker. Any Chinese employee facing AI-driven dismissal now has a citable legal route that American, British, and European counterparts do not.
Chinese government, courts, and domestic employers
The Hangzhou rulings were released on Workers' Day eve alongside the Ministry of Human Resources' recognition of 42 new AI occupations. Domestic firms now face mandatory retraining obligations; the Orgvue estimate of 8–14 months added to displacement timelines will feature in employer compliance briefings throughout 2026.
EU regulators and European Parliament
The second Digital Omnibus trilogue collapsed without agreement on 28 April; the third is scheduled for 13 May with the binding employer AI-literacy obligation still contested. Brussels is arguing over a non-binding encouragement clause while Beijing's courts have already bound employers.
US legislators (Warner, Rounds, Hawley, Sanders)
Warner and Rounds produced the Economy of the Future Commission Act, the most concrete federal vehicle still moving, endorsed by the companies it would notionally regulate. The Sanders-AOC moratorium was killed by Democratic senators; the Hawley-Warner disclosure bill remains in committee with no floor date.