AI: Jobs, Power & Money
16 April

Three federal surveys, one 34-to-1 gap

18 min read
13:29 UTC

Three federal agencies produced AI adoption rates of 18%, 41% and 78% for the same quarter; the **Bureau of Labor Statistics (BLS)** skipped its scheduled GenAI publication on 14 April, and the **Federal Reserve Bank of New York** filled the gap with a survey showing 62% of workers expect AI-driven unemployment within twelve months. The Stanford Digital Economy Lab calculates the real AI labour impact at roughly 34 times the **Challenger, Gray & Christmas** declared tally. The industry doing the displacing has raised more than $125 million to defeat pro-regulation candidates in the 2026 midterms.

Key takeaway

Declared AI cuts are one-thirty-fourth of the real impact, and the measurement system to prove it is being pre-empted electorally.

In summary

Three US federal surveys produced AI adoption rates of 18%, 41% and 78% for the same late-2025 quarter, and Stanford's JOLTS analysis put the real AI labour impact at 34 times the declared layoff count. The Bureau of Labor Statistics skipped its scheduled GenAI publication on 14 April; the Federal Reserve Bank of New York stepped in the same day, finding 62% of American workers expect AI to raise unemployment within twelve months. A super PAC backed by OpenAI president Greg Brockman and Andreessen Horowitz has raised $125 million specifically to defeat the senators who could compel that data to be produced.


The Federal Reserve Board published a reconciliation paper on 3 April comparing three federal instruments that describe the same late-2025 economy as 18%, 41% or 78% AI-adopted. The gap is structural.

Sources profile: This story draws on neutral-leaning sources

The Federal Reserve Board published a survey reconciliation paper on 3 April 2026, authored by staff economist Jeffrey S. Allen for the FEDS Notes series, comparing three separate federal instruments that should describe the same US economy and do not. The Business Trends and Outlook Survey (BTOS), run by the Census Bureau on a firm-weighted basis, put AI adoption at 18% for late 2025. The Research on Practices Survey (RPS), which asks individuals whether they personally use AI at work, returned 41%. The Survey of Business Uncertainty (SBU), which weights by how many workers are employed at AI-using firms, came in at 78%. Daily AI use across the US workforce sits at 12%; weekly use at 35.2%.

The three figures measure different things: the share of firms using AI, the share of workers personally using AI, and the share of the workforce employed at firms that have adopted it. The Fed paper's point is not that the surveys are faulty; it is that no federal agency has formally chosen between firm-weighted, individual-weighted and employment-weighted units, so the same quarter can be described as roughly one-in-five, two-in-five or four-in-five AI-adopted depending on which federal instrument is cited.

Forty-six days earlier, the bipartisan nine-senator coalition led by Josh Hawley and Mark Warner had written to the Department of Labor and the Bureau of Labor Statistics urging expanded AI workforce data collection. The Fed Board's reconciliation is effectively the federal answer: there is no single figure, and none of the three existing instruments will produce one until an agency chooses a canonical unit. The BLS itself has so far chosen none, skipping a separately scheduled GenAI workplace publication eleven days later.

In practice, private datasets set the headline number by default. Challenger, Gray & Christmas counts announced AI layoffs; Stanford's JOLTS-based analysis puts the real labour impact at a far higher multiple; Goldman Sachs's earlier monthly substitution model sat between the two. None of those are federal. None can be legislated against without a statutory benchmark that the reconciliation paper has just confirmed does not exist.

The 4.3x divergence is also the quantitative foundation for the Hawley-Warner demand: a letter asking for one number has been met by a paper documenting that the government currently produces three, none of them designated as authoritative. Whether the BLS is resourced to build a fourth, harmonised instrument, or whether the NY Fed's Survey of Consumer Expectations becomes the de facto federal measure by attrition, is likely the first concrete thing the next US Congress will have to decide on AI workforce policy.

Explore the full analysis →

Erik Brynjolfsson's Stanford Digital Economy Lab applied the JOLTS hiring rate to the nonfarm workforce and found roughly a million annual hires are not happening. Against the declared AI layoff count, the ratio is 34 to 1.

Sources profile: This story draws on neutral-leaning sources

The Stanford Digital Economy Lab, led by economist Erik Brynjolfsson, published a JOLTS-based analysis on 10 April 2026 concluding that AI is preventing roughly 950,000 to 1 million American hires per year against the 2023 pace. The Job Openings and Labor Turnover Survey hiring rate fell to 3.1% in February 2026, the lowest reading since April 2020. Applied to the 158.6 million nonfarm workforce, the 0.6 percentage-point gap against the 2023 baseline produces an annualised shortfall of roughly one million hires. Against Challenger, Gray & Christmas's cumulative AI-attributed layoff tally of 27,645 through March, the ratio is approximately 34 to 1.
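Taking the article's own framing, in which the 0.6-point hiring-rate gap against the 2023 baseline maps to an annualised shortfall, the 34-to-1 figure reproduces directly from the numbers quoted above:

```python
# Reproducing the briefing's 34-to-1 arithmetic; all inputs are the
# article's quoted figures, not fresh data.
workforce = 158.6e6          # nonfarm workforce
hiring_gap = 0.6 / 100       # hiring-rate shortfall vs the 2023 baseline
declared_layoffs = 27_645    # Challenger AI-attributed tally through March

annual_shortfall = workforce * hiring_gap      # hires not happening
ratio = annual_shortfall / declared_layoffs

print(f"annualised hiring shortfall: {annual_shortfall:,.0f}")
print(f"ratio to declared layoffs:   {ratio:.0f} to 1")
```

The shortfall works out to roughly 951,600 hires, consistent with the 950,000-to-1-million range Stanford reports, and the ratio to the Challenger tally rounds to 34.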

JOLTS is a Bureau of Labor Statistics monthly survey of job openings, hires and separations; its hiring rate measures how many workers were hired in a month as a share of total employment. Stanford's reading is that the rate has collapsed not because firms are cutting declared roles but because they are quietly choosing not to replace departing workers and not to open new entry-level requisitions. That is the channel through which most AI displacement actually runs, and it is the channel to which Washington's primary labour instrument, unemployment-insurance claims, is deliberately blind.

Workers aged 22 to 25 in AI-exposed occupations have seen a 16% employment decline since late 2022, while colleagues over 30 in the same occupations are up between 6% and 12%. That age profile is the strongest evidence that AI is the mechanism rather than interest rates or cyclical slack. Young software developers sit 20% below their 2022 peak. The age asymmetry matches the SSRN large-scale resume study showing entry-level postings at AI-adopting firms fell sharply, and the Fortune/Columbia finding that most unemployed Americans never file for benefits. Goldman Sachs's own 25,000-per-month substitution model priced the unannounced displacement at roughly three times the Challenger count; Stanford moves that multiple to 34.

Announced layoffs drive the headline count that regulators and Congress respond to; Brynjolfsson's analysis argues the response has been calibrated to a number that captures one thirty-fourth of the real impact. Hires-not-made do not trigger WARN Act filings, do not register as unemployment claims, and do not appear in any official federal AI workforce dataset. They surface years later as cohort scarring, when the young workers who never entered the pipeline emerge as the mid-career shortage the 1980s manufacturing automation literature documented.

The Stanford figure is an analytical derivation rather than a direct measurement, and it is sensitive to the 2023 baseline assumption; Brynjolfsson's causal inference rests on the occupation-by-age asymmetry, which is hard to explain through general macro channels. Even if the true multiple turns out to be half or double Brynjolfsson's figure, the order-of-magnitude point stands: the declared number on which policy has relied since 2023 does not come close to describing the labour-market impact of AI deployment.

Explore the full analysis →

The Bureau of Labor Statistics skipped its scheduled GenAI workplace paper on 14 April. The Federal Reserve Bank of New York published its own survey the same day, finding 62% of American workers now expect AI to raise unemployment within twelve months.

Sources profile: This story draws on neutral-leaning sources

The Bureau of Labor Statistics had scheduled a dedicated Generative AI workplace publication for 14 April 2026; the date came and went without a release or a public explanation. The same day, the Federal Reserve Bank of New York published a paper on its Liberty Street Economics blog drawing on the Survey of Consumer Expectations (SCE), the NY Fed's monthly household survey covering labour market, credit and income conditions. The SCE found 39% of employed Americans use AI tools at work, 62% expect AI to increase unemployment within 12 months, and AI access is stratified 4.2x by income (66.3% of workers earning over $200,000 versus 15.9% earning under $50,000) and 2.6x by education.

Only 15.9% of US employers offer any AI training; 11% actively prohibit AI tool use on the job. Between those two numbers sits an employer majority that neither bans the tools nor offers anything, leaving a small minority running formal training programmes. Workers are pricing that access directly: those without training say they would accept an 11.4% salary cut to join a job that provides it, while those in AI-trained roles demand a 24.2% premium to move to one without. The training stratification, documented alongside the 4.3x Fed Board adoption spread, turns AI access into a priced labour-market benefit without any official federal instrument tracking its distribution.
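A quick sketch confirms the multiples as quoted; the final asymmetry line is derived here from the two wage figures rather than reported directly in the survey.

```python
# Sanity-checking the stratification multiples quoted above.
# All inputs are the article's SCE figures, not fresh data.
high_income_use, low_income_use = 0.663, 0.159   # AI use, >$200k vs <$50k earners
income_multiple = high_income_use / low_income_use

accept_cut, demand_premium = 0.114, 0.242        # wage pricing of AI training
asymmetry = demand_premium / accept_cut          # derived, not a survey figure

print(f"income stratification:            {income_multiple:.1f}x")
print(f"premium demanded vs cut accepted: {asymmetry:.1f}x")
```

The income ratio comes out at 4.2x as reported; workers who already have AI-trained roles price keeping that access at roughly twice what untrained workers would pay to gain it.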

The Hawley-Warner coalition had written in March specifically to ask the BLS to build that instrument. The BLS was scheduled to respond on 14 April with a dedicated GenAI paper; instead the Federal Reserve Bank of New York provided the day's federal signal, drawing on worker self-reports rather than employer filings. The practical effect is that the de facto US AI workforce measure has moved from Department of Labor establishment data to a regional reserve bank's household survey. The SCE's 62% unemployment-expectation figure is the first time a federal instrument has captured worker sentiment on AI displacement at nationally representative scale.

Whether the BLS reschedules its paper before May or lets the NY Fed SCE become the standing measure is the decision the Hawley-Warner letter was designed to force. It is also the decision the Leading the Future super PAC fundraising surge is designed to neutralise at the legislative level, by targeting senators who could fund better BLS data collection. On the scheduled publication date, the worker-reported figure was the only one on offer; the agency meant to produce the employer-reported equivalent was absent from its own schedule.

Explore the full analysis →

Leading the Future, a super PAC backed by OpenAI president Greg Brockman, Andreessen Horowitz and Palantir co-founder Joe Lonsdale, has raised more than $125 million to defeat pro-regulation candidates in the 2026 US midterms.

A super PAC called Leading the Future, backed by OpenAI President Greg Brockman, venture firm Andreessen Horowitz and Palantir co-founder Joe Lonsdale, has raised more than $125 million to defeat pro-regulation candidates in both parties in the 2026 US midterms, Axios reported on 14 April 2026. Total AI industry midterm spending is approaching $150 million. According to Axios, the fundraising surge came in the ten days after the bipartisan nine-senator Hawley-Warner letter became public, the same ten days in which the Sanders-Ocasio-Cortez AI moratorium was killed by Democratic senators Fetterman and Warner. GZERO Media polling over the same period records 63% of Americans expecting AI to reduce employment, against 26% viewing AI positively.

This story has its primary home in the us-midterms-2026 topic; its relevance to the AI-jobs beat is narrower but direct. The political feedback loop this briefing has tracked, in which measurable displacement generates fiscal distress, which produces political pressure, which eventually yields regulation, is being short-circuited at the electoral stage, not the legislative one. The PAC's targeting logic is the reason: it is aimed at legislators working on workforce disclosure bills, not at moratorium sponsors who already lack a floor majority.

The Sanders moratorium was killed by Democratic defectors; the measurement-focused Hawley-Warner coalition has survived because agencies can act on it without legislation, which is also why the PAC needs to compete for seats held by legislators ready to fund better federal labour data. The public-opinion split on AI employment is the electoral floor the money is working against; within that, 50% of self-identified Republicans say they are concerned about AI, matching Democratic sentiment. The spending targets primary elections, where single-issue campaigns have historically been most effective at disciplining individual incumbents before a race polarises.

For the AI-jobs audience, the implication is operational rather than partisan. The Federal Reserve Board's reconciliation paper documented the measurement failure that the Hawley-Warner letter asked the BLS to close. A super PAC now exists to defeat the legislators who could fund the closure. If the PAC succeeds on even a third of its target list, the NY Fed Survey of Consumer Expectations and private datasets from Challenger, Gray & Christmas, Stanford Digital Economy Lab and Goldman Sachs are likely to remain the de facto federal figures through the next Congress. The displacement the data does not measure would then accumulate against a legislative class elected partly on the premise that it should not be measured.

Explore the full analysis →
Sources: Challenger, Gray & Christmas
Briefing analysis
What does it mean?

The measurement gap is no longer an administrative inconvenience: it is the primary strategic asset of the industry doing the displacing. Three conflicting federal surveys, a skipped BLS publication and a $150 million electoral war chest together ensure that no statutory displacement baseline will exist before the 2026 midterms. The Stanford 34:1 ratio and the NY Fed's 62% unemployment-expectation figure are the closest thing to ground truth currently available, and neither carries the legal authority needed to trigger policy.

Watch for
  • whether the BLS reschedules its GenAI paper before the NY Fed SCE becomes the standing federal measure
  • the Oracle WARN Act 60-day deadline around 30 May, and whether Massachusetts produces a filing
  • the 28 April EU Digital Omnibus trilogue verdict on the employer AI literacy clause
  • the first named target list from Leading the Future's $125 million super PAC

The UK AI Security Institute's independent evaluation of Claude Mythos Preview found no single-task superiority over rival models, but confirmed a genuine autonomous capability: a 32-step attack chain equivalent to 20 hours of trained-human work.

Sources profile: This story draws on neutral-leaning sources

The UK AI Security Institute (AISI) published an independent evaluation of Anthropic's Claude Mythos Preview on 15 April 2026. On isolated capture-the-flag (CTF) tasks, Mythos scored above 85%, but rival frontier models (GPT-5.4, Claude Opus 4.6 and Codex 5.3) fell within 5 to 10 percentage points, leaving Mythos with no single-task superiority. In AISI's 32-step "The Last Ones" benchmark, however, Mythos autonomously completed a sequence the Institute estimates would take a trained human roughly 20 hours, without human prompting between steps.

AISI is the UK government body established to evaluate the safety of frontier AI models; its evaluation is the first external assessment of Mythos since Anthropic distributed restricted access to twelve founding partners under Project Glasswing on 8 April. Anthropic's marketing had emphasised thousands of zero-day vulnerabilities discovered by the model; Tom's Hardware on 9 April reported those claims rested on only 198 manual reviews. AISI's CTF findings partly vindicate that critique: Mythos is not dramatically more capable than competitors at short, bounded tasks.

The attack-chaining result is the capability that matters. Sustained autonomous execution over 32 steps and roughly 20 hours is the operational profile a trained human analyst, paralegal or junior engineer currently provides inside a bank, law firm or software team. It is also the profile that the Scott Bessent and Jerome Powell emergency convening of Wall Street CEOs at Treasury on 8 April was called to assess. Treasury and the Fed convened promptly on a capability that federal agencies could not themselves verify; AISI's 20-hour-human-equivalent figure is the first external confirmation that the convening was warranted on substance.

For the workforce implication, the relevant dimension is not Mythos's cybersecurity reach but its ability to replace trained-human throughput at chain-of-task scale. That capability is what JPMorgan CEO Jamie Dimon described in February when he told the bank's investor meeting that AI has led to internal redeployment, covered elsewhere in this update. Every original Glasswing partner, and the additional five named in Anthropic's 7 April system card, will have to integrate the attack-chain profile into internal risk frameworks during live deployment.

The evaluation was accessed via a third-party summary from Results Sense rather than AISI's primary publication, so specific scores should be verified against the Institute's direct release when it becomes available. The methodology point, however, is solidly established: Mythos's material advantage is durability, not speed, and durability is the AI capability that most directly substitutes for salaried human labour.

Explore the full analysis →

The Islamic Revolutionary Guard Corps named Stargate UAE, the $500 billion OpenAI, SoftBank and Oracle joint venture, in a 1 April military targeting video. Amazon Web Services subsequently declared hard-down status for multiple zones after Iranian strikes on Bahrain and Dubai.

The Islamic Revolutionary Guard Corps (IRGC) named Stargate UAE, the $500 billion OpenAI, SoftBank and Oracle joint venture, in a 1 April 2026 military targeting video, the Iranian spokesperson adding the phrase "Nothing stays hidden to our sight". Amazon Web Services (AWS) subsequently declared "hard down status for multiple zones" after Iranian missile strikes on infrastructure in Bahrain and Dubai. Oracle's Dubai data centre had been struck in an earlier round. Nvidia and Apple were named in the same IRGC video.

The event sits inside the Iran-conflict-2026 story as a military and diplomatic matter; its relevance on this beat is that the capex displacing US workers now includes physical assets in an active war zone. Iran's earlier targeting video had named a broader group of US tech firms; the IRGC's April escalation was the first to single out AI infrastructure specifically. For the workforce angle, the linkage is financial: Oracle funded its $156 billion data centre programme partly through its late-March workforce cuts, a round whose large Indian cohort was notified by 6am email.

The capital that freed those jobs is now buying concrete and silicon that Iranian missiles are trying to hit. Goldman Sachs has calculated that data centre electricity demand adds roughly 0.1 percentage points to core US inflation in 2026 and 2027; a Strait of Hormuz closure with the associated gas-price spike would amplify the figure. The workers displaced to finance the capex absorb the inflation the capex generates, and now carry a physical war-risk exposure on the assets their severance helped build.

The Glasswing partner list includes Oracle alongside JPMorgan, Goldman Sachs and AWS; the same institution therefore holds privileged access to the restricted frontier model that triggered the Bessent-Powell convening, and has physical assets under active IRGC threat funded by its own workforce cuts. That triple exposure (capital, capability and war risk) is the compressed version of the feedback loop this topic has been tracking. The precedent Iran has now set, that state militaries will treat AI capex as strategic infrastructure worth targeting, is likely to raise insurance and reinsurance pricing on Gulf data centre assets through 2026 and 2027, and to change the geographic distribution of future projects.

Explore the full analysis →
Sources: Anthropic

Anthropic's 244-page Alignment Risk Update for Claude Mythos Preview abandoned the AI Safety Level capability threshold framework for autonomy-focused threat models, and added Broadcom, CrowdStrike, Nvidia, Palo Alto Networks and Cisco to the Glasswing partner list.

Sources profile: This story draws on centre-left-leaning sources from the United States

Anthropic published a 244-page Alignment Risk Update for Claude Mythos Preview on 7 April 2026, formally abandoning its AI Safety Level (ASL) capability-threshold framework in favour of autonomy-focused threat models. The same update expanded Project Glasswing to add Broadcom, CrowdStrike, Nvidia, Palo Alto Networks and Cisco alongside the original twelve founding partners announced on 8 April. The Glasswing Programme is backed by $100 million in model usage credits, distributing restricted Mythos access to selected partner organisations under coordinated-disclosure terms.

The ASL framework classified risk by capability thresholds: a model crossed a line when it demonstrated a specified skill, and escalating mitigations followed. Its autonomy-focused replacement measures risk by sustained multi-step execution, aligning with the attack-chaining dimension AISI separately confirmed. All Glasswing partners therefore have to rewrite the internal risk frameworks they were running under ASL, mid-deployment, during live coordinated disclosure.

The update discloses that over 99% of the vulnerabilities Mythos discovered during its vulnerability research programme remain unpatched, with coordinated disclosure still in progress. For the Glasswing partners, that means the security posture of the operating systems and browsers their staff use daily is currently weaker than it was before Mythos began running, because Mythos has a list of undisclosed paths into software they all depend on. CrowdStrike and Palo Alto Networks, newly added as of 7 April, are among the security vendors most directly affected by that exposure.

The methodology shift also changes what frontier AI risk governance looks like. Capability thresholds produced discrete pass/fail tests that could be regulated; autonomy thresholds require ongoing observation of how a model behaves across time and tasks, which is closer to financial-market supervision than to product certification. The Bank of England's April directive to the FCA on agentic AI in payments, carried elsewhere in this update, proceeds from the same premise.

Explore the full analysis →
Sources: Axios · Challenger, Gray & Christmas
Causes and effects
Why is this happening?

The 34:1 ratio between hires-not-made and declared layoffs reflects two structural asymmetries. The first is disclosure: the WARN Act requires firms to report terminations above a threshold but has no mechanism to compel disclosure of hiring pauses or reductions, so the dominant AI displacement channel generates no mandatory data trail.

The second is age concentration: workers aged 22 to 25 in AI-exposed occupations show a 16% employment decline since late 2022, because entry-level positions are the most fungible and the first cut. Mid-career and senior roles require contextual judgment current models cannot reliably replicate, meaning the impact accumulates at the bottom of the talent pipeline before it becomes visible in aggregate payroll data.

The Bank of England Financial Policy Committee asked the BoE and the Financial Conduct Authority for further work on agentic AI in payments and financial markets, noting that three-quarters of UK financial firms now deploy AI.

Sources profile: This story draws on neutral-leaning sources

The Bank of England's Financial Policy Committee (FPC), in the record of its April 2026 meeting published on 10 April, directed the Bank of England and the Financial Conduct Authority (FCA) to undertake further work on agentic AI use in payments and financial markets. The FPC noted that 75% of UK financial firms now deploy AI and assessed that the systemic risk from agentic deployment is "likely to increase rapidly". The HM Treasury Committee has separately called for AI-specific stress tests and clearer FCA guidance by end of 2026.

Agentic AI is the class of system able to take sustained autonomous action across multiple steps, the operational profile AISI separately evaluated on Mythos. The FPC's directive treats that capability as a financial stability concern rather than a product feature, and requires the FCA to develop supervisory tools calibrated to it. The OBR had already modelled a worst-case scenario of additional UK unemployment from AI displacement, with the Bank of England committed to stress-test an AI shock; the FPC's April record moves the work from modelled worst case to formal supervisory mandate.

The directive sits alongside the Bessent-Powell emergency convening of 8 April as evidence that AI capability risk has entered financial-stability frameworks on both sides of the Atlantic. The UK is proceeding through institutional channels with a published record and a dated follow-up deliverable; the US convening was ad hoc, with no published readout and no scheduled agency response. Whether Mythos or its successors appear on the formal agenda of the next Financial Stability Oversight Council meeting is the nearest-term test of whether the US is running the same process informally.

For UK financial firms, the likely near-term consequence is a formal data request from the FCA, covering which agentic AI systems are in production, what oversight is in place, and how model failures propagate through payments flows. That supervisory layer does not exist in the US, where the agencies best-placed to build it are the same ones the Hawley-Warner coalition has spent six weeks asking to count AI displacement.

Explore the full analysis →
Sources: US Treasury

The Office for National Statistics reported UK vacancies unchanged at 721,000 for a sixth consecutive publication, payrolled employment down 96,000 year-on-year, and real wage growth at 0.4%.

Sources profile: This story draws on neutral-leaning sources

The Office for National Statistics (ONS) published its March 2026 UK labour market overview on 10 April, recording vacancies at 721,000 for a sixth consecutive publication, payrolled employees down 96,000 year-on-year, unemployment at 5.2%, and real wage growth of 0.4% against 3.8% nominal. UK vacancies had already been at this level when this beat first covered the ONS data; the figure has not moved in half a year.

A static vacancy stock alongside falling payrolled employment indicates structural stasis, not recovery. Morgan Stanley's January research found UK firms suffered an 8% net AI job loss, roughly double the international average; the ONS data is consistent with that picture, in which firms are neither hiring to fill declared vacancies nor cutting declared roles but quietly letting attrition run without replacement, the same pattern Stanford's JOLTS analysis identified in the US. Real wage growth of 0.4% is the binding tension: nominal earnings are rising, but CPIH inflation absorbs most of the increase, leaving households effectively flat into mid-2026.

The ONS has no AI-specific breakdown of its labour market data. Britain's statistical agency is therefore not measuring the mechanism the Bank of England has now classified as systemically risky for its financial sector. The Office for Budget Responsibility's earlier unemployment worst-case was modelled against a measurement stack that cannot distinguish AI-driven non-hiring from ordinary demand weakness. Until the ONS chooses to disaggregate, the UK policy debate on AI employment will be conducted against a vacancy number that has not shifted since autumn and a displacement channel the agency does not directly observe.

Explore the full analysis →
Sources: US Treasury

The EU Digital Omnibus second trilogue is scheduled for 28 April. The employer AI literacy obligation stripped by Parliament on 26 March remains contested entering negotiations.

Sources profile: This story draws on neutral-leaning sources

The European Commission's Digital Omnibus second trilogue is scheduled for 28 April 2026, twelve days from publication. The employer AI literacy obligation, which requires firms deploying AI on or alongside staff to ensure workers understand how the systems operate, was stripped from the text by Parliament on 26 March and remains contested entering the negotiations. The Cypriot Presidency is aiming for political agreement by May, with Official Journal publication targeted for July. The high-risk employment deadline under the AI Act is fixed at 2 December 2027.

The trilogue continues directly from the 1 April first trilogue, which opened with the literacy clause in dispute between Parliament and Council. If the clause survives, European workers will receive a statutory right to understand how AI is being deployed on them, which in practice means firms will need documentation, explanation and appeal pathways within twenty months. The clause's removal would leave the AI Act's high-risk employment deadline in place without the worker-facing literacy layer Parliament had built around it.

The divergence from the US position is the most concrete regulatory split on this topic. Washington's only federal measurement effort in the same period was the Hawley-Warner data-collection letter to agencies that have just proved they cannot agree on a baseline figure. Brussels is negotiating a binding statutory right, Washington is negotiating whether the question can be answered. The outcome of the 28 April trilogue is therefore the nearest-term variable on whether the two jurisdictions are likely to share a common AI workforce policy framework by the end of 2027 or proceed under materially different rules.

Explore the full analysis →

Oracle's Massachusetts WARN Act filing remains absent as of 13 April. The 60-day clock from the 31 March cuts expires around 30 May; Burlington offices remain unrepresented in any state disclosure.

Sources profile: This story draws on neutral-leaning sources

Oracle's Massachusetts WARN Act filing remained absent as of 13 April 2026, covering the Burlington offices affected by the late-March round of cuts. The 60-day clock from those terminations expires around the end of May; law firms are still investigating potential violations. Prior Oracle filings covered Washington state (491 positions) and Missouri (539), together representing fewer than 4% of the affected workforce.

The Worker Adjustment and Retraining Notification Act requires employers of 100 or more workers to give 60 calendar days' notice of mass layoffs at a single site; the filings are public record and enforceable via state labour departments. The Massachusetts gap is therefore a compliance decision, not an oversight: Oracle has filed where required and has not filed where the calculus of enforcement risk against disclosure cost points the other way. The pattern follows the New York precedent, where the state's own WARN Act captured zero AI attributions from 162 companies covering 28,300 workers in the law's first year.

If Burlington produces no filing before the deadline, the operational template for large-scale US AI-driven workforce reductions becomes clear: concentrate cuts offshore where no WARN equivalent applies, file minimally in small US jurisdictions where state-level triggers are unambiguous, and avoid filing in larger states where the federal single-site test can be contested. Oracle's India terminations demonstrated the offshore concentration; the sub-four-percent US filing rate demonstrates the minimal compliance layer. Massachusetts is the remaining test of whether the template extends to states with active labour departments and law-firm scrutiny.

Explore the full analysis →

JPMorgan Chase CEO Jamie Dimon confirmed at the bank's February investor meeting that it has "displaced people from AI" and offers those workers other jobs. The bank has committed $600 million annually to retraining and tied engineer performance reviews to AI tool adoption across 65,000 staff.

Jamie Dimon, JPMorgan Chase CEO, told the bank's February 2026 investor meeting that the bank has "displaced people from AI and we offer them other jobs", CNBC reported. JPMorgan committed $600 million annually to retraining and, according to CNBC, tied engineer performance reviews to AI tool adoption across 65,000 staff. MetaIntro research circulating in the same period found that only 6% of companies globally are actually reskilling workers for AI, making JPMorgan's commitment a statistical outlier rather than the industry norm.

JPMorgan is one of the original twelve Project Glasswing partners with privileged access to Claude Mythos Preview, meaning the same institution is hearing the concerns raised at Treasury over the model's autonomous capabilities and is internally deploying AI in ways that reduce its own headcount. Dimon's admission is the sharpest corporate acknowledgement on this beat of the contradiction this topic has been tracking: major Glasswing institutions hold restricted access to a frontier model that AISI has now evaluated as genuinely capable of sustained autonomous execution, while publicly disclosing that the same broad capability is displacing their own junior staff. JPMorgan's retraining commitment is the corporate outlier that follows from having that access; the overwhelming majority of companies globally have neither the Glasswing seat nor an equivalent retraining programme.

For AI-jobs policy, the JPMorgan case is diagnostic rather than representative. A bank at the capability frontier, willing to publicly confirm internal displacement and resource a retraining response, demonstrates that the displacement is real at an institution with every incentive to downplay it. The Office for National Statistics vacancy data, the Bureau of Labor Statistics scheduling gap and the Stanford JOLTS ratio all point to the same phenomenon at population scale; Dimon's investor-meeting line is the rare corporate acknowledgement that aligns with the measurement picture federal data has so far failed to produce.

Explore the full analysis →

Watch For

  • Whether the BLS reschedules and publishes its GenAI workplace research before May, or lets the NY Fed's SCE become the de facto federal measure.
  • The outcome of the EU Digital Omnibus second trilogue on 28 April, specifically whether the employer AI literacy obligation survives Council negotiation.
  • Oracle WARN Act filings at the 60-day mark around 30 May, particularly whether Massachusetts produces a filing for the Burlington offices.
  • Whether the NY Fed Survey of Consumer Expectations May release shows the 62% AI-unemployment expectation moving, up or down.
  • The first ad buy and named target list from the Leading the Future super PAC, and which pro-measurement senators it prioritises.
Closing comments

The situation is escalating on three axes simultaneously. Politically, $150 million in PAC spending is targeting the measurement caucus before the 2026 midterms, compressing the legislative window for statutory disclosure requirements. Militarily, the IRGC's explicit naming of Gulf AI infrastructure as a target introduces uninsurable war risk onto the balance sheets of Oracle, AWS and OpenAI. Institutionally, the abandonment of AI Safety Level thresholds in favour of autonomy-focused risk models removes the externally verifiable compliance architecture at the moment when attack-chaining capability has been independently confirmed by a state evaluator.

Different Perspectives
Hawley-Warner Measurement Coalition (US Senate, bipartisan)
The nine-senator coalition wrote to DOL and BLS in March demanding a canonical AI workforce figure; the Fed's 3 April reconciliation paper documented three incompatible answers as the federal response. Their political leverage now depends on the BLS rescheduling its skipped 14 April paper, against a $150 million PAC targeting the legislators who could fund that work.
Bank of England Financial Policy Committee
The FPC's April 2026 record classified agentic AI risk in financial markets as likely to increase rapidly and directed both the Bank and the FCA to undertake further work, with a deliverable due by the end of 2026. Three-quarters of UK financial firms already deploy AI, creating correlated failure modes no individual-firm stress test currently captures.
European Trade Union Institute and BusinessEurope (EU Digital Omnibus)
ETUI backs the employer AI literacy obligation as the minimum transparency right consistent with GDPR Article 22; BusinessEurope argues the obligation falls disproportionately on SMEs that deploy commercial AI without control over model architecture and should attach to vendors rather than employer-deployers. The 28 April trilogue outcome will determine which framing prevails.
China Ministry of Human Resources and Social Security
China's MOHRSS has defined 42 new AI-related occupations and embedded AI adoption within five-year labour planning, treating workforce transition as a state planning variable rather than a disclosure obligation. This approach produces no measurement gap equivalent to the US 18%-to-78% spread and no trilogue attrition equivalent to the EU, but depends on state-directed allocation that Western labour markets cannot replicate.
SoftBank / Masayoshi Son (Stargate UAE capex rationale)
As co-anchor of the $500 billion Stargate UAE joint venture alongside OpenAI and Oracle, SoftBank's position is that Gulf AI infrastructure capex is the generational bet on sovereign AI capability, pursued despite the IRGC's explicit targeting. The 31 March Oracle workforce cuts that freed capex for the programme now carry physical war risk that standard commercial insurance excludes.
Oracle India workforce (12,000 terminated by 6am email)
Oracle's approximately 12,000 India staff, terminated by a 6am email on 31 March and representing 40% of the company's largest non-US workforce, sit outside every US disclosure mechanism: no WARN Act filing, no BLS payroll record, no Challenger count. They represent the largest single national cohort of the AI-funded cut and the one most structurally invisible to Western policy frameworks.