European Tech Sovereignty
7 May

AI Office gains enforcement powers in August

3 min read
10:13 UTC

The EU AI Act's fine authority activates on 2 August 2026, giving Brussels a new instrument against general-purpose AI providers with penalties reaching 3% of global turnover.

Technology · Developing
Key takeaway

EU AI Act enforcement activates in August 2026 with fines up to 3% of global turnover for AI providers.

The EU AI Act's AI Office gains full enforcement powers over general-purpose AI model providers on 2 August 2026, now under three months away. The fine ceiling is €15m or 3% of global annual turnover, whichever is higher. For OpenAI, with roughly $3.5bn in 2024 revenue, a single enforcement action could exceed €100m.

Companies that placed general-purpose AI models on the EU market after August 2025 must already comply with the GPAI Code of Practice. Models placed before that date have until August 2027. The AI Office has stated it will adopt a "collaborative, risk-based" approach initially, which likely means formal enforcement actions will not land before 2027. But the legal authority will exist from August, and the fine ceilings are large enough to change corporate behaviour even without an action being filed.

The enforcement framework creates an asymmetry that benefits European AI companies. Mistral and Aleph Alpha have been engaging with the AI Office since the regulation was drafted and have shaped their models around its requirements. US providers face a compliance burden designed around European values and regulatory traditions that do not map neatly onto their existing governance structures. The practical question is whether the AI Office has the technical capacity to assess general-purpose model compliance at the level of detail the regulation demands. The office is still hiring specialist staff.

Deep Analysis

In plain English

The EU AI Act is Europe's comprehensive law regulating artificial intelligence. Different types of AI systems face different rules, from a total ban on the most dangerous applications to light-touch rules for low-risk tools. On 2 August 2026, the AI Act gives the EU's new AI Office the power to fine companies that make or distribute general-purpose AI models, the foundational technology behind systems like ChatGPT, Gemini, and Claude. These are called GPAI (General-Purpose AI) models. The fines can be up to €15 million or 3% of a company's global annual revenue, whichever is higher. For OpenAI, which earned roughly $3.5 billion in revenue in 2024, that could mean a fine exceeding €100 million for a single violation. Companies that started offering GPAI models after August 2025 already have to comply with a Code of Practice; companies whose models existed before that date have until August 2027. The AI Office has said it will start carefully and collaboratively rather than immediately issuing large fines, but the powers will exist from August 2026.
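The fine-ceiling arithmetic above can be sketched in a few lines. This is purely illustrative: the function name is ours, and the example turnover figure (roughly €3.2bn, an assumed euro conversion of OpenAI's ~$3.5bn 2024 revenue) is an estimate, not a regulatory determination.

```python
def gpai_fine_ceiling(global_turnover_eur: float) -> float:
    """Maximum AI Act fine for a GPAI provider, in euros:
    the higher of EUR 15 million or 3% of global annual turnover."""
    return max(15_000_000, 0.03 * global_turnover_eur)

# A provider with ~EUR 3.2bn global turnover: ceiling is 3% of turnover
print(f"EUR {gpai_fine_ceiling(3_200_000_000):,.0f}")  # EUR 96,000,000

# A small provider with EUR 100m turnover: the EUR 15m floor applies instead
print(f"EUR {gpai_fine_ceiling(100_000_000):,.0f}")  # EUR 15,000,000
```

The floor means the 3% percentage only binds once global turnover exceeds €500m; below that, the flat €15m ceiling is the operative number.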

Root Causes

The August 2026 enforcement date for GPAI model providers is the result of a deliberate sequencing in the AI Act's rollout: prohibited practices (February 2025), GPAI obligations (August 2025), high-risk system conformity (August 2026), and high-risk systems in regulated products (August 2027). The 12-month staging between GPAI obligations and full enforcement powers was intended to give providers time to implement the GPAI Code of Practice.

The €100m+ potential enforcement exposure for OpenAI reflects the 3% of global annual turnover fine ceiling applied to OpenAI's approximately $3.5bn annual revenue estimate. At this scale, an AI Act fine would rank alongside the largest GDPR fines issued to date and would be directly comparable to DMA-scale enforcement, a signal that the Commission intends AI Act enforcement to have equivalent deterrent effect.

What could happen next?
  • Consequence

    GPAI providers that have not completed AI Office documentation and GPAI Code of Practice compliance by August 2026 face injunction risk that is more immediately disruptive to EU business than financial fines.

    Immediate · 0.7
  • Risk

    AI Office resource constraints (approximately 100 officials covering 50+ GPAI providers) may produce selective enforcement that favours large US providers over smaller European providers less equipped to manage regulatory engagement.

    Short term · 0.6
  • Precedent

    The first AI Act enforcement action against a major GPAI provider will set the interpretive baseline for systemic risk obligations, with implications for the entire global AI industry's EU compliance posture.

    Medium term · 0.8
First Reported In

Update #1 · Europe's chip ambitions meet reality

CNBC · 13 Apr 2026
Different Perspectives
OpenForum Europe / EUI-Fraunhofer consortium
The consortium (OpenForum Europe, European University Institute, Fraunhofer ISI) is lobbying for a €350m EU Sovereign Tech Fund modelled on Germany's existing sovereign tech fund; Michal Kobosko MEP hosted a Parliament breakfast for it on 28 January 2026. No commissioner has named it as a priority and no host institution has been designated.
Chi Onwurah MP / UK SIT Committee
Onwurah wrote to DSIT minister Narayan that his sovereignty letter "fails to set out a coherent strategy for achieving technology sovereignty". Narayan cited the £500m Sovereign AI Unit and a proposed advanced market commitment for AI hardware; Onwurah's challenge signals that Parliament will press DSIT to move beyond an infrastructure-only first cohort.
US Trade Representative (USTR)
USTR confirmed 24 July as the final determination date for its Section 301 investigation into EU digital rules; public hearings began in May. A USTR tariff threat published before the 27 July DMA Google ruling places direct political pressure on DG COMP to moderate its first cloud-AI enforcement decision.
ASML (Christophe Fouquet)
Fouquet told analysts that ASML's 2026 guidance already "accommodates potential outcomes of ongoing discussions around export controls", after China fell to 19% of system sales in Q1 2026 from 36%. ASML co-signed the CEO deregulation letter; the MATCH Act would remove its remaining DUV China revenue.
Mistral AI / seven European CEOs
Arthur Mensch co-signed a 5 May joint op-ed in Handelsblatt and Corriere della Sera after meeting von der Leyen, calling for simplified AI rules and looser merger control. Mistral's signature is the politically significant one: it is the company Brussels most often cites as evidence that European AI sovereignty is viable.
Schwarz Group / StackIT
Schwarz Group anchored the Cohere-Aleph Alpha merger with $600m and already holds StackIT at SEAL-3 in the Commission's €180m framework. Chief Digital Officer Karsten Wildberger called Berlin's backing of the deal "a very strong signal"; Berlin attached conditions that development services remain in Germany and infrastructure deployment remain sovereign.