
Konstantin Sietzy
Sovereign AI Unit founding hire; ex-AI Safety Institute; AI safety evaluation expertise.
Last refreshed: 13 April 2026
Can AISI alumni running a £500m AI fund maintain independence from Big Tech?
Mentioned in: Sovereign AI Unit to launch with £500m
Background
Konstantin Sietzy is a founding hire of the UK government's Sovereign AI Unit, joining from the AI Safety Institute (AISI), the predecessor body that produced some of the foundational safety evaluation frameworks for frontier AI models. His background in AI safety evaluation gives the Sovereign AI Unit a technical risk-management capability alongside the commercial product experience represented by co-hire Josephine Kant.
Sietzy worked at AISI during the period when it conducted its most prominent evaluations, including safety tests on frontier models from Anthropic, OpenAI, and Google DeepMind ahead of the 2023 Bletchley Park AI Safety Summit. That experience gives him direct relationships with the leading AI labs, and credibility that will matter when the Sovereign AI Unit negotiates access agreements, safety requirements, and compute contracts with those same organisations.
For observers of UK AI policy, Sietzy's and Kant's hiring profiles suggest the Sovereign AI Unit is being staffed to operate at the intersection of policy, technical safety, and commercial AI deployment, rather than as a pure procurement function. Whether a small unit of this kind can actually execute a £500m programme without being captured by existing government IT procurement processes remains the critical unknown.