Artificial Intelligence: Weapons of Mass Destruction

(asked on 2nd January 2026)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what steps the Government is able to take to delay or prohibit the public release of a frontier AI model in instances when the UK AI Security Institute assesses that model as posing a serious risk of assisting users in developing chemical, biological, radiological, or nuclear weapons.


Answered by
Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
This question was answered on 14th January 2026

We are optimistic about how AI will transform the lives of British people for the better, but advanced AI could also lead to serious security risks.

The Government believes that AI should be regulated at the point of use and takes a context-based approach. Sectoral laws provide powers to act where there are serious risks: for example, the Procurement Act 2023 can prevent risky suppliers (including suppliers of AI) from being used in public sector contexts, whilst a range of legislation offers protections against high-risk chemical and biological incidents.

This approach is complemented by the work of the AI Security Institute, which works in partnership with AI labs to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations.
