Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps the Government is able to take to delay or prohibit the public release of a frontier AI model in instances when the UK AI Security Institute assesses that model as posing a serious risk of assisting users in developing chemical, biological, radiological, or nuclear weapons.
We are optimistic that AI will transform the lives of British people for the better, but we recognise that advanced AI could also pose serious security risks.
The Government believes that AI should be regulated at the point of use and takes a context-based approach. Sectoral laws provide powers to act where there are serious risks. For example, the Procurement Act 2023 can prevent risky suppliers (including suppliers of AI) from being used in public sector contexts, while a range of legislation offers protections against high-risk chemical and biological incidents.
This approach is complemented by the work of the AI Security Institute, which works in partnership with AI labs to understand the capabilities and impacts of advanced AI and to develop and test risk mitigations.