Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps she is taking to develop artificial general intelligence (AGI) safety mechanisms.
There is considerable debate and uncertainty around Artificial General Intelligence (AGI), but the possibility of its development must be taken seriously. The increasing capabilities of AI may exacerbate existing risks and present new risks, for which the UK needs to be prepared.
The role of the AI Security Institute (AISI) is to build an evidence base on these risks, so the government is equipped to prepare for them. AISI focuses on emerging AI risks with serious security implications, including the potential for AI to help users develop chemical and biological weapons, and the potential for loss of control presented by autonomous systems.
The Government will continue to take a long-term, science-led approach to understand and prepare for emerging risks from AI. This includes preparing for the possibility of very rapid AI progress, which could have transformative impacts on society and national security.