Artificial Intelligence

(asked on 9th February 2026)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what steps they are taking to ensure that AI governance and safety frameworks remain effective as AI systems develop in autonomy and complexity.


Answered by
Baroness Lloyd of Effra
Baroness in Waiting (HM Household) (Whip)
This question was answered on 24th February 2026

Monitoring the capabilities of AI systems is necessary to ensure we can prepare for the risks that advanced AI could bring. The AI Security Institute (AISI) builds tools to understand AI capabilities, evaluates AI models to research these risks, and develops risk mitigations. This understanding can inform Government awareness and resilience efforts.

The Institute’s testing has identified a large number of AI model vulnerabilities that leading AI developers (such as OpenAI and Anthropic) have addressed prior to release.

Safety frameworks are developers’ own risk management policies. Developers will need to adapt them to remain effective as AI systems develop in autonomy and complexity.
