Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether she has considered the potential merits of mitigating future risks from autonomous AI systems which may evade human oversight and control.
AI systems can pose novel risks arising from models behaving in unintended ways. The possibility that such unintended behaviour could lead to a loss of control over advanced AI systems is taken seriously by many experts and warrants close attention.
The role of the AI Security Institute (AISI) is to build an evidence base on these risks, so the government is equipped to understand them. One of the Institute’s research priorities is tracking the development of AI capabilities that could enable systems to evade human control.
That is why the Institute launched the Alignment Project, a funding consortium distributing up to £15m for foundational research, spanning multiple technical disciplines, on methods for building AI systems that reliably align with human values.