Artificial Intelligence

(asked on 30th March 2023)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the letter published by the Future of Life Institute, "Pause Giant AI Experiments: An Open Letter", published on 29 March; and what steps they intend to take in response to the recommendation in that letter that there should be "shared safety protocols for AI" which are "audited and overseen by independent outside experts".


Answered by
Viscount Camrose
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
This question was answered on 11th April 2023

It is important that industry voices are actively engaged in the discourse around responsible AI. British-based companies, like DeepMind, are at the forefront of responsible innovation. However, it should be noted that questions have been raised regarding the authenticity of some of the signatures on the open letter on Artificial Intelligence published by the Future of Life Institute (FLI). Some of the researchers whose work was cited in the letter have also reportedly raised concerns. It is also important to note that the letter is not expressly targeted at the UK or any other government.

Nevertheless, the Government recognises the need to adapt the way in which we regulate AI as systems become more powerful and are put to new uses. As Sir Patrick Vallance highlighted in his recent regulatory review, there is a small window of opportunity to get this right and build a regulatory regime that enables innovation while addressing the risks. The Government agrees that a collaborative approach is fundamental to addressing AI risk and supporting responsible AI development and use for the benefit of society. The AI Regulation White Paper we published on 29 March identifies “trustworthy”, “proportionate” and “collaborative” as key characteristics of the proposed AI regulation framework.

The AI Regulation White Paper sets out principles for the responsible development of AI in the UK. These principles, such as safety, fairness and accountability, are at the heart of our approach to ensuring the responsible development and use of AI. We will also establish a central risk function to bring together cutting-edge knowledge from industry, regulators, academia and civil society – including skilled computer scientists with a deep technical understanding of AI – to monitor future risks and adapt our approach if necessary. This is aligned with the calls to action in FLI’s letter.

In addition, our recently announced Foundation Model Taskforce has been established to strengthen UK capability, in a way that is aligned with the UK’s values, as this potentially transformative technology develops.

The approach to AI regulation outlined in the AI Regulation White Paper is also complemented by parallel work on AI standards, supported by the AI Standards Hub launched in October 2022, and by the Centre for Data Ethics and Innovation’s AI Assurance Roadmap, published in December 2021. Taken together, our holistic approach to AI governance, combining regulation with standards development and AI assurance, is in line with efforts to develop shared safety protocols, and will allow the UK to benefit from AI technologies while protecting people and our fundamental values.
