
Written Question
Artificial Intelligence
Tuesday 11th April 2023

Asked by: Baroness Helic (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the letter published by the Future of Life Institute, "Pause Giant AI Experiments: An Open Letter", published on 29 March; and what steps they intend to take in response to the recommendation in that letter that there should be "shared safety protocols for AI which are audited and overseen by independent outside experts".

Answered by Viscount Camrose - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

It is important that industry voices are actively engaged in the discourse around responsible AI. British-based companies, like DeepMind, are at the forefront of responsible innovation. However, it should be noted that questions have been raised regarding the veracity of some of the signatures on the open letter on Artificial Intelligence published by the Future of Life Institute (FLI). Some of the researchers whose work was cited in the letter have also apparently raised concerns. It is also important to note that the letter is not expressly targeted at the UK or any other government.

Nevertheless, the Government recognises the need to adapt the way in which we regulate AI as systems become more powerful and are put to different uses. As Sir Patrick Vallance highlighted in his recent regulatory review, there is a small window of opportunity to get this right and build a regulatory regime that enables innovation while addressing the risks. The Government agrees that a collaborative approach is fundamental to addressing AI risk and supporting responsible AI development and use for the benefit of society. The AI Regulation White Paper we published on 29 March identifies “trustworthy”, “proportionate” and “collaborative” as key characteristics of the proposed AI regulation framework.

The AI Regulation White Paper sets out principles for the responsible development of AI in the UK. These principles, such as safety, fairness, and accountability, are at the very heart of our approach to ensuring the responsible development and use of AI. We will also establish a central risk function to bring together cutting-edge knowledge from industry, regulators, academia and civil society – including skilled computer scientists with a deep technical understanding of AI – to monitor future risks and adapt our approach if necessary. This is aligned with the calls to action in FLI’s letter.

In addition, our recently announced Foundation Model Taskforce has been established to strengthen UK capability – in a way that is aligned with the UK’s values – as this potentially transformative technology develops.

The approach to AI regulation outlined in the AI Regulation White Paper is also complemented by parallel work on AI Standards, supported by the AI Standards Hub launched in October 2022, and by the Centre for Data Ethics and Innovation’s AI Assurance Roadmap, published in December 2021. In concert, our holistic approach to AI governance, combining regulation with standards development and AI assurance, is in line with efforts to develop shared safety protocols, and will at the same time allow the UK to benefit from AI technologies while protecting people and our fundamental values.


Written Question
TikTok: Data Protection
Wednesday 8th March 2023

Asked by: Baroness Helic (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the data security threat posed by TikTok.

Answered by Viscount Camrose - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The government does not routinely comment on security matters, but data security is paramount and we always take the requisite steps to protect data. We continue to monitor the threats to our data and will not hesitate to take further action if necessary to protect our national security.

Like all businesses, we expect TikTok to comply fully with our privacy laws: the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA18). Organisations which fail to comply may be investigated by the Information Commissioner’s Office and, where appropriate, subject to enforcement action, including fines.