
Written Question
Artificial Intelligence
Monday 22nd May 2023

Asked by: Lord Birt (Crossbench - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of concerns expressed by (1) Dr Geoffrey Hinton, and (2) other employees of Google and Microsoft, reported in the New York Times on 7 April, about the risk that AI technologies are being introduced before their risks can be fully assessed.

Answered by Viscount Camrose - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

In March 2023, the Office for AI published a white paper on AI regulation. The framework proposes a proportionate, collaborative approach to AI regulation, and aims to promote innovation while protecting the UK’s values. The AI regulatory framework will ensure government is able to adapt and respond to the risks and opportunities that emerge as the technology develops at pace.

As part of the development of the AI regulation white paper, government officials heard from over 130 stakeholders, including civil society groups such as trade bodies, unions, and rights-focused groups, as well as academics and UK and global businesses at the forefront of AI development. This engagement included a focus on ensuring that the regulatory framework would be adaptable and responsive to emerging risk. Additionally, in May 2023 our Secretary of State and I met with Dr Hinton to discuss AI risks and opportunities, and the role of government. The government is also working with international partners to address AI risks while promoting the UK’s values, including through key multilateral fora, such as the OECD, the G7, the Global Partnership on AI (GPAI), the Council of Europe, and UNESCO, and through bilateral relationships.

While direct regulation of AI will remain the responsibility of existing regulators in order to ensure a context-based approach focused on outcomes, government recognises the significance of cross-sectoral risks associated with AI. The AI regulation white paper therefore proposed a range of new central functions, including the following, which are intended to improve government's ability to anticipate, assess and respond appropriately to emerging risks:

  • Horizon scanning to identify and monitor emerging trends, risks and opportunities in AI.

  • Cross-sectoral risk assessment to develop and maintain a cross-economy, society-wide AI risk register to facilitate structured assessment of cross-cutting risks and allow effective, coherent mitigation planning.

  • Monitoring and evaluation to ensure that the overall regulatory framework for AI is achieving its policy objectives and is future-proof and adaptable.

These central functions - together with others as set out in the white paper - will complement the existing work conducted by regulators and other government departments to tackle risks arising from AI.

Government understands that a collaborative approach is fundamental to governments’ and policy-makers’ ability to tackle AI risk and support responsible AI development and use for the benefit of society. As set out in the white paper, we will continue to convene a wide range of stakeholders - including frontier researchers from industry - to ensure that we hear the full spectrum of viewpoints. The UK’s continued leadership and cooperation in international debates on AI will also enable the development of a responsive and compatible system of global AI governance, allowing us to work together on cross-border AI risks and opportunities. This breadth of collaboration will be integral to the Government's ability to monitor and improve the framework, ensuring it remains effective in the face of emerging AI risks.

We are in a formal consultation period for the AI regulation white paper and encourage anyone interested to respond before 21 June.


Written Question
Tesla: Cameras
Thursday 27th April 2023

Asked by: Lord Birt (Crossbench - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the implications for British citizens of the evidence obtained by a Reuters investigation which suggests that Tesla employees have viewed and shared camera recordings obtained from Tesla cars in private domestic settings while the vehicles were not in use; and whether the viewing of such material relating to British citizens is lawful.

Answered by Viscount Camrose - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

Our data protection laws impose strict obligations on both individuals and organisations to process people’s data fairly, lawfully and transparently, so that any data collected is processed in a way which individuals would expect.

The UK’s data protection laws are enforced independently of the Government by the Information Commissioner's Office (ICO). Organisations that fail to comply may be subject to enforcement action by the ICO, and the Information Commissioner can impose significant financial penalties for non-compliance. The Data Protection Act 2018 also gives the ICO the power to prosecute those who commit criminal offences under the Act; these include unlawfully obtaining, disclosing, or retaining personal data without the consent of the data controller.


Written Question
Artificial Intelligence
Tuesday 25th April 2023

Asked by: Lord Birt (Crossbench - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of concerns that AI technologies are being introduced prematurely to customers before their potential risks can be fully assessed.

Answered by Viscount Camrose - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The AI Regulation White Paper, published on 29 March 2023, set out a framework for regulating AI that seeks to balance the need to address risk with the need to support innovation.

As part of the regulatory framework, the UK is proposing a range of central functions through which the government will monitor known and emerging risks as AI technologies evolve. This will support us in assessing the effectiveness of our framework in addressing AI risks and in identifying gaps in our risk mitigation efforts. For example, we are creating a horizon scanning function and a central risk function which will enable the government and regulators to monitor future risks, including ‘high impact but low probability’ risks such as existential risks or AI biosecurity risks, in a rigorous, coherent and balanced way.

Tools for trustworthy AI, including internationally developed standards and assurance techniques, will play a central role in the implementation of the framework, especially for those technologies already being introduced to the market. Through AI assurance, businesses and consumers are better able to decide whether a product or service using AI is legitimate and trustworthy. Impact assessments, performance testing and, in the longer term, possibly pre-release verification or certification against AI standards are a few of the assurance mechanisms that can help organisations innovate responsibly while also determining whether an AI system complies with applicable standards and regulations.

The collaborative, adaptable framework outlined in the AI regulation white paper will use the proposed central functions to convene frontier researchers, industry, academics, representatives of the public and other key stakeholders, and to learn from their expertise, as we continue to develop policy in this evolving area.