All 1 contributions to the Artificial Intelligence (Regulation) Bill [HL] 2023-24 (Ministerial Extracts Only)


Artificial Intelligence (Regulation) Bill [HL]


2nd reading
Friday 22nd March 2024


Lords Chamber

This text is a record of ministerial contributions to a debate held as part of the Artificial Intelligence (Regulation) Bill [HL] 2023-24 passage through Parliament.

In 1993, the House of Lords decision in Pepper v Hart established that statements made by Government Ministers may be cited as evidence of legislative intent when interpreting the law.

This extract highlights statements made by Government Ministers along with contextual remarks by other members.

This information is provided by Parallel Parliament and does not form part of the official record.

The Parliamentary Under-Secretary of State, Department for Science, Innovation and Technology (Viscount Camrose) (Con)

I add my thanks to those of other noble Lords to my noble friend Lord Holmes for bringing forward this Bill. I thank all noble Lords who have taken part in this absolutely fascinating debate of the highest standard. We have covered a wide range of topics today. I will do my best to respond, hopefully directly, to as many points as possible in the time available.

The Government recognise the intent of the Bill and the differing views on how we should go about regulating artificial intelligence. For reasons I will now set out, the Government would like to express reservations about my noble friend’s Bill.

First, with the publication of our AI White Paper in March 2023, we set out proposals for a regulatory framework that is proportionate, adaptable and pro-innovation. Rather than designing a new regulatory system from scratch, the White Paper proposed five cross-sectoral principles, which include safety, transparency and fairness, for our existing regulators to apply within their remits. The principles-based approach will enable regulators to keep pace with the rapid technological change of AI.

The strength of this approach is that regulators can act now on AI within their own remits. This common-sense, pragmatic approach has won endorsement from leading voices across civil society, academia and business, as well as many of the companies right at the cutting edge of frontier AI development. Last month we published an update through the Government’s response to the consultation on the AI White Paper. The White Paper response outlines a range of measures to support existing regulators in delivering the AI regulatory framework, including a boost of more than £100 million to upskill regulators and help unlock new AI research and innovation.

As part of this, we announced a £10 million package to jump-start regulators’ AI capabilities, preparing and upskilling regulators to address the risks and to harness the opportunities of this defining technology. The White Paper response also includes publishing new guidance to support the coherent implementation of the principles. To ensure robust implementation of the framework, we will continue our work to establish the central function.

Let me reassure noble Lords that the Government take mitigating AI risks extremely seriously. That is why several aspects of the central function have already been established, such as the central AI risk function, which will shortly be consulting on its cross-economy AI risk register. Let me reassure the noble Lord, Lord Empey, that the AI risk function will maintain a holistic view of risks across the AI ecosystem, including misuse risks, such as where AI capabilities may be leveraged to undermine cybersecurity.

Specifically on criminality, the Government recognise that the use of AI in criminal activity is a very important issue. We are working with a range of stakeholders, including regulators, and a range of legal experts to explore ways in which liability, including criminal liability, is currently allocated through the AI value chain.

In the coming months we will set up a new steering committee, which will support and guide the activities of a formal regulator co-ordination structure within government. We also wrote to key regulators, requesting that they publish their AI plans by 30 April, setting out how they are considering, preparing for and addressing AI risks and opportunities in their domain.

As for the next steps for ongoing policy development, we are developing our thinking on the regulation of highly capable general-purpose models. Our White Paper consultation response sets out key policy questions related to possible future binding measures, which we are exploring with experts and our international partners. We plan to publish findings from this expert engagement and an update on our thinking later this year.

We also confirmed in the White Paper response that we believe legislative action will be required in every country once the understanding of risks from the most capable AI systems has matured. However, legislating too soon could easily result in measures that are ineffective against the risks, are disproportionate or quickly become out of date.

Finally, we make clear that our approach is adaptable and iterative. We will continue to work collaboratively with the US, the EU and others across the international landscape to both influence and learn from international developments.

I turn to key proposals in the Bill that the noble Lord has tabled. On the proposal to establish a new AI authority, it is crucial that we put in place agile and effective mechanisms that will support the coherent and consistent implementation of the AI regulatory framework and principles. We believe that a non-statutory central function is the most appropriate and proportionate mechanism for delivering this at present, as we observe a period of non-statutory implementation across our regulators and conduct our review of regulator powers and remits.

In the longer term, we recognise that there may be a case for reviewing how and where the central function has delivered, once its functions have become more clearly defined and established, including whether the function is housed within central government or in a different form. However, the Government feel that this would not be appropriate for the first stage of implementation. To that end, as I mentioned earlier, we are delivering the central function within DSIT, to bring coherence to the regulatory framework. The work of the central function will provide clarity and ensure that the framework is working as intended and that joined-up and proportionate action can be taken if there are gaps in our approach.

We recognise the need to assess the existing powers and remits of the UK’s regulators to ensure they are equipped to address AI risks and opportunities in their domains and to implement the principles consistently and comprehensively. We anticipate having to introduce a statutory duty on regulators requiring them to have due regard to the principles after an initial period of non-statutory implementation. For now, however, we want to test and iterate our approach. We believe this approach offers critical adaptability, but we will keep it under review; for example, by assessing the updates on strategic approaches to AI that several key regulators will publish by the end of April. We will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.

Like many noble Lords, we see approaches such as regulatory sandboxes as a crucial way of helping businesses navigate the AI regulatory landscape. That is why we have funded the four regulators in the Digital Regulation Cooperation Forum to pilot a new, multiagency advisory service known as the AI and digital hub. We expect the hub to launch in mid-May, and we will provide further details in the coming weeks on when this service will be open for applications from innovators.

One of the principles at the heart of the AI regulatory framework is accountability and governance. We said in the White Paper that a key part of implementation of this principle is to ensure effective oversight of the design and use of AI systems. We have recognised that additional binding measures may be required for developers of the most capable AI systems and that such measures could include requirements related to accountability. However, it would be too soon to mandate measures such as AI-responsible officers, even for these most capable systems, until we understand more about the risks and the effectiveness of potential mitigations. This could quickly become burdensome in a way that is disproportionate to risk for most uses of AI.

Let me reassure my noble friend Lord Holmes that we continue to work across government to ensure that we are ready to respond to the risks to democracy posed by deep fakes; for example, through the Defending Democracy Taskforce, as well as through existing criminal offences that protect our democratic processes. However, we should remember that AI labelling and identification technology is still at an early stage. No specific technology has yet been proven to be both technically and organisationally feasible at scale. It would not be right to mandate labelling in law until the potential benefits and risks are better understood.

Noble Lords raised the importance of protecting intellectual property, a profoundly important subject. In the AI White Paper consultation response, the Government committed to provide an update on their approach to AI and copyright issues soon. I am confident that, when we do so, it will address many of the issues that noble Lords have raised today.

In summary, our approach, combining a principles-based framework, international leadership and voluntary measures on developers, is right for today, as it allows us to keep pace with rapid and uncertain advances in AI. The UK has successfully positioned itself as a global leader on AI, in recognition of the fact that AI knows no borders and that its complexity demands nuanced international governance. In addition to spearheading thought leadership through the AI Safety Summit, the UK has supported effective action through the G7, the Council of Europe, the OECD, the G5, the G20 and the UN, among other bodies. We look forward to continuing to engage with all noble Lords on these critical issues as we continue to develop our regulatory approach.