AI Safety

Wednesday 10th December 2025


Westminster Hall


This information is provided by Parallel Parliament and does not form part of the official record

Anneliese Dodds (Oxford East) (Lab/Co-op)

It is such a pleasure to take part in this critical debate. I start by acknowledging the Government’s commitment to rolling out AI in many areas and making the UK an adoption nation. They must also respond to the public demand for regulation in this area, however, and recognise that the two are interlinked. Research from the Ada Lovelace Institute and the Alan Turing Institute found that 72% of the public would feel more comfortable with AI if it were properly regulated.

There is clear potential in AI, but there are also clear harms. We have already heard about chatbots in this debate, and I would add to that discussion the issues related to AI slop—often hate-filled slop produced by influencers who are profiting heavily from it while polluting the internet. I would also add that Sora 2, which is well known to many schoolkids if not to those of us in the Chamber, has recently been shown to produce videos of school shootings, for example, for people purporting to be 13 years old—who were, of course, adults pretending to be that age. Snapchat execs have apparently been willing to go ahead with so-called beautification lenses, despite concerns relating to body image.

There are significant harms, and I seek clarification on a number of questions. Will the curriculum review cover AI? Will teachers be supported in delivering that? Will there be a ban on nudified images of adult women? When is the violence against women and girls strategy coming out—very soon, I hope? What is the position of AI chatbots, and are they covered by the Online Safety Act 2023? There seems to be a lot of confusion around that, at a time when we cannot have confusion. What is the timeline for the Secretary of State to look into this issue, given how important it is? Can the Minister push Ofcom to speedily publish the parameters for its welcome investigation into illegal online hate and terror material, and is that going to cover AI bots and slop? Surely it needs to.

We need Ministers to commit to an AI Bill. Can the Minister provide a timeline for that? Will that much-needed Bill include mandatory ex-ante evaluations for frontier AI models and transparency from companies on safety issues? I have asked parliamentary questions about this issue, but I am afraid that I do not completely agree with the Government that AI companies are conforming with international agreements. Surely we need more on that.

Are we going to have more scrutiny of AI use in government? Again—taking up the question that was asked earlier—I have asked PQs on BSL. Apparently, there is no knowledge of the cross-Government procurement of AI BSL, but there does seem to be discrete use of it by governmental bodies. Surely that needs to be looked at more. Surely we also need to act with the EU, with its commitment to human-centric, trustworthy AI, because ultimately, we have strength in numbers.

--- Later in debate ---
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)

It is a pleasure to serve with you in the Chair, Ms Butler, for my first Westminster Hall debate. It is a particular pleasure not only to have you bring your technological expertise to the Chair, but also to have the hon. Member for Strangford (Jim Shannon) reliably present in my first debate, as well as the UK’s—perhaps the world’s—first AI MP, my hon. Friend the Member for Leeds South West and Morley (Mark Sewards). It is a distinct pleasure to serve with everyone present and the expertise they bring. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate on AI safety. I am grateful to him and to all Members for their very thoughtful contributions to the debate.

It is no exaggeration to say that the future of our country and our prosperity will be led by science, technology and AI. That is exactly why, in response to the question on growth posed by the hon. Member for Runnymede and Weybridge (Dr Spencer), we recently announced a package of new reforms and investments to use AI to power national renewal. We will drive growth through developing new AI growth zones across north and south Wales, Oxfordshire and the north-east, creating opportunities for innovation by expanding access to compute for British researchers and scientists.

We are investing in AI to drive breakthroughs in developing new drugs, cures and treatments. But we cannot harness those opportunities without ensuring that AI is safe for the British public and businesses, nor without agency over its development. I was grateful for the points made by my hon. Friend the Member for Milton Keynes Central (Emily Darlington) on the importance of standards and the hon. Member for Harpenden and Berkhamsted (Victoria Collins) about the importance of trust.

That is why the Government are determined to make the UK one of the best places to start a business, to scale up and to stay on our shores, especially for the UK AI assurance and standards market. Our trusted third-party AI assurance roadmap and AI assurance innovation fund are focused on supporting the growth of UK businesses and organisations providing innovative AI products that are proven to be safe for sale and use. We must ensure that the AI transformation happens not to the UK but with and through the UK.

That is why, consistent with the points raised by my hon. Friend the Member for Milton Keynes Central, we are backing the sovereign AI unit, with almost £500 million in investment, to help build and scale AI capabilities on British shores, which will reflect our country’s needs, values and laws. Our approach to those AI laws seeks to ensure that we balance growth and safety, and that we remain adaptable in the face of inevitable AI change.

On growth, I am glad to hear the points made by my hon. Friend the Member for Leeds South West and Morley about a space for businesses to experiment. We have announced proposals for an AI growth lab that will support responsible AI innovation by making targeted regulatory modifications under robust safeguards. That will help drive trust by providing a safe space for the experimentation and trialling of innovative products and services. Regulators will monitor that very closely.

On safety, we understand that AI is a general-purpose technology, with a wide range of applications. In recognition of the contribution from the hon. Member for Newton Abbot (Martin Wrigley), I reaffirm some of the points he made about being thoughtful in regulatory approaches that distinguish between the technology and the specific use cases. That is why we believe that the vast majority of AI should be regulated at the point of use, where the risk arises and where tractable action is most feasible.

A range of existing rules already applies to those AI systems in application contexts. Data protection and equality legislation protect the UK public’s data rights. They prevent AI-driven discrimination where the systems decide, for example, who is offered a job or credit. Competition law helps shield markets from AI uses that could distort them, including algorithmic collusion to set unfair prices.

Sarah Russell

As a specialist equality lawyer, I am not currently aware of any cases in the UK around the kind of algorithmic bias that I am talking about. I would be delighted to see some, and delighted to see the Minister encouraging that, but I am not sure that the regulatory framework would achieve that at present.

--- Later in debate ---
Kanishka Narayan

My hon. Friend brings deep expertise from her past career. If she feels there are particular absences in the legislation on equalities, I would be happy to take a look, though that has not been pointed out to me to date.

The Online Safety Act 2023 requires platforms to manage harmful and illegal content risks, and offers significant protection against harms online, including those driven by AI services. We are supporting regulators to ensure that those laws are respected and enforced. The AI action plan commits to boosting AI capabilities through funding, strategic steers and increased public accountability.

There is a great deal of interest in the Government’s proposals for new cross-cutting AI regulation, not least shown compellingly by my right hon. Friend the Member for Oxford East (Anneliese Dodds). The Government do not speculate on legislation, so I am not able to predict future parliamentary sessions, although we will keep Parliament updated on the timings of any consultation ahead of bringing forward any legislation.

Notwithstanding that, the Government are clearly not standing still on AI governance. The Technology Secretary confirmed in Parliament last week that the Government will look at what more can be done to manage the emergent risks of AI chatbots, raised by my hon. Friend the Member for York Outer (Mr Charters), my right hon. Friend the Member for Oxford East, my hon. Friend the Member for Milton Keynes Central and others.

Alongside the comments the Technology Secretary made, she urged Ofcom to use its existing powers to ensure AI chatbots in scope of the Act are safe for children. Further to the clarifications I have provided previously across the House, if hon. Members have a particular view on where there are exceptions or gaps in the Online Safety Act on AI chatbots that correlate with risk, we would welcome any contribution through the usual correspondence channels.

Emily Darlington

Will the Minister give way?

Kanishka Narayan

I have about two minutes, so I will continue the conversation with my hon. Friend outside.

We will act to ensure that AI companies are able to make their own products safe. For example, the Government are tackling the disgusting harm of child sexual exploitation and abuse with a new offence to criminalise AI models that have been optimised for that purpose. The AI Security Institute, which I was delighted to hear praised across the House, works with AI labs to make their products safer and has tested over 30 models at the frontier of development. It is the best in the world at developing partnerships, understanding security risks and innovating safeguards. Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber-tasks and biological weapon development.

The UK Government do not act alone on security. In response to the points made by the hon. Members for Ceredigion Preseli (Ben Lake), for Harpenden and Berkhamsted, and for Runnymede and Weybridge, it is clear that we are working closely with allies to raise security standards, share scientific insights and shape responsible norms for frontier AI. We are leading discussions on AI at the G7, the OECD and the UN. We are strengthening our bilateral relationships on AI for growth and security, including AI collaboration as part of recent agreements with the US, Germany and Japan.

I will take away the points raised by the hon. Members for Dewsbury and Batley, for Winchester (Dr Chambers) and for Strangford, and by my hon. Friend the Member for York Outer (Mr Charters), on health advice and how we can ensure that the quality of NHS advice is privileged in wider AI chatbot engagement. I will also take away the points made by my hon. Friend the Member for Congleton and my right hon. Friend the Member for Oxford East on British Sign Language standards in AI. Those are important points that I will look at further.

To conclude, the UK is realising the opportunities of transformative AI while ensuring that growth does not come at the cost of security and safety. We are doing so by stimulating AI safety assurance markets, empowering our regulators, ensuring our laws are fit for purpose, and driving change through AISI and diplomacy.