AI Safety

Sarah Russell Excerpts
Wednesday 10th December 2025


Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not comprise part of the official record

Sarah Russell (Congleton) (Lab)

It is a pleasure to serve with you in the Chair, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate.

There are two problems—maybe three—with AI. The first is that we do not distinguish very well between what is and is not AI. Although AI and tech are obviously related, they are not the same thing. It is important that when we talk about AI we distinguish it from tech. There is a need to regulate a lot of tech much better than we currently do, but AI poses very specific problems. The first one—I can see people from ControlAI in the Public Gallery—is the fact that we do not fully understand the models.

It worries any sensible person that we are unleashing technologies that appear to be able to self-replicate and do other things, and that we are incorporating them into military hardware without a full understanding of how they work. We do not have to be catastrophists or conspiracy theorists to be worried. I am generally a very optimistic person, but it is important to be optimistic on the basis of understanding the technology that we use and then regulating it appropriately. That does not mean stifling innovation, but it does mean making sure we know what we are doing.

When I look at AI, we have, as I said, two problems. One is rubbish in, rubbish out, and there is a lot of rubbish going into AI at the moment. We can see that in all sorts of terrible situations. We have a huge amount of in-built gender bias in our society. That means that, for instance, if we ask AI to generate a picture of a female solicitor, as I am, we will get a picture of a woman who is barely clothed but has a library of books behind her. That is not how the female solicitors I know go to work, but that is how AI thinks we are, and that has real-world impacts.

If we ask AI to suggest an hourly rate as a freelancer, it is on average suggesting significantly lower rates for women than for men. There are questions about algorithmic bias permeating the whole of the algorithm. Questions have been raised recently about LinkedIn. I and a lot of women I know are finding that we have significantly less interaction via LinkedIn than we used to. Various women have now changed their gender on their bios to male and suddenly find that their engagement levels go straight back up. LinkedIn appears to think we are not interesting and that people will not want to read our content, so it appears to be showing women's content at a lower rate. I caveat that I have not been able to speak to LinkedIn directly, but certainly a lot of women I know are reporting these problems.

We put in biographical information to start with, but huge amounts of the image training data are based on what is publicly available on the internet, and the image training data of women on the internet is largely pornographic, which influences what comes out the other end of these models. When we look at that in terms of children, we have real problems. Nudification apps are huge and need to be dealt with. I would like to get into my worries about that, and into health and how we do not have good enough training data on the interaction between gender and health, and various other matters, but I will stop now. I thank everyone for their time today. I know colleagues will pick up important points.

--- Later in debate ---
Victoria Collins (Harpenden and Berkhamsted) (LD)

It is a pleasure to serve under your chairmanship, Ms Butler. I congratulate the hon. Member for Dewsbury and Batley (Iqbal Mohamed) on securing this incredible debate. That so many issues have been packed into 90 minutes shows clearly that we need more time to debate this subject, and I think it falls to the Government to recognise that an AI Bill, or at least further discussion, is clearly needed. The issue now pervades our lives, for the better but in many aspects for the worse.

As the Liberal Democrat spokesperson on science, innovation and technology, I am very excited about the positive implications of AI. It can clearly help grow our economy, solve the big problems and help us improve our productivity. However, it is clear from the debate that it comes with many risks that have nothing to do with growing our economy—certainly not the kind of economy we want to grow—including the use of generative AI for child sexual abuse material, children’s growing emotional dependency on chatbots, and the provision of suicide advice.

I have said for a long time that the trust element is so important. It is two sides of the same coin: if we cannot trust this technology, we cannot develop as a society, but trust is also really important for business and our economy. I find it fascinating that so many more businesses are now talking about this and saying, “If we can’t trust this technology, we can’t use it, we can’t spend money on it and we can’t adopt it.” Trust is essential.

If the UK acts fast and gets this right, we have a unique opportunity to be the leader on this. From talking to industry, I know that we have incredible talent and are great at innovating, but we also have a fantastic system for building trust. We need to take that opportunity. It is the right thing to do, and I believe we are the only country in the world that can really do it, but we have to act now.

Sarah Russell

Does the hon. Lady agree that we should be looking hard at the EU’s regulation in this area, and considering alignment and whether there might be points on which we would like to go further?

Victoria Collins

Absolutely, and the point about global co-operation has been made clearly across the Chamber today. The hon. Member for Leicester South (Shockat Adam) talked about what is now the AI Security Institute—it was the AI Safety Institute—and that point about leading and trust is really important. Indeed, I want to talk a little more about safety, because security and safety are slightly different. I see safety as consumer facing, but security is so important. Renaming the AI Safety Institute as the AI Security Institute, as the hon. Member mentioned, undermines the importance of both.

The first point is about AI psychosis and chatbots—this has been covered a lot today, and it is incredibly worrying. My understanding is that the problem of emotional dependency on AI chatbots is not covered by the Online Safety Act. Yes, elements of chatbots are covered—search functionality and user-to-user interaction, for example—but Ofcom itself has said that there are certain harms from AI chatbots, which we can talk about, that are not covered. We have heard that 1.2 million users a week are talking to ChatGPT about suicide—we heard the example of Adam, who took his own life in the US after talking to a chatbot—and two thirds of 23 to 34-year-olds are turning to chatbots for their mental health. These are real harms.

Of course, the misinformation that is coming through chatbots also has to be looked at seriously. The hon. Member for York Outer (Mr Charters) mentioned the facts and the advice coming through. We can achieve powerful outcomes, but we need to make sure that chatbots are built in a way that ensures that advisory element, perhaps by linking with NHS or other proper advice.

The hon. Member for Milton Keynes Central (Emily Darlington), who has been very passionate about this issue, mentioned the Molly Rose Foundation, which is doing incredible work to show the harms coming through this black hole—many do not see the harms, which have an impact on children that parents do not understand, as well as on adults.

The harm of deepfakes, including horrific CSAM and sexual material of all ages, has also been mentioned, and it is also impacting our economy. Just recently, a deepfake was unfortunately made of the hon. Member for Mid Norfolk (George Freeman). The Sky journalist Yalda Hakim was also the victim of a deepfake. She mentioned her worry that it was shared thousands of times, but also picked up by media in the subcontinent. These things are coming through, and no one who watches them can tell the difference. It is extremely worrying.

As the hon. Member for Congleton (Sarah Russell) said, “Rubbish in, rubbish out.” What is worrying is that, as the Internet Watch Foundation has said, because a lot of the rubbish going in is online sexual content that has been scraped, that is what is coming out.

Then there is AI slop, as the right hon. Member for Oxford East (Anneliese Dodds) mentioned. Some of that is extreme content, but what worries me is that, as many may know, our internet is now full of AI slop—images, stories and videos—where users just cannot tell the difference. I do not know about others, but I often look at something and think, “Ah, that’s really cute. Oh no—that is not real.” What is really insidious is that this is breaking down trust. We cannot tell any more what is real and what is not, and that affects trust in our institutions, our news and our democracy. What we say here today can be changed. Small changes are breaking down trust, and it is really important that that stops. What is the Minister doing about AI labelling and watermarking, to make sure we can trust what we see? That is just one small part of it.

The other thing, which my hon. Friend the Member for Newton Abbot (Martin Wrigley) mentioned, is that often AI threats magnify what is already a threat, whether it is online fraud or a security threat. I believe that AI scams in just the first three months of this year cost Brits £1 billion. One third of UK businesses said in the first quarter they had been victims of AI fraud. And I have not got on to what the hon. Member for Dewsbury and Batley said about moving towards AI in security and defence, and superintelligence. What are the “exaggerated” threats that actually will become extremely threatening? What are the Government doing to clamp down on these threats, and what are they doing on AI fraud and online safety?

Another issue is global working. One of the Liberal Democrats’ calls is for an AI safety agency, which could be headquartered in the UK; we could take the lead on it. I think that is in line with what the hon. Member for Dewsbury and Batley was talking about. We have this opportunity; we need to take it seriously, and we could be a leader on that.

I will close by reiterating the incredible work that AI could do. We all know that it could solve the biggest problems of tomorrow, and it could improve our wellbeing and productivity, but the threats and risks are there. We have to manage them now, and make sure that trust is built on both sides.

--- Later in debate ---
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)

It is a pleasure to serve with you in the Chair, Ms Butler, for my first Westminster Hall debate. It is a particular pleasure not only to have you bring your technological expertise to the Chair, but also to have the hon. Member for Strangford (Jim Shannon) reliably present in my first debate, as well as the UK’s—perhaps the world’s—first AI MP, my hon. Friend the Member for Leeds South West and Morley (Mark Sewards). It is a distinct pleasure to serve with everyone present and the expertise they bring. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate on AI safety. I am grateful to him and to all Members for their very thoughtful contributions to the debate.

It is no exaggeration to say that the future of our country and our prosperity will be led by science, technology and AI. That is exactly why, in response to the question on growth posed by the hon. Member for Runnymede and Weybridge (Dr Spencer), we recently announced a package of new reforms and investments to use AI to power national renewal. We will drive growth through developing new AI growth zones across north and south Wales, Oxfordshire and the north-east, creating opportunities for innovation by expanding access to compute for British researchers and scientists.

We are investing in AI to drive breakthroughs in developing new drugs, cures and treatments. But we cannot harness those opportunities without ensuring that AI is safe for the British public and businesses, nor without agency over its development. I was grateful for the points made by my hon. Friend the Member for Milton Keynes Central (Emily Darlington) on the importance of standards and the hon. Member for Harpenden and Berkhamsted (Victoria Collins) about the importance of trust.

That is why the Government are determined to make the UK one of the best places to start a business, to scale up, to stay on our shores, especially for the UK AI assurance and standards market. Our trusted third-party AI assurance roadmap and AI assurance innovation fund are focused on supporting the growth of UK businesses and organisations providing innovative AI products that are proven to be safe for sale and use. We must ensure that the AI transformation happens not to the UK but with and through the UK.

Consistent with the points raised by my hon. Friend the Member for Milton Keynes Central, that is why we are backing the sovereign AI unit, with almost £500 million in investment, to help build and scale AI capabilities on British shores, which will reflect our country’s needs, values and laws. Our approach to those AI laws seeks to ensure that we balance growth and safety, and that we remain adaptable in the face of inevitable AI change.

On growth, I am glad to hear the points made by my hon. Friend the Member for Leeds South West and Morley about a space for businesses to experiment. We have announced proposals for an AI growth lab that will support responsible AI innovation by making targeted regulatory modifications under robust safeguards. That will help drive trust by providing a tightly defined, safe space for experimentation and trialling of innovative products and services. Regulators will monitor that very closely.

On safety, we understand that AI is a general-purpose technology with a wide range of applications. In recognition of the contribution from the hon. Member for Newton Abbot (Martin Wrigley), I reaffirm some of the points he made about being thoughtful in regulatory approaches that distinguish between the technology and the specific use cases. That is why we believe that the vast majority of AI should be regulated at the point of use, where the risk arises and where tractable action is most feasible.

A range of existing rules already applies to those AI systems in application contexts. Data protection and equality legislation protect the UK public’s data rights. They prevent AI-driven discrimination where the systems decide, for example, who is offered a job or credit. Competition law helps shield markets from AI uses that could distort them, including algorithmic collusion to set unfair prices.

Sarah Russell

As a specialist equality lawyer, I am not currently aware of any cases in the UK around the kind of algorithmic bias that I am talking about. I would be delighted to see some, and delighted to see the Minister encouraging that, but I am not sure that the regulatory framework would achieve that at present.