Crime and Policing Bill Debate

Department: Home Office

Thursday 27th November 2025

Lords Chamber
Viscount Colville of Culross (CB)

My Lords, I put my name to Amendments 479 and 480, and I support the other amendments in this group. I have once again to thank my noble friend Lady Kidron for raising an issue which I had missed and which, I fear, the regulator might have missed as well. After extensive research, I too am very worried about the Online Safety Act, which many of your Lordships spent many hours refining. It does not cover some of the new developments in the digital world, especially personalised AI chatbots. They are hugely popular with children under 18; 31% use Snapchat’s My AI and 32% use Google’s Gemini.

The Online Safety Act Network set up an account on ChatGPT-5 using a 13-year-old persona. Within two minutes, the chatbot was engaged with the user about mental health, eating disorders and advice about how to cut yourself safely. Within 40 minutes, it had generated a list of pills for overdosing. The OSA was intended to stop such online behaviour. Your Lordships worked so hard to ensure that the OSA covered search and user-to-user functions in the digital space, but AI chatbots have varied functionalities that, as my noble friend pointed out, are not clearly covered by the legislation.

My noble friend Lady Kidron pointed out that, although Dame Melanie Dawes confirmed to the Communications and Digital Committee that chatbots are covered by the OSA, Ofcom in its paper Era of Answer Engines admits:

“Under the OSA, a search service means a service that is, or which includes, a search engine, and this applies to some (though not all) GenAI search tools”.


There is doubt about whether the AI interpretive process, which can alter the original search findings, takes such a tool outside the scope of search under the OSA. More significantly, AI chatbots are not covered where the provider creates content that is personalised for one user and cannot be forwarded to another user. I am advised that this is not a user-to-user service as defined under the Act.

One chatbot that seems to fall under this category is Replika. I had never heard of it until I started my research for this amendment. However, 2% of all children aged nine to 17 say that they have used the chatbot, and 18% have heard of it. Its aim is to simulate human interaction by creating a replica chatbot personal to each user. It is very sophisticated in its output, using avatars to create images of a human interlocutor on screen and a speaking voice to reply conversationally to requests. The concern is that, unlike traditional search engines, it is programmed for sycophancy or, in other words, to affirm and engage with the user's responses: the more positive the response, the more engaged the child user. This has led to conversations in which the AI companion talks the child user into self-harm and even suicidal ideation.

Research by Internet Matters found that a third of child users think that interacting with chatbots is like talking to a friend. Most concerning is the level of trust they generate in children, with two in five saying that they have no concerns about the advice they are getting. However, because the replies are supposed to be positive, what might have started as trustworthy advice develops into unsafe advice as the conversation continues. My concern is that chatbots are not only reinforcing the echo chambers that we have seen developing for over a decade as a result of social media polarisation but are reducing yet further children's critical faculties. We cannot leave the development of critical faculties to the already inadequate media literacy campaigns that Ofcom is developing. The Government need to discourage sycophancy and a lack of critical thinking at their digital source.

A driving force behind the Online Safety Act was the realisation that tech developers were prioritising user engagement over user safety. Once again, we find new AI products that are based on the same harmful principles. In looking at the Government's headlong rush to surrender to tech companies in the name of AI growth, I ask your Lordships to read the strategic vision for AI laid out in the AI Opportunities Action Plan. It focuses on accelerating innovation but fails even once to mention children's safety. Your Lordships have fought hard to make children's safety a priority online in legislation. Once again, I ask for these amendments to be scrutinised by Ofcom and the Government to ensure that children's safety is at the very centre of their thinking as AI develops.

Baroness Morgan of Cotes (Non-Afl)

My Lords, I support the amendments of the noble Baroness, Lady Kidron. I was pleased to add my name to Amendments 266, 479 and 480. I also support the amendment proposed by the noble Lord, Lord Nash.

I do not want to repeat the points that were made—the noble Baroness ably set out the reasons why her amendments are very much needed—so I will make a couple of general points. As she demonstrated, what happens online has what I would call real-world consequences—although I was reminded this week by somebody much younger than me that of course, for the younger generation, there is no distinction between online and offline; it is all one world. For those of us who are older, it is worth remembering that, as the noble Baroness set out, what happens online has real-world, and sadly often fatal, consequences. We should not lose sight of that.

We have already heard many references to the Online Safety Act, which is inevitable. We all knew, even as we were debating the Bill before it was enacted, that there would have to be an Online Safety Act II, and no doubt other versions as well. As we have heard, technology is changing at an enormously fast rate, turbocharged by artificial intelligence. The Government recognise that in Clause 63. But surely the lesson from the past decade or more is that, although technology can be used for good, it can also be used to create and disseminate deeply harmful content. That is why the arguments around safety by design are absolutely critical, yet they have been lacking in some of the regulation and enforcement that we have seen. I very much hope that the Minister will be able to give the clarification that the noble Baroness asked for on the status of LLMs and chatbots under the Online Safety Act, although he may not be able to do so today.

I will make some general points. First, I do not think the Minister was involved in the debate on and scrutiny of—particularly in this Chamber—what became the Online Safety Act. As I have said before, it was a master class in what cross-party, cross-House working can achieve, in an area where, basically, we all want to get to the same point: the safety of children and vulnerable people. I hope that the Ministers and officials listening to and involved in this will work with this House, and with Members such as the noble Baroness who have huge experience, to improve the Bill, and no doubt lay down changes in the next piece of legislation and the one after that. We will always be chasing after developments in technology unless we are able to get that safety-by-design and preventive approach.

During the passage of the then Online Safety Bill, a number of Members of both Houses, working with experienced and knowledgeable outside bodies, spotted the harms and loopholes of the future. No one has all the answers, which is why it is worth working together to try to deal with the problems caused by new and developing technology. I urge the Government not to play belated catch-up as we did with internet regulation, platform regulation, search-engine regulation and more generally with the Online Safety Act. If we can work together to spot the dangers, whether from chatbots, LLMs, CSAM-generated content or deepfakes, we will do an enormous service to young people, both in this country and globally.

Baroness Berger (Lab)

My Lords, I support Amendments 479 and 480, which seek to prevent chatbots producing illegal content. I also support the other amendments in this group. AI chatbots are already producing harmful, manipulative and often racist content. They have no age protections and no warnings or information about the sources being used to generate the replies. Nor is there a requirement to ensure that AI does not produce illegal content. We know that chatbots draw their information from a wide range of sources that are often unreliable and open to manipulation, including blogs, open-edit sites such as Wikipedia, and message boards, and as a result they often produce significant misinformation and disinformation.

I will focus on one particular area. As we have heard in the contributions so far, we know that some platforms generate racist content. Looking specifically at antisemitism, we can see Holocaust denial, praise of Hitler and deeply damaging inaccuracies about Jewish history. We see Grok, the chatbot on the X platform, generating numerous antisemitic comments, denying the scale of the Holocaust, praising Adolf Hitler and, as recently as a couple of months ago, using Jewish-sounding surnames in the context of hate speech.

Impressionable children and young people, who may not know how to check the validity of the information they are presented with, can so easily be manipulated when exposed to such content. This is particularly concerning when we know that children as young as three are using some of these technologies. We have already heard about how chatbots in particular are designed in this emotionally manipulative way, in order to boost engagement. As we have heard—it is important to reiterate it—they are sycophantic, affirming and built to actively flatter.

If you want your AI chatbot or platform not to flatter you, you have to specifically go to the personalisation page, as I have done, and be very clear that you want responses that focus on substance over praise, and that it should skip compliments. Otherwise, these platforms are designed to act completely the other way. If a person acted like this in some circumstances, we would call it emotional abuse. These design choices mean that young people—teens and children—can become overly trusting and, as we have heard in the cases outlined, reliant on these bots. In the most devastating cases, we know that this focus on flattery has led to people such as Sophie Rottenberg and 16-year-old Adam Raine in America taking their own lives on the advice of these AI platforms. Assisting suicide is illegal, and we need to ensure that this illegality extends to chatbots.