Crime and Policing Bill Debate

Department: Home Office


Baroness Berger Excerpts
Thursday 27th November 2025


Lords Chamber
Baroness Morgan of Cotes (Non-Afl)

My Lords, I support the amendments of the noble Baroness, Lady Kidron. I was pleased to add my name to Amendments 266, 479 and 480. I also support the amendment proposed by the noble Lord, Lord Nash.

I do not want to repeat the points that were made—the noble Baroness ably set out the reasons why her amendments are very much needed—so I will make a couple of general points. As she demonstrated, what happens online has what I would call real-world consequences—although I was reminded this week by somebody much younger than me that of course, for the younger generation, there is no distinction between online and offline; it is all one world. For those of us who are older, it is worth remembering that, as the noble Baroness set out, what happens online has real-world, and sadly often fatal, consequences. We should not lose sight of that.

We have already heard many references to the Online Safety Act, which was inevitable. We all knew, even as we were debating the Bill before it was enacted, that there would have to be an Online Safety Act II, and no doubt other versions as well. As we have heard, technology is changing at an enormously fast rate, turbocharged by artificial intelligence. The Government recognise that in Clause 63. But surely the lesson from the past decade or more is that, although technology can be used for good, it can also be used to create and disseminate deeply harmful content. That is why the arguments around safety by design are absolutely critical, yet they have been lacking in some of the regulation and enforcement that we have seen. I very much hope that the Minister will be able to give the clarification that the noble Baroness asked for on the status of LLMs and chatbots under the Online Safety Act, although he may not be able to do so today.

I will make some general points. First, I do not think the Minister was involved in the debate on and scrutiny of—particularly in this Chamber—what became the Online Safety Act. As I have said before, it was a master class in what cross-party, cross-House working can achieve, in an area where, basically, we all want to get to the same point: the safety of children and vulnerable people. I hope that the Ministers and officials listening to and involved in this will work with this House, and with Members such as the noble Baroness who have huge experience, to improve the Bill, and no doubt lay down changes in the next piece of legislation and the one after that. We will always be chasing after developments in technology unless we are able to get that safety-by-design and preventive approach.

During the passage of the then Online Safety Bill, a number of Members of both Houses, working with experienced and knowledgeable outside bodies, spotted the harms and loopholes of the future. No one has all the answers, which is why it is worth working together to try to deal with the problems caused by new and developing technology. I urge the Government not to play belated catch-up as we did with internet regulation, platform regulation, search-engine regulation and more generally with the Online Safety Act. If we can work together to spot the dangers, whether from chatbots, LLMs, AI-generated CSAM or deepfakes, we will do an enormous service to young people, both in this country and globally.

Baroness Berger (Lab)

My Lords, I support Amendments 479 and 480, which seek to prevent chatbots producing illegal content. I also support the other amendments in this group. AI chatbots are already producing harmful, manipulative and often racist content. They have no age protections and no warnings or information about the sources being used to generate the replies. Nor is there a requirement to ensure that AI does not produce illegal content. We know that chatbots draw their information from a wide range of sources that are often unreliable and open to manipulation, including blogs, open-edit sites such as Wikipedia, and messaging boards, and as a result they often produce significant misinformation and disinformation.

I will focus on one particular area. As we have heard in the contributions so far, we know that some platforms generate racist content. Looking specifically at antisemitism, we can see Holocaust denial, praise of Hitler and deeply damaging inaccuracies about Jewish history. We see Grok, the AI chatbot on the X platform, generating numerous antisemitic comments, denying the scale of the Holocaust, praising Adolf Hitler and, as recently as a couple of months ago, using Jewish-sounding surnames in the context of hate speech.

Impressionable children and young people, who may not know how to check the validity of the information they are presented with, can so easily be manipulated when exposed to such content. This is particularly concerning when we know that children as young as three are using some of these technologies. We have already heard about how chatbots in particular are designed in this emotionally manipulative way, in order to boost engagement. As we have heard—it is important to reiterate it—they are sycophantic, affirming and built to actively flatter.

If you want your AI chatbot or platform not to flatter you, you have to specifically go to the personalisation page, as I have done, and be very clear that you want responses that focus on substance over praise, and that it should skip compliments. Otherwise, these platforms are designed to act completely the other way. If a person acted like this in some circumstances, we would call it emotional abuse. These design choices mean that young people, teens and children, can become overly trusting and, as we have heard in the cases outlined, reliant on these bots. In the most devastating cases, we know that this focus on flattery has led to people such as Sophie Rottenberg and 16-year-old Adam Raine in America taking their own lives on the advice of these AI platforms. Assisting suicide is illegal, and we need to ensure that this illegality extends to chatbots.

--- Later in debate ---
Lord Hanson of Flint (Lab)

If I may, I will take away those comments. I am responsible for many things in this House, including the Bill, but some of those areas fall within other ministerial departments. I am listening to what noble Lords and noble Baronesses are saying today.

Currently, under the Online Safety Act, providers of those services are required to undertake appropriate risk assessments and, under the Act’s illegal content duties, platforms must implement robust and timely measures to prevent illegal content appearing on their services. All in-scope providers are expected to have effective systems and processes in place to ensure that the risks of their platform being used for the types of offending mentioned today are appropriately reduced.

Ofcom currently has a role that is focused on civil enforcement of duties on providers to assess and mitigate the risks posed by illegal content. While Ofcom may bring prosecutions in some circumstances, it will do so only in relation to regulatory matters where civil enforcement is insufficient. The proposed approach is not in line with the current enforcement regime under the Act, which is the responsibility of Ofcom and DSIT.

Baroness Berger (Lab)

My noble friend is making really important comments in this regard, but on the specific issue of Ofcom, perhaps fuelling much of the concern across the Committee are the comments we have heard from Ofcom. I refer to a briefing from the Molly Rose Foundation, which I am sure other noble Lords have received, which says that uncertainty has been “actively fuelled” by the regulator Ofcom, which has told the Molly Rose Foundation that it intends to maintain “tactical ambiguity” about how the Act applies. That is the very issue that unites us in our concern.

Lord Hanson of Flint (Lab)

I am grateful to my noble friend for that and for her contribution to the debate and the experiences she has brought. The monitoring and evaluation of the online safety regime is a responsibility of DSIT and Ofcom, and they have developed a framework to monitor the implementation of the Act and evaluate core outcomes. This monitoring and evaluation is currently tracking the effect of the online safety regime and feeding into a post-implementation review of the 2023 Act. Where there is evidence of a need to go further to keep children safe online, including from AI-enabled harms, the Government will not hesitate to act.

If the noble Baroness, Lady Kidron, will allow DSIT and Ofcom to look at those matters, I will make sure that DSIT Ministers are apprised of the discussion that we have had today. It is in this Bill, which is a Home Office Bill, but it is important that DSIT Ministers reflect on what has been said. I will ensure that we try to arrange that meeting for the noble Baroness in due course.

I want also to talk about Amendments 271A and 497ZA from the noble Lord, Lord Nash, which propose that smartphone and tablet manufacturers, importers and distributors be required to ensure that any device they supply is preinstalled with technology that prevents the recording and viewing of child sexual abuse material or similar material. I acknowledge the noble Lord’s very valid intention of protecting children and preventing the spread of child sexual abuse material online. To that end, the Government share his view on the need to strengthen our already world-leading online safety regime wherever necessary.

I put to the noble Lord, and to the noble Lord, Lord Bethell, on his comments in support, that if nudity detection technology could be effectively deployed at scale, it could significantly limit the production and sharing of child sexual abuse material. I accept that, but we must get this right. Technology that detects and blocks all nudity, adult and child, but which is primarily targeted at children, would be an effective intervention. I and colleagues across government want to gather evidence about the application of such technology and its effectiveness and impact. However, our assessment is that further work is needed to understand the accuracy of such tools and how they may be implemented.

We must also consider the risks that could arise from accepting this amendment, including legitimate questions about user privacy and data security. If it helps the noble Lord, Lord Nash, we will continue to assess the effect of detection tools on the performance of mobile devices so that we can see how easy it is to circumvent them, how effective they are and a range of other matters. The Government’s focus is on protective measures within the Online Safety Act, but in parallel we are actively considering the potential benefits of the technology that the noble Lord has mentioned and others like it. There will be further government interventions in future, but they must be proportionate and driven by evidence. At the moment, we do not have sufficient evidence to accept the amendment from the noble Lord, but the direction of travel is one that we would support.