Viscount Colville of Culross
My Lords, I put my name to Amendments 479 and 480, and I support the other amendments in this group. I have once again to thank my noble friend Lady Kidron for raising an issue which I had missed and which, I fear, the regulator might have missed as well. After extensive research, I too am very worried that the Online Safety Act, which many of your Lordships spent many hours refining, does not cover some of the new developments in the digital world, especially personalised AI chatbots. They are hugely popular with children under 18; 31% use Snapchat's My AI and 32% use Google's Gemini.
The Online Safety Act Network set up an account on ChatGPT-5 using a 13-year-old persona. Within two minutes, the chatbot was engaging with the user about mental health and eating disorders and offering advice on how to cut yourself safely. Within 40 minutes, it had generated a list of pills for overdosing. The OSA was intended to stop such online behaviour. Your Lordships worked so hard to ensure that the OSA covered search and user-to-user functions in the digital space, but AI chatbots have varied functionalities that, as my noble friend pointed out, are not clearly covered by the legislation.
My noble friend Lady Kidron pointed out that, although Dame Melanie Dawes confirmed to the Communications and Digital Committee that chatbots are covered by the OSA, Ofcom in its paper Era of Answer Engines admits:
“Under the OSA, a search service means a service that is, or which includes, a search engine, and this applies to some (though not all) GenAI search tools”.
There is doubt about whether the AI interpretive process, which can change the original search findings, takes such a tool out of the scope of search under the OSA. More significantly, AI chatbots are not covered where the provider creates content that is personalised for one user and cannot be forwarded to another user. I am advised that this is not a user-to-user service as defined under the Act.
One chatbot that seems to fall into this category is Replika. I had never heard of it until I started my research for this amendment. However, 2% of all children aged nine to 17 say that they have used the chatbot, and 18% have heard of it. Its aim is to simulate human interaction by creating a replica chatbot personal to each user. It is very sophisticated in its output, using avatars to create images of a human interlocutor on screen and a speaking voice to reply conversationally to requests. The concern is that, unlike traditional search engines, it is programmed for sycophancy; in other words, to affirm and engage the user's response: the more positive the response, the more engaged the child user. This has led to conversations in which the AI companion talks the child user into self-harm and even suicidal ideation.
Research by Internet Matters found that a third of child users think that interacting with chatbots is like talking to a friend. Most concerning is the level of trust the bots generate in children, with two in five saying that they have no concerns about the advice they are getting. However, because the replies are designed to be positive, what might have started as trustworthy advice can develop into unsafe advice as the conversation continues. My concern is that chatbots are not only reinforcing the echo chambers that we have seen developing for over a decade as a result of social media polarisation but further eroding children's critical faculties. We cannot leave the development of critical faculties to the already inadequate media literacy campaigns that Ofcom is developing. The Government need to discourage sycophancy and a lack of critical thinking at its digital source.
A driving force behind the Online Safety Act was the realisation that tech developers were prioritising user engagement over user safety. Once again, we find new AI products built on the same harmful principles. In looking at the Government's headlong rush to surrender to tech companies in the name of AI growth, I ask your Lordships to read the strategic vision for AI laid out in the AI Opportunities Action Plan. It focuses on accelerating innovation but does not once mention children's safety. Your Lordships have fought hard to make children's safety a priority in online legislation. Once again, I ask for these amendments to be scrutinised by Ofcom and the Government to ensure that children's safety is at the very centre of their thinking as AI develops.
My Lords, I support the amendments of the noble Baroness, Lady Kidron. I was pleased to add my name to Amendments 266, 479 and 480. I also support the amendment proposed by the noble Lord, Lord Nash.
I do not want to repeat the points that were made—the noble Baroness ably set out the reasons why her amendments are very much needed—so I will make a couple of general points. As she demonstrated, what happens online has what I would call real-world consequences—although I was reminded this week by somebody much younger than me that of course, for the younger generation, there is no distinction between online and offline; it is all one world. For those of us who are older, it is worth remembering that, as the noble Baroness set out, what happens online has real-world, and sadly often fatal, consequences. We should not lose sight of that.
We have already heard many references to the Online Safety Act, which is inevitable. We all knew, even as we were debating the Bill before it was enacted, that there would have to be an Online Safety Act II, and no doubt other versions as well. As we have heard, technology is changing at an enormously fast rate, turbocharged by artificial intelligence. The Government recognise that in Clause 63. But surely the lesson from the past decade or more is that, although technology can be used for good, it can also be used to create and disseminate deeply harmful content. That is why the arguments around safety by design are absolutely critical, yet they have been lacking in some of the regulation and enforcement that we have seen. I very much hope that the Minister will be able to give the clarification that the noble Baroness asked for on the status of LLMs and chatbots under the Online Safety Act, although he may not be able to do so today.
I will make some general points. First, I do not think the Minister was involved in the debate on and scrutiny of—particularly in this Chamber—what became the Online Safety Act. As I have said before, it was a master class in what cross-party, cross-House working can achieve, in an area where, basically, we all want to get to the same point: the safety of children and vulnerable people. I hope that the Ministers and officials listening to and involved in this will work with this House, and with Members such as the noble Baroness who have huge experience, to improve the Bill, and no doubt lay down changes in the next piece of legislation and the one after that. We will always be chasing after developments in technology unless we are able to get that safety-by-design and preventive approach.
During the passage of the then Online Safety Bill, a number of Members of both Houses, working with experienced and knowledgeable outside bodies, spotted the harms and loopholes of the future. No one has all the answers, which is why it is worth working together to try to deal with the problems caused by new and developing technology. I urge the Government not to play belated catch-up as we did with internet regulation, platform regulation, search-engine regulation and more generally with the Online Safety Act. If we can work together to spot the dangers, whether from chatbots, LLMs, AI-generated CSAM or deepfakes, we will do an enormous service to young people, both in this country and globally.
Many AI chatbots that enable users to share content with each other or search live websites for information are within the scope of the Online Safety Act’s duties. Providers of those services—
I want to repeat what I said in my speech. There are some chatbots, such as Replika, that do not have user-to-user functionality. Each one is created for just one user, and that user cannot pass it on to any other user. There is concern that the law does not cover this and that Ofcom does not regulate it.
If I may, I will take away those comments. I am responsible for many things in this House, including the Bill, but some of those areas fall within other ministerial departments. I am listening to what noble Lords and noble Baronesses are saying today.
Currently, through Online Safety Act duties, providers of those services are required to undertake appropriate risk assessments and, under the Act’s illegal content duties, platforms must implement robust and timely measures to prevent illegal content appearing on their services. All in-scope providers are expected to have effective systems and processes in place to ensure that the risks of their platform being used for the types of offending mentioned today are appropriately reduced.
Ofcom currently has a role that is focused on civil enforcement of duties on providers to assess and mitigate the risks posed by illegal content. While Ofcom may bring prosecutions in some circumstances, it will do so only in relation to regulatory matters where civil enforcement is insufficient. The proposed approach is not in line with the current enforcement regime under the Act, which is the responsibility of Ofcom and DSIT.