Caroline Dinenage (Conservative - Gosport)
Commons Chamber
I am very grateful to the Minister for giving way on that point. I am not sure whether she will come on to this, but the Government have tabled amendments on online safety, and have identified that the next frontline in this war is artificial intelligence. As she knows, we have already seen children taking their own lives after interactions with AI chatbots, and we know that tech companies will always prioritise profits over user safety, so there must be more focus on a safety-by-design approach that prevents AI products that could be harmful to users from coming to market. This approach has been suggested by Baroness Kidron in the other place. Why are the Government not supporting her amendment?
I thank the hon. Lady for her intervention. She is, of course, right about the growing concern around chatbots and the need for safety by design. I will come on to Baroness Kidron’s amendment and the Government’s response to it later on in my speech.
Furthermore, the Government have brought forward Lords amendment 367 to take a power to extend the scope of the Online Safety Act 2023 to cover unregulated AI chatbots. It means that general-purpose AI chatbots, such as Grok, which allow the creation and sharing of non-consensual intimate images, will have to proactively remove that illegal content from their services or face enforcement from Ofcom. Taken together, the measures will deliver an effective ban on nudification tools. Given that, we do not believe that a separate possession offence, as provided for in Lords amendment 505, would make a meaningful difference, not least as many such tools are accessed online, rather than possessed.
Where a person is convicted of an intimate image offence, we agree that it is vital that those images are deleted from the perpetrator’s devices. Amendment (a) in lieu of Lords amendment 258 enables the courts to make an image deletion order following a conviction for an offence related to intimate image abuse. Breach of the order will be a criminal offence. The amendment also enables the courts to require the deletion of other intimate images of the same victim. This approach gives courts the required flexibility to consider the details of each case when applying their powers, while ensuring that the offenders are held accountable for compliance with the order.
I appreciate the challenge that the right hon. Gentleman is raising, and I know that DUP Members of Parliament in particular have raised these concerns before. The challenge here is that Lords amendment 357 would remove the historical safeguard for statements that glorify acts of terrorism committed by proscribed organisations. Our view is that these statements may not necessarily create terrorist risk and may result in the offence capturing legitimate political and social discourse and debate.
I will say two other things to the right hon. Gentleman. First, the independent reviewer of terrorism legislation, Jonathan Hall KC, strongly advised against the removal of the historical safeguard in his review of terrorism legislation following the 7 October attacks in 2023. Secondly, in the light of the concerns that have been raised in the Lords and by Members in this place, the Government will ask the independent reviewer to conduct a more detailed review of the encouragement offence within six months of Royal Assent.
Let me turn to Lords amendment 359. It is a long-standing principle that has been adopted by successive Administrations that the Government do not comment on which organisations are being considered for proscription. Mandating that the Government review whether to proscribe Iranian Government-related organisations would violate this principle and tie the Government’s hands unnecessarily. The Government are already taking decisive action to deter threats from Iran, and we have committed to introducing a new state threats-based proscription tool.
I turn now to Lords amendments 360 and 368 to 372, tabled by Baroness Kidron, which concern chatbots. The Government are clear that we need to act quickly to bring all unregulated AI chatbots within the scope of the Online Safety Act’s requirements on illegal activity. As I mentioned earlier, the Government are seeking to take a regulation-making power to do this, under Lords amendment 367. By taking this power, the Government will be able to remove any ambiguity over whether services like Grok are subject to the Online Safety Act’s provisions to tackle illegal content. This approach also allows us to design regulations that are effective, targeted and informed by the necessary consultation with subject matter experts. Amendment (a) in lieu of Lords amendment 372 commits the Government to reporting to Parliament by the end of the year on our progress in developing regulations.
I do not mean to bang on about this, but the fact is that the Government’s approach is too narrow. It is focused on taking down illegal content when it should be the responsibility of the company to prevent harms in the first place, rather than to deal with them after the event. We do not design any other sector’s regulation in this way. When designing aircraft, we do not wait until after the plane has crashed to worry about the safety features. The same principle should apply here.
During Report stage in the Lords, peers voted overwhelmingly in support of the safety-by-design approach. They also understood that when it comes to the design of something, harm includes building in aspects that are addictive and manipulative, which have been key to some of the very tragic suicides of children who have interacted with AI chatbots. What do the Government have against building safety by design into the very purpose of AI chatbots?
The hon. Lady makes her case very clearly, and we can agree that we need to design out those kinds of issues. The challenge lies in what we do and how we do it, and that is the difficulty we had with this particular group of amendments. There is, of course, wider work being done on violence against women and girls and on how the Online Safety Act will be taken forward, and that work is really important, but we are talking about this particular group of Lords amendments on chatbots and the challenges they present. That is why, through amendment (a) in lieu, we commit to reporting by the end of the year on our progress in developing regulations.
We are clear that regulation is a more effective and proportionate tool than the criminal law for addressing risks from AI chatbots and setting industry best practice. Incorporating currently unregulated chatbots into the scope of the Online Safety Act will ensure that such regulation applies extraterritorially, which is crucial when dealing with international companies.
The Government’s approach is also broader in scope than the content of amendments 360 and 368 to 372. Those amendments would not capture image generators creating non-consensual graphic images of women or online AI chatbot toys such as Gabbo. The Government’s amendment in lieu does capture such services and allows them to be clearly brought under online safety regulations.