Social Media: Non-consensual Sexual Deepfakes Debate

Department: Department for Business and Trade


Wednesday 14th January 2026


Lords Chamber
Let me conclude by saying this. I believe, and the Government believe, that artificial intelligence is a transformative technology that has the power and potential to bring about extraordinary and welcome change—to create jobs and growth, to diagnose and treat diseases, to help children learn at school, to tackle climate change and so much more besides—but in order to seize those opportunities, people must feel confident that they and their children are safe online, and that AI is not being used for destructive and abusive ends. Many tech companies want to act, and are acting, responsibly, but where they do not, we must and will act. Innovation should serve humanity, not degrade it, so we will leave no stone unturned in our determination to stamp out these demeaning, degrading and illegal images. If that means strengthening existing laws, we are prepared to do that, because this Government stand on the side of decency. We stand on the side of the law. We stand for basic British values, which are supported by the vast majority of people in this country. I commend this Statement to the House”.
Viscount Camrose (Con)

My Lords, the technological capabilities and their misuse that have prompted this Statement are, needless to say, deeply disturbing and demand our careful attention. The use of AI to generate non-consensual sexual imagery of women and children is both grotesque in itself and corrosive of trust in technology more broadly.

We therefore welcome the Secretary of State’s confirmation that new offences criminalising the creation or solicitation of such material will be brought into force this week. We support the enforcement of these laws. We also welcome Ofcom’s decision to open a formal investigation into the use of Grok on X under the Online Safety Act, an investigation that must proceed swiftly to protect victims and hold platforms to account.

Hard though it is to predict the misuses of emerging technologies, we must collectively find better ways to be ready for them before they strike. I fear there is a pervasive and damaging sense of regulatory, legislative and political uncertainty around AI. As long as that remains the case, we risk remaining a victim of events beyond our control.

From the outset of this Parliament, and indeed in opposition, the Government have pledged to legislate on AI. Reviews and policy documents, including the Clifford AI Opportunities Action Plan, promised a framework to drive adoption and regulatory clarity. However, we still have no clear timeline, nor even a clear account of the Government’s policy on AI.

It is worth noting that the legislative tools the Government are now relying on to implement their proposed new offences, such as the creation and solicitation of non-consensual intimate images, are the product of amendments introduced by this House to the Data (Use and Access) Act. Ministers have repeatedly argued both that binding AI regulation must come, and that the existing multi-regulator framework is sufficient.

Evidence to the House of Commons Science, Innovation and Technology Committee late last year confirmed that the Secretary of State would not commit to a specific AI Bill, instead speaking of considering targeted interventions rather than an overarching legislative framework. This may indeed be the right approach, but its unclear presentation and communication drive uncertainty that undermines confidence for investors, businesses and regulators, and above all for citizens.

Progress on other AI-related policy commitments seems to have stalled too. I do not underestimate the difficulty of the problem, but work thus far on AI and copyright has been pretty disappointing. I am not seeking to go into that debate now, but only to make the point that it contributes to a widespread sense of uncertainty about tech in general and AI in particular.

Frankly, this uncertainty has been compounded by inconsistent political messaging. Over the weekend, reports emerged that the Government were considering banning X altogether before subsequently softening that position, creating wholly unnecessary confusion. At the same time, the Government have mischaracterised X’s decision to move its nudification tools behind a paywall as a means to boost profits, when the platform argues, reasonably persuasively, that this is a measure to ensure that those misusing the tools cannot do so anonymously.

Nor has there been much effective communication from the Government about their regulatory intentions for AI. This leaves the public and businesses unclear on how AI will be regulated and what standards companies are expected to meet. Political and legislative uncertainty in this case is having real consequences. It weakens our ability to deter misuse of AI technologies; it undermines public confidence; and it leaves regulators and enforcement agencies in a reactive posture rather than being empowered to act with a clear statutory direction.

We of course support efforts to criminalise harmful uses of AI. However, under the Government’s current Sentencing Bill, most individuals convicted of these new AI-related offences against women and girls will be liable for only suspended sentences, meaning that they could leave court free to continue using the technology that enabled their crime. This is concerning. It cannot be right that someone found guilty of producing non-consensual sexual imagery may walk free, unrestrained and with unimpeded access to the tools that facilitated their offending.

As I say, we support Ofcom’s work and the use of existing powers, but law without enforcement backed by a coherent, predictable regulatory regime will offer little real protection. Without proper sentencing, regulatory certainty and clear legislative direction for AI, these laws will not provide the protection that we need.

We urge the Government to publish a clear statement on their intentions on comprehensive AI regulation, perhaps building on the AI White Paper that we produced in government, to provide clarity for both tech companies and the public, and to underpin the safe adoption of AI across the economy and society. We must assume that new ways to abuse AI are being developed as we speak. Either we have principled, strategic approaches to deal with them, or we end up lurching from one crisis to the next.

Lord Clement-Jones (LD)

My Lords, we on the Liberal Democrat Benches welcome the Secretary of State’s Statement, as well as her commitment to bring the new offence of creating or requesting non-consensual intimate images into force and to make it a priority offence. However, why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?

My Commons colleagues have called for the National Crime Agency to launch an urgent criminal investigation into X for facilitating the creation and distribution of this vile and abusive deepfake imagery. The Secretary of State is right to call X’s decision to put the creation of these images behind a paywall insulting; indeed, it is the monetisation of abuse. We welcome Ofcom’s formal investigation into sexualised imagery generated by Grok and shared on X. However, will the Minister confirm that individuals creating and sharing this content will also face criminal investigation by the police? Does the Minister not find it strange that the Prime Minister needs to be reassured that X, which is used by many parliamentarians and government departments, will comply with UK law?

While we welcome the move to criminalise nudification apps in the Crime and Policing Bill, we are still waiting for the substantive AI Bill promised in the manifesto. The Grok incident proves that voluntary agreements are not enough. I had to take a slightly deep breath when I listened to what the noble Viscount, Lord Camrose, had to say. Who knew that the Conservative Party was in favour of AI regulation? Will the Government commit to a comprehensive, risk-based regulatory framework, with mandatory safety testing, for high-risk models before they are released to the public, of the kind that we have been calling for on these Benches for some time? We need risk-proportionate, mandatory standards, not voluntary commitments that can be abandoned overnight.

Will the Government mandate the adoption of hashing technology that would make the removal of non-consensual images possible, as proposed by the noble Baroness, Lady Owen of Alderley Edge, in Committee on the Crime and Policing Bill—I am pleased to see that the noble Lord, Lord Hanson, is in his place—and as advocated by StopNCII.org?

The Secretary of State mentioned her commitment to the safety of children, yet she has previously resisted our calls to raise the digital age of consent to 16, in line with European standards. If the Government truly want to stop companies profiteering from children’s attention and data, why will they not adopt this evidence-based intervention?

To be absolutely clear, the creation and distribution of non-consensual intimate images has nothing whatever to do with free speech. These are serious criminal offences. There is no free speech right to sexually abuse women and children, whether offline or online. Any attempt to frame this as an issue of freedom of expression is a cynical distortion designed to shield platforms from their legal responsibilities.

Does the Minister have full confidence that Ofcom has the resources and resolve to take on these global tech giants, especially now that it is beginning to ramp up the use of its investigation and enforcement powers? Will the Government ensure that Ofcom uses the full range of enforcement powers available to it? If X continues to refuse compliance, will Ofcom deploy the business disruption measures under Part 7, Chapter 6 of the Online Safety Act? Will it seek service restriction orders under Sections 144 and 145 to require payment service providers and advertisers to withdraw their services from the non-compliant platform? The public expect swift and decisive action, not a drawn-out investigation while the abuse continues. Ofcom must use every tool Parliament has given it.

Finally, if the Government believe that X is a platform facilitating illegal content at scale, why do they continue to prioritise it for official communications? Is it not time for the Government to lead by example and reduce their dependence on a platform that seems ideologically opposed to the values of decency and even perhaps the UK rule of law, especially now that we know that the Government have withdrawn their claim that 10.8 million families use X as their main news source?

AI technologies are developing at an exponential rate. Clarity on regulation is needed urgently by developers, adopters and, most importantly, the women and children who deserve protection. The tech sector can be a force for enormous good, but only when it operates within comprehensive, risk-proportionate regulatory frameworks that put safety first. We on these Benches will support robust action to ensure that that happens.