Offences against Children: Artificial Intelligence

(asked on 17th December 2025)

Question to the Home Office:

To ask the Secretary of State for the Home Department, what assessment she has made of the potential impact of the use of AI by child sexual abuse offenders on levels of offending.


Answered by
Jess Phillips
Parliamentary Under-Secretary (Home Office)
This question was answered on 9th January 2026

The Government recognises the serious and evolving threat posed by artificial intelligence being misused by offenders for child sexual abuse.

AI-generated child sexual abuse material is not a victimless crime; it often depicts real children, increasing the risk of contact abuse. The volume and realism of this material can make it increasingly challenging for safeguarding partners to identify and protect children. Offenders can also use these images to groom and blackmail children.

In September 2025, the Internet Watch Foundation revealed, for the first time, child sexual abuse images linked directly to AI chatbots, including examples designed to simulate sexual scenarios with child avatars.

We know offenders will seek every opportunity to exploit emerging and established technologies to facilitate their offending.

UK law is explicit: child sexual abuse is illegal. We must all play our part to prevent this technology being misused to target our children.

This is why the UK Government has taken world-leading action to tackle this threat.

Working in partnership with the Department for Science, Innovation and Technology, the Alan Turing Institute, and the Accelerated Capability Environment, the Home Office has led the Deepfake Detection Challenge. This initiative brought together experts and stakeholders to develop and evaluate detection tools, which are essential in addressing serious harms including online child sexual abuse. As offenders increasingly exploit AI, we must harness its potential for good.

A key outcome is the UK Government Benchmarking capability, enabling scientific evaluation of detection technologies. The next phase will continue to identify and benchmark AI-driven solutions.

Under the Crime and Policing Bill, creating, possessing, or distributing AI tools for child sexual abuse will carry penalties of up to five years’ imprisonment, with up to three years for “paedophile manuals” on how to use AI to abuse children.

We have recently announced a further amendment to the Crime and Policing Bill to empower authorised bodies, including AI developers and child protection organisations, to scrutinise AI systems to prevent them generating harmful content. This will help to improve safeguards within AI models to prevent them being misused to create child abuse material.

We recognise there are concerns about AI chatbots, or AI companions, and the risks of harm they may pose to children. At the recent Science, Innovation and Technology Committee, we confirmed that we are considering whether all AI chatbots are covered by the Online Safety Act and what more may need to be done. If legislation is required, we will bring it forward.

Where AI services fall under the Online Safety Act as user-to-user services or online search providers, companies are required to provide highly effective age assurance to protect children from exposure to harmful or inappropriate content.

The Online Safety Act lays the foundation for a safer online experience for children, but this is just the start of the conversation.

Our approach combines robust legislation, proactive technology safeguards, and international cooperation to keep children safe online and we will not hesitate to go further.
