Internet: Children

(asked on 11th December 2025)

Question to the Home Office:

To ask the Secretary of State for the Home Department, what steps she is taking to help protect children from AI-generated abuse online.


Answered by
Jess Phillips
Parliamentary Under-Secretary (Home Office)
This question was answered on 17th December 2025

The Government recognises the serious and evolving threat posed by artificial intelligence being misused to create child sexual abuse material. We have taken world-leading action to address this risk.

AI-generated child sexual abuse material is not a victimless crime. The material often includes depictions of real children, escalating the risk of contact abuse. The volume and realism of this material can make it increasingly challenging for safeguarding partners to identify and protect children. Offenders can also use these images to groom and blackmail children.

Working in partnership with the Department for Science, Innovation and Technology, the Alan Turing Institute, and the Accelerated Capability Environment, the Home Office has led the Deepfake Detection Challenge. This initiative brought together experts and stakeholders to develop and evaluate detection tools, which are essential in addressing serious harms including online child sexual abuse. As offenders increasingly exploit AI, we must harness its potential for good.

A key outcome has been the creation of a UK Government Benchmarking capability which enables scientific evaluation of detection technologies, offering data to support informed procurement decisions for the most effective solutions. The next phase will continue to identify and benchmark AI-driven solutions.

Through the Crime and Policing Bill, we are introducing specific offences to make it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material, as well as so-called “paedophile manuals” that instruct offenders on how to exploit AI for abuse. These offences carry penalties of up to five years’ imprisonment for AI tools and up to three years for manuals.

We have recently announced a further amendment to the Crime and Policing Bill to empower authorised bodies, including AI developers and child protection organisations, to scrutinise AI systems to prevent them generating harmful content. This will help to improve safeguards within AI models to prevent them being misused to create child sexual abuse material.

Where AI models fall under the Online Safety Act as a user-to-user service or an online search provider, companies are required to provide highly effective age assurance to protect children from exposure to harmful or inappropriate content.

We recognise there are concerns about AI chatbots, or AI companions, and the risks of harm they may pose to children. At a recent session of the Science, Innovation and Technology Committee, we confirmed that we are considering whether all AI chatbots are covered by the Online Safety Act and what more may need to be done. If legislation is required, we will bring it forward.

We have been clear as a government that our steps so far with the Online Safety Act are the foundation for a safer online experience for children. But it is not the end of the conversation.

The UK Government will also support the hosting of an event in the new year with the NSPCC focusing on children and AI.

Our approach combines robust legislation, proactive technology safeguards, and international cooperation to keep children safe online, and we will not hesitate to go further.
