Question to the Home Office:
To ask the Secretary of State for the Home Department, what discussions her Department has had with relevant stakeholders on the use of proactive technology to (a) identify and (b) tackle (i) deepfakes and (ii) AI generated (A) intimate image abuse and (B) child sexual abuse images.
The Home Office actively engages with relevant stakeholders on the use of proactive technology to identify and tackle AI-enabled harms, including deepfakes, intimate image abuse and child sexual abuse images.
Working in partnership with the Department for Science, Innovation and Technology, the Alan Turing Institute, and the Accelerated Capability Environment, the Home Office has led the Deepfake Detection Challenge. This initiative brought together experts and stakeholders to develop and evaluate detection tools, which are essential in addressing serious harms including online child sexual abuse. As offenders increasingly exploit AI, we must harness its potential for good.
A key outcome has been the creation of a tool that enables scientific evaluation of detection technologies, offering actionable metrics to support informed procurement decisions and helping end users select the most effective solutions. This capability is now being considered as a potential global standard, and the next phase will continue to identify and benchmark AI-driven solutions.
In addition, we are engaging with industry across the AI ecosystem, recognising their vital role in mitigating and preventing AI-enabled harms.
The Home Office has also introduced world-leading measures, making the UK the first country to criminalise the possession, creation and distribution of AI tools used to generate child sexual abuse material, as well as the possession of paedophile manuals that instruct others on creating such tools.
The Government remains committed to investing in innovation to combat these appalling crimes and will continue to collaborate with relevant stakeholders to do so.