Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the potential merits of making UK AISI / Thorn's guidance, Recommended Practice for AI-G CSEA Prevention, published in December 2025, mandatory for all AI developers to prevent the creation of AI-generated child sexual abuse material.
The Government recognises the importance of tackling AI-generated CSAM. Creating, possessing, or distributing CSAM, including AI-generated CSAM, is illegal. The Online Safety Act requires services to proactively identify and remove this content. We are taking further action in the Crime and Policing Bill to criminalise CSAM image generators, and to ensure AI developers can directly test for and address vulnerabilities in their models that enable the production of CSAM.
The joint AISI/Thorn guidance (Recommended Practice for AI-G CSEA Prevention) sets out practical steps that AI developers, model hosting services and others in the AI ecosystem can take to reduce the risk that their systems are misused to generate CSAM. The guidance is informed by input from industry and child protection organisations, and many of the world’s leading AI developers (including OpenAI, Anthropic, Google and Meta) have signed up to the principles of earlier versions of this guidance.
The Government is clear that no option is off the table when it comes to protecting the online safety of users in the UK, and we will not hesitate to act where evidence suggests that further action is necessary.