Question to the Department of Health and Social Care:
To ask His Majesty's Government what steps they are taking to ensure users of artificial intelligence platforms can safely access mental health support and are protected from harmful content, such as content relating to suicide and self-harm.
We recognise the growing use of artificial intelligence (AI) platforms and the potential risks they pose, particularly when people are seeking mental health support.
The National Health Service operates within a comprehensive regulatory framework for AI, underpinned by rigorous standards for safety and effectiveness. Publicly available AI applications that are not deployed by the NHS are not regulated as medical technologies and may offer incorrect or harmful information. Users are strongly advised to exercise caution when using these technologies.
Regardless of whether content is created by AI or by humans, the Online Safety Act places robust duties on all in-scope services to prevent users from encountering illegal content, including content relating to suicide and self-harm.