Artificial Intelligence: Safeguarding Debate
Baroness Deech (Crossbench - Life peer)
Debate with the Department of Health and Social Care
Lords Chamber

I understand the need for training, as the noble Baroness rightly outlines, but I would emphasise that AI chatbots are in scope of the Act, as I mentioned just now to my noble friend. What matters is the fact that they actually search the live internet. The point the noble Baroness raises is very important, and it is also about literacy in terms of using the internet, equipping individuals to try to stay safe, and safeguarding those who are more vulnerable, as she describes; training is certainly part of that.
My Lords, I have consulted ChatGPT on this. It calls me “dear Ruth”, and it says that when people write to it about suicide, it responds with empathy and compassion. It does not encourage suicide, and it directs them to a guide to human support. I do not want to make light of this or condemn it outright. On the contrary, there may be something to be said, certainly at a light level, for unhappy people consulting ChatGPT. I do not want to discourage or limit freedom of speech any further than it is already limited. There may be some help for people in ChatGPT.
The noble Baroness makes a helpful challenge about how to regard AI services. Generative AI can indeed offer opportunities to enhance mental health support, and the National Health Service is looking at how we can, particularly through the NHS app, assist and support people. But such technologies must not replace trained professionals, including in crisis situations. It is about getting the right support, at the right place, at the right time—that is a delicate balance, but we should use AI for its great benefits.