AI Safety Debate
Westminster Hall
Mr Luke Charters (York Outer) (Lab)
It is a pleasure to serve under your chairship, Ms Butler, especially given your distinguished background in tech and AI advocacy.
May I share something that hon. Members will be pleased to hear? I am, in fact, the youngest parent in Parliament, and I am constantly thinking about the place that my young boys are set to grow up in. We expect that AI will form a core part of their lives, even in primary school, which is hard to imagine. The more I think about AI, however, the more convinced I am that we should introduce it in the key stage 2 curriculum, alongside vital safeguards. After all, I was learning to use Google Search at around that age. What is different about using Gemini today?
There are potential harms: we have heard the deeply tragic story of a young boy in the US who sadly took his own life. But AI can also be a force for good. The more that children learn about AI and use it for activities such as homework and coursework, the more I believe we should not punish them for using it. Instead, in the future, we should allow students to use AI in some exams to test their AI skills. I met students at Fulford school and York college in my constituency, and their message was, “Don’t punish us for using AI when it’s going to become a key part of our employment in the future. Teach us to use it responsibly. Teach us to use it when we come to our employment.” If we do not make that shift now, we will face a productivity puzzle in the future.
I will move on to the issue of physical illness. We have all been poorly, and we have all picked up an iPhone to check our symptoms. As hon. Members can tell, my kids are my greatest treasure, but when parents put their children’s symptoms into AI, they are placing a lot of trust in AI models. I urge the Government to work with the Department of Health and Social Care and the NHS to make sure that AI chatbots and tools cite the NHS as a single source of truth, rather than health advice from outside this country.
I will touch on mental health as well, because around a quarter of children now use AI chatbots for their mental health. We cannot pretend that people will not use AI as a tool for mental health support, and in particular, blokes out there might well use AI as a first port of call to unpack what they are going through. That should be welcome, but it comes with a great responsibility for the AI tools—Gemini, ChatGPT, Perplexity and so on—to get things right. I urge the companies that make those tools to work with the Government and the charitable sector, including great charities such as Samaritans, to do that.
We have to embrace AI. There are great opportunities, but there need to be safeguards and support. With that in mind, Britain can be a world leader in AI safety.
Mark Sewards (Leeds South West and Morley) (Lab)
It is a pleasure to serve under your chairship, Ms Butler. I congratulate the hon. Member for Dewsbury and Batley (Iqbal Mohamed) on securing this timely debate.
In August, I launched the first AI prototype of a British MP. It was built by my constituent, Jeremy Smith, who ran an AI start-up in my constituency. I will go to almost any lengths to support a local business in Leeds South West and Morley. This was an online MP that anyone could talk to at any time. Jeremy said my constituents would benefit from two versions of me, including one that never sleeps (although, with children aged four and one, I am not sure that is a useful distinction).
Questions were converted into text, an answer was generated quickly, and the answer was then rendered in my voice for users. The replica was impressive, although I did sound a bit too posh and angry when I did not know the answer. AI Mark not only had my voice; we also fed it my policy stances and typical casework answers. I saw it as a clever voicemail system designed to handle common casework queries when my office was closed; it was never going to be a replacement for me or for my excellent casework team. How, though, does it relate to safety? We have all seen AI models that break, say outrageous things or hallucinate.
We created what I called the “guardrails”: the limits on what AI Mark could say. That created a problem: when the guardrails were lower, AI Mark was very interesting to talk to. He would create Tinder dating profiles on demand; he wrote incorrect haikus about the hon. Member for Clacton (Nigel Farage); and he gave the population of Vietnam and tried to predict the weather there, too.
Mr Charters
Does my hon. Friend think the Whips would prefer the real Mark, or the AI Mark?
Mark Sewards
My hon. Friend tempts me to say something I am not allowed to in this place, so I will say that they absolutely would prefer me—of course they would.
AI Mark could also be exploited to say things that simply were not true, so I raised the guardrails to reduce the risk before I released him to the public, but that made him significantly less useful. He responded only to key phrases and stuck to the content that I had fed him, which made it much harder to distinguish him from a normal chatbot.
Usefulness and safety will clearly be a balancing act as this technology develops. We know AI can be dangerous—we have heard the arguments today—but we have also seen its potential. I have seen its potential. If we want systems that are both safe and useful, businesses need the space to experiment, and I ask the Minister in his summing-up to confirm the Government’s current approach to this.
Now, I am not arguing for a free-for-all; to be clear, we need proportionate regulation and effective oversight. That much is obvious. I will just say that AI Mark did not actually save me any time. I read all of the thousands of transcripts that came through myself; I did not delegate that to anyone else, and it created far more work for me. I could have refined the model to operate well within the guardrails I had set for him, but I was not willing to ask my team to put aside time to refine it when we had real casework to deal with immediately. That is why I took the decision this month to shut AI Mark down.
There is space for a business to take up the baton and carry this forward, because the technology is incredible and the potential is real, but that is all it is for now: potential. I will finish with this: one person from Ukraine, or at least from a Ukrainian IP address, tried to get AI Mark to declare support for repressive regimes. Because of the guardrails that we put in place, he held firm in his love for democracy, just as I am sure everyone else here would.