AI Safety

Danny Chambers Excerpts
Wednesday 10th December 2025

Westminster Hall

Iqbal Mohamed (Dewsbury and Batley) (Ind)

I beg to move,

That this House has considered AI safety.

It is a pleasure to serve with you in the Chair, Ms Butler, and it is an honour and a privilege to open this really important debate. Artificial intelligence is the new frontier of humanity. It has become the most talked-about and invested-in technology on our planet. It is developing at a pace we have never seen before; it is already changing how we solve problems in science, medicine and industry; and it has delivered breakthroughs that were simply out of reach a few years ago. The potential benefits are real, and we are already seeing them; however, so are the risks and the threats, which is why we are here for this debate.

I thank my colleague Aaron Lukas, as well as Axiom, the author of the book “Driven to Extinction: The Terminal Logic of Superintelligence”, and Joseph Miller and Jonathan Bostock from PauseAI for their help in preparing for this debate. I encourage all MPs to read the briefing they have been sent by PauseAI. AI is a very broad subject, but this debate is focused on AI safety—the possibility that AI systems could directly harm or kill people, whether through autonomous weapons, cyber-attacks, biological threats or escaping human control—and what the Government can do to protect us all. I will share examples of the benefits and opportunities, and move on to the real harms, threats and risks—or, as I call them, the good, the bad and the potential end of the world.

On the good, AI systems in the NHS can analyse scans and test results in seconds, helping clinicians to spot serious conditions earlier and with greater accuracy. They are already being used to ease administrative loads, to improve how hospitals plan resources, to help to shorten waiting lists and to give doctors and nurses the time to focus on care rather than paperwork. The better use of AI can improve how Government services function. It can speed up the processing of visas, benefits, tax reviews and casework. It offers more accurate tools for detecting fraud and protecting public money. By modelling transport, housing and energy demand at a national scale, it can help Departments to make decisions based on evidence that they simply could not gather on their own. AI can also make everyday work across the public sector more efficient by taking on routine work and allowing civil servants to focus on the judgment, problem solving and human decisions that no system can replace.

AI has already delivered breakthroughs in science and technology that were far beyond our reach only a few years ago. Problems once thought unsolvable are now being cracked in weeks or even days. One of the clearest examples is the work on protein folding, for which the 2024 Nobel prize for chemistry was awarded—not to chemists, but to AI experts John Jumper and Demis Hassabis at Google DeepMind. For decades scientists struggled to map the shapes of key proteins in the human body; the AI system AlphaFold has now solved thousands of them. A protein structure is often the key to developing new treatments for cancers, genetic disorders and antibiotic-resistant infections. What once took years of painstaking laboratory work can now be done in hours.

We are beginning to see entirely new medicines designed by AI, with several AI-designed drug candidates already reaching clinical trials for conditions such as fibrosis and certain cancers. I could go on to list many other benefits, but in the interests of time I will move on to the bad.

Alongside the many benefits, we have already seen how AI technology can cause real harm when it is deployed without care or regulation. In some cases, the damage has come from simple oversight; in others, from deliberate misuse. Either way, the consequences are no longer theoretical; they are affecting people’s lives today. In November 2025, Anthropic revealed the first documented large-scale cyber-attack driven almost entirely by AI, with minimal human involvement. A Chinese state-sponsored group exploited Anthropic’s Claude AI to conduct cyber-espionage on about 30 global targets, including major tech firms, financial institutions and Government agencies, with the AI handling 80% to 90% of the intrusion autonomously. Anthropic has warned that barriers to launching sophisticated attacks have fallen dramatically, meaning that even less experienced groups can carry out attacks of this kind.

Mental health professionals are now treating "AI psychosis", a phenomenon in which individuals develop or experience worsening psychotic symptoms in connection with AI chatbot use. Documented cases include delusions, such as the conviction that AI has the answers to the universe, as well as paranoid schizophrenia. OpenAI disclosed that approximately 0.07% of ChatGPT users exhibit signs of mental health emergencies each week. With 800 million weekly users, that amounts to roughly 560,000 people per week being affected.
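For reference, a minimal sketch in Python checking the scale of the figure quoted above, assuming only the 0.07% rate and the 800 million weekly users stated in the speech:

    # Quick check of the figure quoted in the speech: 0.07% of 800 million weekly users.
    weekly_users = 800_000_000   # weekly ChatGPT users, as stated in the speech
    emergency_rate = 0.0007      # 0.07% expressed as a fraction
    affected_per_week = weekly_users * emergency_rate
    print(f"{affected_per_week:,.0f} people per week")  # prints: 560,000 people per week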

Dr Danny Chambers (Winchester) (LD)

On that point, I was alarmed to hear that one in three adults in the UK has relied on AI chatbots for mental health advice and sometimes treatment. That is partly due to long waiting lists and people looking for alternatives, but it is also due to a lack of regulation. These chatbots can give potentially dangerous advice, in some cases telling people with eating disorders how to lose even more weight. Does the hon. Member agree that this needs to be controlled by better regulation?

Iqbal Mohamed

I completely agree. We have to consider the functionality available in these tools and the way they are used—wherever regulations exist for that service in our society, the same regulations should be applied to automated tools providing that service. Clearly, controlling an automated system will be more difficult than training healthcare professionals and auditing their effectiveness.