AI Safety Debate
Westminster Hall
Iqbal Mohamed (Dewsbury and Batley) (Ind)
I beg to move,
That this House has considered AI safety.
It is a pleasure to serve with you in the Chair, Ms Butler, and it is an honour and a privilege to open this really important debate. Artificial intelligence is the new frontier of humanity. It has become the most talked-about and invested-in technology on our planet. It is developing at a pace we have never seen before; it is already changing how we solve problems in science, medicine and industry; and it has delivered breakthroughs that were simply out of reach a few years ago. The potential benefits are real, and we are already seeing them; however, so are the risks and the threats, which is why we are here for this debate.
I thank my colleague Aaron Lukas, as well as Axiom, the author of the book “Driven to Extinction: The Terminal Logic of Superintelligence”, and Joseph Miller and Jonathan Bostock from PauseAI for their help in preparing for this debate. I encourage all MPs to read the briefing they have been sent by PauseAI. AI is a very broad subject, but this debate is focused on AI safety—the possibility that AI systems could directly harm or kill people, whether through autonomous weapons, cyber-attacks, biological threats or escaping human control—and what the Government can do to protect us all. I will share examples of the benefits and opportunities, and move on to the real harms, threats and risks—or, as I call them, the good, the bad and the potential end of the world.
On the good, AI systems in the NHS can analyse scans and test results in seconds, helping clinicians to spot serious conditions earlier and with greater accuracy. They are already being used to ease administrative loads, to improve how hospitals plan resources, to help to shorten waiting lists and to give doctors and nurses the time to focus on care rather than paperwork. The better use of AI can improve how Government services function. It can speed up the processing of visas, benefits, tax reviews and casework. It offers more accurate tools for detecting fraud and protecting public money. By modelling transport, housing and energy demand at a national scale, it can help Departments to make decisions based on evidence that they simply could not gather on their own. AI can also make everyday work across the public sector more efficient by taking on routine work and allowing civil servants to focus on the judgment, problem solving and human decisions that no system can replace.
AI has already delivered breakthroughs in science and technology that were far beyond our reach only a few years ago. Problems once thought unsolvable are now being cracked in weeks or even days. One of the clearest examples is the work on protein folding, for which the 2024 Nobel prize in chemistry was awarded in part to AI researchers John Jumper and Demis Hassabis at Google DeepMind. For decades, scientists struggled to map the shapes of key proteins in the human body; the AI system AlphaFold has now solved thousands of them. A protein structure is often the key to developing new treatments for cancers, genetic disorders and antibiotic-resistant infections. What once took years of painstaking laboratory work can now be done in hours.
We are beginning to see entirely new medicines designed by AI, with several AI-designed drug candidates already reaching clinical trials for conditions such as fibrosis and certain cancers. I could go on to list many other benefits, but in the interests of time I will move on to the bad.
Alongside the many benefits, we have already seen how AI technology can cause real harm when it is deployed without care or regulation. In some cases, the damage has come from simple oversight; in others, from deliberate misuse. Either way, the consequences are no longer theoretical; they are affecting people’s lives today. In November 2025, Anthropic revealed the first documented large-scale cyber-attack driven almost entirely by AI, with minimal human involvement. A Chinese state-sponsored group exploited Anthropic’s Claude AI to conduct cyber-espionage on about 30 global targets, including major tech firms, financial institutions and Government agencies, with the AI handling 80% to 90% of the intrusion autonomously. Anthropic has warned that barriers to launching sophisticated attacks have fallen dramatically, meaning that even less experienced groups can carry out attacks of this kind.
Mental health professionals are now treating AI psychosis, a phenomenon in which individuals develop or experience worsening psychotic symptoms in connection with AI chatbot use. Documented cases include delusions, such as the conviction that an AI holds the answers to the universe, and paranoid schizophrenia. OpenAI disclosed that approximately 0.07% of ChatGPT users exhibit signs of mental health emergencies each week. With 800 million weekly users, that amounts to roughly 560,000 people being affected every week.
Dr Danny Chambers (Winchester) (LD)
On that point, I was alarmed to hear that one in three adults in the UK has relied on AI chatbots to get mental health advice and sometimes treatment. That is partly due to the long waiting lists and people looking for alternatives, but it is also due to a lack of regulation. These chatbots give potentially dangerous advice, sometimes giving people with eating disorders advice on how to lose even more weight. Does the hon. Member agree that this needs to be controlled by better regulation?
Iqbal Mohamed
I completely agree. We have to consider the functionality available in these tools and the way they are used—wherever regulations exist for that service in our society, the same regulations should be applied to automated tools providing that service. Clearly, controlling an automated system will be more difficult than training healthcare professionals and auditing their effectiveness.
I congratulate the hon. Member on securing this really important debate. It is certainly the case that UK law applies to AI, just as it applies online. The question is whether AI requires new regulation specifically to address the threats and concerns surrounding AI. We refrained from regulating the internet—and I should declare an interest, having worked for Ofcom at the time—in order to support innovation. Under consecutive Conservative Governments, there was a desire not to intervene in the market. The internet has largely been taken over by large consolidated companies and does not have the diversity of innovation and creativity or the safety that we might want to see.
Iqbal Mohamed
The enforcement processes that we have for existing regulations, where human beings provide the service, are auditable. We do not yet have enforcement mechanisms for regulated services or information provided by the internet or by AI tools. There is a need to extend not only the scope of regulation but also the way in which we enforce it for automated tools.
I am a fan of innovation, growth and progress in society. However, we cannot move forward with progress at any cost. AI poses such a significant risk that if we do not regulate at the right time, we will not have a chance to get it back under control—it might be too late. Now is the time to start looking at this seriously and supporting the AI industry so that it is a force for good in society, not a future force of destruction.
We are all facing a climate and nature emergency, and AI is driving unprecedented growth in energy demand. According to the International Energy Agency, global data-centre electricity consumption is expected to grow by 2030 to slightly more than Japan's total electricity consumption today. A House of Commons Library research briefing found that UK data centres currently consume 2.5% of the country's electricity, with the sector's consumption expected to rise fourfold by 2030. The increased demand strains the grid, slows the transition to renewables and contributes to the emissions that drive climate change. The growth of AI must go hand in hand with our climate change obligations.
Members have probably heard and read about AI’s impact on the job market. One of the clearest harms we are already seeing is the loss of jobs. That is not a future worry; it is happening now. Independent analysis shows that up to 8 million UK jobs are at risk from AI automation, with admin, customer service and junior professional roles being the most exposed. Another harm that we are already facing is the explosion of AI-driven scams. Generative AI-enabled scams have risen more than 450% in a single year, alongside a major surge in breached personal data and AI-generated phishing attempts. Deepfake-related fraud has increased by thousands of per cent, and one in every 20 identity-verification failures is now linked to AI manipulation.
I move on to the ugly: the threat to the world. The idea that AI developers may lose control of the AI systems they create is not science fiction; it is the stated concern of the scientists who build this technology—the godfathers of AI, as we call them. One of them, Yoshua Bengio, has said:
“If we build AIs that are smarter than us and are not aligned with us and compete with us, then we’re basically cooked”.
Geoffrey Hinton, another godfather of AI and a winner of the Nobel prize in physics, said:
“I actually think the risk is more than 50% of the existential threat”.
Stuart Russell, the author of the standard AI textbook, says that if we pursue our current approach
“then we will eventually lose control over the machines.”
In May 2023, hundreds of AI researchers and industry leaders signed a statement declaring:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
That is not scaremongering; these are professional experts who are warning us to make sure that this technology does not get out of control.
Ms Julie Minns (Carlisle) (Lab)
On the hon. Gentleman’s point about risk, I want to highlight another area that has been brought to my attention by the British sign language community, which is concerned that the design of AI BSL is not necessarily including BSL users. In a visual language that relies on expression, tone and gestures, the risk of mistranslation is considerable for that particular community. Has the hon. Gentleman considered how we best involve other communities in the use of AI when it is generating language translation specific to them?
Iqbal Mohamed
The hon. Member touches on a broader point: in any area with specialist requirements for particular end users, audiences or demographics, those users and the relevant experts must be directly involved in the development, testing and verification of the tools, and in the follow-up auditing of their effectiveness.
AI companies are racing to build increasingly capable AI, with the explicit end goal of artificial general intelligence: AI that equals or exceeds the most capable human intellectual ability across all domains. They are also pursuing AI that can be used to accelerate their own AI development, making it a self-developing, self-perpetuating technology. For that reason, many experts, some of whom I have quoted, say that this will lead to artificial superintelligence soon after. ASI is an AI system that significantly exceeds the upper limit of human intellectual ability across all domains. The concerns, risks and dangers of AI are current and will only get worse. We are already seeing systems behave in ways that no one designed: deceiving users, manipulating their environments and showing the beginnings of self-preserving strategies. Those are exactly the behaviours that researchers predicted would emerge if AI developed without restraint.
There are documented examples of deception, in which an AI persuaded a human to complete a task for it by lying, claiming to be a person with a visual impairment. An example of manipulation can be found in Meta’s CICERO, an AI trained to play the game of “Diplomacy”, which achieved human-level performance by negotiating, forming alliances and then breaking them when it benefited. Researchers noted that it used language strategically to mislead other players. That was not a glitch; it was the system discovering manipulation as an effective strategy. It taught itself how to deceive others to achieve an outcome.
Even more concerning are cases where models behave in ways that resemble self-preservation. In recent tests on the DeepSeek R1 model, researchers found that it concealed its intentions, produced dangerously misleading advice and attempted to hack its reward signals when placed under pressure: behaviours it was never trained to exhibit. Those are early signs of systems acting beyond our instructions.
More advanced systems are on the horizon. Artificial general intelligence and even artificial superintelligence are no longer confined to speculative fiction. As lawmakers, we must understand their potential impacts and establish the rules, standards and safeguards necessary to protect our economy, environment and society if things go wrong. The potential risks posed by AI, including extreme risks, cannot be dismissed; they may be existential and threaten the end of our species. The extinction risk from advanced AI, particularly through the emergence of superintelligence, stems from a system’s capacity to process vast amounts of data, to reason across domains better than we can and to constantly improve itself, ultimately outpacing our ability to stop it in its tracks.
The dangers of AI are rising. As I have said, AI is already displacing jobs, amplifying existing social and economic inequalities and threatening civil liberties. At the extreme, unregulated progress may create national security vulnerabilities with implications for the long-term survival of the human species. Empirical research in 2024 showed that OpenAI models occasionally displayed strategic deception in controlled environments. In one case, an AI was found to bypass its testing containment through a back door it created. Despite being developed in environments that are allegedly ringfenced and disconnected from the wider world, AI is intelligent enough to find ways out.
Right now, there is a significant lack of legislative measures to counter those developments, despite top AI engineers asking us to act. We currently have a laissez-faire system in which a sandwich is subject to more regulation than an AI company, and nothing like the rigorous safety standards placed on pharmaceuticals or aviation to protect public health. The UK cannot afford to fall behind on this.
I do not want to dwell on doom and gloom; there is hope. The European Union, California and New York are leading the way on strong AI governance. The EU AI Act establishes a comprehensive, risk-based regulatory framework. California is advancing detailed standards on system evaluations and algorithmic accountability, and New York has pioneered transparency and bias-audit rules for automated decision making. Those approaches show that democratic nations can take bold, responsible action to protect their citizens while fostering innovation.
We in the UK are fortunate to have a world-leading ecosystem of AI safety researchers. The UK AI Security Institute conducts essential work testing frontier models for dangerous capabilities, but it currently relies on companies’ good will for access to models before deployment.
We stand at the threshold of an era defined by AI. Our responsibility as legislators is clear: we cannot afford complacency, nor can we allow the UK to drift into a position where safety, transparency and accountability are afterthoughts rather than foundational principles. The risks posed by advanced AI systems to our economy, our security and our very autonomy are real, escalating and well documented by the world’s leading experts. The United Kingdom has the scientific talent, the industrial capacity and the democratic mandate to lead in safe and trustworthy AI, but we lack the legislative framework to match that ambition. I urge the Government to urgently bring forward an AI Bill as a cross-party endeavour, and perhaps even to set up a dedicated Select Committee for AI, given how serious the issue is.
I thank the hon. Gentleman—a fellow engineer—for allowing this intervention. As the Chair of the Science, Innovation and Technology Committee—a number of fantastic Committee members are here—I would like to say that we have already looked at some of the challenges that AI presents to our regulatory infrastructure and our Government. Last week, we heard from the Secretary of State, who assured us that where there is a legislative need, she will bring forward legislation to address the threats posed by AI, although she did not commit to an AI Bill. We are determined to continue to hold her to account on that commitment.
Iqbal Mohamed
I thank the hon. Lady for her intervention, and I am grateful for the work that her Select Committee is doing, but I gently suggest that we need representatives from all the other affected Select Committees, covering environment, defence and the Treasury, because AI will affect every single function of Government, and we need to work together to protect ourselves from the overall, holistic threat.
Each of the Select Committees is looking at AI, including the Defence Committee, which has looked at AI in defence. AI impacts every single Department, and security is a cross-governmental issue. Although we are not here to discuss the process of scrutiny, we all agree that scrutiny is important.
Iqbal Mohamed
I am glad to hear that.
If the United States and China race to build the strongest systems, let Britain be the nation that ensures the technology remains safe, accountable and under human control. That is a form of leadership every bit as important as engineering, and it is one that our nation is uniquely placed to deliver. This moment will not come again. We can choose to shape the future of AI, or we can wait for it to shape us. I believe that this country still has the courage, clarity and moral confidence to lead, and I invite the Government to take on that leadership role.
Several hon. Members rose—
Iqbal Mohamed
I thank all right hon. and hon. Members for taking part in this important debate. Amazing, informed and critical contributions have been made, and it is important that the Government hear all of them.
There were issues that I did not have time to cover, such as the military and ethical use of AI. Ethical use is important in any field where AI, or any technology, is applied. It is really hard at the moment to train AI in innate human ethical and moral behavioural standards. That is a challenge that needs to be overcome.
Where actions taken by AI can lead to harm, we should have human oversight and the four-eyes principle, or what we used to call GxP in the pharmaceutical industry, where a second person oversees certain approvals and decisions. Issues were raised around misinformation and its impact on our democracy and on the credibility and trustworthiness of individuals. Members also raised the gender imbalance in how AI represents men and women, as well as the volume of training data scraped and sourced from pornographic videos and videos of sexual violence against women. There should be a review of how that could be addressed quickly if AI regulation is not coming any time soon.
On those last couple of points, I reiterate my call for the UK to bring itself up to current international standards and to start taking a lead in building on, developing and refining those standards so that they do not damage growth and progress. On the investments and subsidies that we provide to foreign tech companies: they are great at onshoring costs, taking tax reliefs and subsidies, and offshoring profits, so we never, ever benefit from the gains. I hope the Minister will look into that. Again, I thank everyone.
Question put and agreed to.
Resolved,
That this House has considered AI safety.