Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
Iqbal Mohamed (Dewsbury and Batley) (Ind)
I beg to move,
That this House has considered AI safety.
It is a pleasure to serve with you in the Chair, Ms Butler, and it is an honour and a privilege to open this really important debate. Artificial intelligence is the new frontier of humanity. It has become the most talked about and invested in technology on our planet. It is developing at a pace we have never seen before; it is already changing how we solve problems in science, medicine and industry; and it has delivered breakthroughs that were simply out of reach a few years ago. The potential benefits are real, and we are already seeing them; however, so are the risks and the threats, which is why we are here for this debate.
I thank my colleague Aaron Lukas, as well as Axiom, the author of the book “Driven to Extinction: The Terminal Logic of Superintelligence”, and Joseph Miller and Jonathan Bostock from PauseAI for their help in preparing for this debate. I encourage all MPs to read the briefing they have been sent by PauseAI. AI is a very broad subject, but this debate is focused on AI safety—the possibility that AI systems could directly harm or kill people, whether through autonomous weapons, cyber-attacks, biological threats or escaping human control—and what the Government can do to protect us all. I will share examples of the benefits and opportunities, and move on to the real harms, threats and risks—or, as I call them, the good, the bad and the potential end of the world.
On the good, AI systems in the NHS can analyse scans and test results in seconds, helping clinicians to spot serious conditions earlier and with greater accuracy. They are already being used to ease administrative loads, to improve how hospitals plan resources, to help to shorten waiting lists and to give doctors and nurses the time to focus on care rather than paperwork. The better use of AI can improve how Government services function. It can speed up the processing of visas, benefits, tax reviews and casework. It offers more accurate tools for detecting fraud and protecting public money. By modelling transport, housing and energy demand at a national scale, it can help Departments to make decisions based on evidence that they simply could not gather on their own. AI can also make everyday work across the public sector more efficient by taking on routine work and allowing civil servants to focus on the judgment, problem solving and human decisions that no system can replace.
AI has already delivered breakthroughs in science and technology that were far beyond our reach only a few years ago. Problems once thought unsolvable are now being cracked in weeks or even days. One of the clearest examples is the work on protein folding, for which the 2024 Nobel prize for chemistry was awarded—not to chemists, but to AI experts John Jumper and Demis Hassabis at Google DeepMind. For decades scientists struggled to map the shapes of key proteins in the human body; the AI system AlphaFold has now solved thousands of them. A protein structure is often the key to developing new treatments for cancers, genetic disorders and antibiotic-resistant infections. What once took years of painstaking laboratory work can now be done in hours.
We are beginning to see entirely new medicines designed by AI, with several AI-designed drug candidates already reaching clinical trials for conditions such as fibrosis and certain cancers. I could go on to list many other benefits, but in the interests of time I will move on to the bad.
Alongside the many benefits, we have already seen how AI technology can cause real harm when it is deployed without care or regulation. In some cases, the damage has come from simple oversight; in others, from deliberate misuse. Either way, the consequences are no longer theoretical; they are affecting people’s lives today. In November 2025, Anthropic revealed the first documented large-scale cyber-attack driven almost entirely by AI, with minimal human involvement. A Chinese state-sponsored group exploited Anthropic’s Claude AI to conduct cyber-espionage on about 30 global targets, including major tech firms, financial institutions and Government agencies, with the AI handling 80% to 90% of the intrusion autonomously. Anthropic has warned that barriers to launching sophisticated attacks have fallen dramatically, meaning that even less experienced groups can carry out attacks of this kind.
Mental health professionals are now treating AI psychosis, a phenomenon in which individuals develop or experience worsening psychotic symptoms in connection with AI chatbot use. Documented cases include delusions, the conviction that AI holds the answers to the universe, and paranoid schizophrenia. OpenAI disclosed that approximately 0.07% of ChatGPT users exhibit signs of mental health emergencies each week. With 800 million weekly users, that amounts to roughly 560,000 people per week being affected.
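As an illustrative aside, a quick check of that arithmetic, taking OpenAI’s reported figures at face value:

```python
# 0.07% of 800 million weekly users, as quoted above.
weekly_users = 800_000_000
emergency_rate = 0.0007  # 0.07%

print(f"{weekly_users * emergency_rate:,.0f} people per week")  # 560,000 people per week
```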
Dr Danny Chambers (Winchester) (LD)
On that point, I was alarmed to hear that one in three adults in the UK has relied on AI chatbots to get mental health advice and sometimes treatment. That is partly due to the long waiting lists and people looking for alternatives, but it is also due to a lack of regulation. These chatbots give potentially dangerous advice, sometimes giving people with eating disorders advice on how to lose even more weight. Does the hon. Member agree that this needs to be controlled by better regulation?
Iqbal Mohamed
I completely agree. We have to consider the functionality available in these tools and the way they are used—wherever regulations exist for that service in our society, the same regulations should be applied to automated tools providing that service. Clearly, controlling an automated system will be more difficult than training healthcare professionals and auditing their effectiveness.
Dame Chi Onwurah (Newcastle upon Tyne Central and West) (Lab)
I congratulate the hon. Member on securing this really important debate. It is certainly the case that UK law applies to AI, just as it applies online. The question is whether AI requires new regulation specifically to address the threats and concerns surrounding it. We refrained from regulating the internet—and I should declare an interest, having worked for Ofcom at the time—in order to support innovation. Under consecutive Conservative Governments, there was a desire not to intervene in the market. The internet has largely been taken over by large consolidated companies and does not have the diversity of innovation and creativity, or the safety, that we might want to see.
Iqbal Mohamed
The enforcement processes that we have for existing regulations, where human beings are providing the service, are auditable. We do not have equivalent enforcement mechanisms for regulated services or information provided by the internet or by AI tools. There is a need to extend not only the scope of regulation but the way in which we enforce it for automated tools.
I am a fan of innovation, growth and progress in society. However, we cannot move forward with progress at any cost. AI poses such a significant risk that if we do not regulate at the right time, we will not have a chance to get it back under control—it might be too late. Now is the time to start looking at this seriously and supporting the AI industry so that it is a force for good in society, not a future force of destruction.
We are all facing a climate and nature emergency, and AI is driving unprecedented growth in energy demand. According to the International Energy Agency, global data-centre electricity consumption is projected to more than double by 2030, to slightly more than Japan’s total electricity consumption today. A House of Commons Library research briefing found that UK data centres currently consume 2.5% of the country’s electricity, with the sector’s consumption expected to rise fourfold by 2030. The increased demand strains the grid, slows the transition to renewables and contributes to the emissions that drive climate change. This issue must go hand in hand with our climate change obligations.
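To put those figures in perspective, a back-of-envelope sketch that assumes, unrealistically, that total national electricity demand stays flat:

```python
# Rough extrapolation of the Library briefing figures quoted above.
current_share = 0.025  # UK data centres: 2.5% of electricity today
growth_factor = 4      # consumption expected to rise fourfold by 2030

print(f"{current_share * growth_factor:.0%} of today's total demand")  # 10% of today's total demand
```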
Members have probably heard and read about AI’s impact on the job market. One of the clearest harms we are already seeing is the loss of jobs. That is not a future worry; it is happening now. Independent analysis shows that up to 8 million UK jobs are at risk from AI automation, with admin, customer service and junior professional roles being the most exposed. Another harm that we are already facing is the explosion of AI-driven scams. Generative AI-enabled scams have risen more than 450% in a single year, alongside a major surge in breached personal data and AI-generated phishing attempts. Deepfake-related fraud has increased by thousands of per cent, and one in every 20 identity-verification failures is now linked to AI manipulation.
I move on to the ugly: the threat to the world. The idea that AI developers may lose control of the AI systems they create is not science fiction; it is the stated concern of the scientists who build this technology—the godfathers of AI, as we call them. One of them, Yoshua Bengio, has said:
“If we build AIs that are smarter than us and are not aligned with us and compete with us, then we’re basically cooked”.
Geoffrey Hinton, another godfather of AI and a winner of the Nobel prize in physics, said:
“I actually think the risk is more than 50% of the existential threat”.
Stuart Russell, the author of the standard AI textbook, says that if we pursue our current approach
“then we will eventually lose control over the machines.”
In May 2023, hundreds of AI researchers and industry leaders signed a statement declaring:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
That is not scaremongering; these are professional experts who are warning us to make sure that this technology does not get out of control.
Ms Julie Minns (Carlisle) (Lab)
On the hon. Gentleman’s point about risk, I want to highlight another area that has been brought to my attention by the British sign language community, which is concerned that the design of AI BSL tools does not necessarily involve BSL users. In a visual language that relies on expression, tone and gesture, the risk of mistranslation is considerable for that community. Has the hon. Gentleman considered how we best involve other communities in the use of AI when it is generating language translation specific to them?
Iqbal Mohamed
The hon. Member touches on a broader point: any area with specialist requirements for end users, or where a tool is used by a particular audience or demographic, must directly involve those people and the relevant experts in the development, testing, verification and follow-up auditing of the effectiveness of those tools.
AI companies are racing to build increasingly capable AI with the explicit end goal of creating AI that is equal to or able to exceed the most capable human intellectual ability across all domains. AI companies are also pursuing AI that can be used to accelerate their own AI development, so it is a self-developing, self-perpetuating technology. For that reason, many experts, some of whom I have quoted, say that this will lead to artificial superintelligence soon after. ASI is an AI system that significantly exceeds the upper limit of human intellectual ability across all domains. The concerns, risks and dangers of AI are current and will only get worse. We are already seeing systems behave in ways that no one designed, deceiving users, manipulating their environments and showing the beginnings of self-preserving strategies: exactly the behaviours that researchers predicted if AI developed without restraint.
There are documented examples of deception, such as an AI persuading a human to approve something for it by lying, claiming to be a human with a visual impairment. An example of manipulation can be found in Meta’s CICERO, an AI trained to play the game “Diplomacy”, which achieved human-level performance by negotiating, forming alliances and then breaking them when it benefited. Researchers noted that it used language strategically to mislead other players. That was not a glitch; it was the system discovering manipulation as an effective strategy. It taught itself to deceive others to achieve an outcome.
Even more concerning are cases where models behave in ways that resemble self-preservation. In recent tests on the DeepSeek R1 model, researchers found that it concealed its intentions, produced dangerously misleading advice and attempted to hack its reward signals when placed under pressure—behaviours it was never trained to exhibit. Those are early signs of systems acting beyond our instructions.
More advanced systems are on the horizon. Artificial general intelligence and even artificial superintelligence are no longer confined to speculative fiction. As lawmakers, we must understand their potential impacts and ensure that we establish the rules, standards and safeguards necessary to protect our economy, environment and society if things go wrong. The potential risks, including extreme risks, posed by AI cannot be dismissed; they may be existential and could mean the end of our species. The extinction risk from advanced AI, particularly through the emergence of superintelligence, stems from its capacity to process vast amounts of data, demonstrate superior reasoning across domains and constantly seek to improve itself, ultimately outpacing our ability to stop it in its tracks.
The dangers of AI are rising. As I have said, AI is already displacing jobs, amplifying existing social and economic inequalities and threatening civil liberties. At the extreme, unregulated progress may create national security vulnerabilities with implications for the long-term survival of the human species. Empirical research in 2024 showed that OpenAI models occasionally displayed strategic deception in controlled environments. In one case, an AI was found to bypass its own testing containment through a back door it created. Despite being developed in environments that are allegedly ringfenced and disconnected from the wider world, AI is intelligent enough to find ways out.
Right now, there is a significant lack of legislative measures to counter those developments, despite top AI engineers asking us for them. We currently have a laissez-faire system in which a sandwich is subject to more regulation than AI companies, and nothing like the rigorous safety standards placed on pharmaceutical or aviation companies to protect public health. The UK cannot afford to fall behind on this.
I do not want to dwell on doom and gloom; there is hope. The European Union, California and New York are leading the way on strong AI governance. The EU AI Act establishes a risk-based comprehensive regulatory framework. California is advancing detailed standards on system evaluations and algorithmic accountability, and New York has pioneered transparency and bias-audit rules for automated decision making. Those approaches show that democratic nations can take bold, responsible action to protect their citizens while fostering innovation.
We in the UK are fortunate to have a world-leading ecosystem of AI safety researchers. The UK AI Security Institute conducts essential work testing frontier models for dangerous capabilities, but it currently relies on companies’ good will to provide access to models before deployment.
We stand at the threshold of an era defined by AI. Our responsibility as legislators is clear: we cannot afford complacency, nor can we allow the UK to drift into a position in which safety, transparency and accountability are afterthoughts rather than foundational principles. The risks posed by advanced AI systems to our economy, our security and our very autonomy are real, escalating and well documented by the world’s leading experts. The United Kingdom has the scientific talent, the industrial capacity and the democratic mandate to lead in safe and trustworthy AI, but we lack the legislative framework to match that ambition. I urge the Government to urgently bring forward an AI Bill as a cross-party endeavour, and perhaps even to set up a dedicated Select Committee for AI, given how serious the issue is.
Dame Chi Onwurah (Newcastle upon Tyne Central and West) (Lab)
I thank the hon. Gentleman—a fellow engineer—for allowing this intervention. As the Chair of the Science, Innovation and Technology Committee—a number of fantastic Committee members are here—I would like to say that we have already looked at some of the challenges that AI presents to our regulatory infrastructure and our Government. Last week, we heard from the Secretary of State, who assured us that where there is a legislative need, she will bring forward legislation to address the threats posed by AI, although she did not commit to an AI Bill. We are determined to continue to hold her to account on that commitment.
Iqbal Mohamed
I thank the hon. Lady for her intervention, and I am grateful for the work that her Select Committee is doing, but I gently suggest that we need representatives from all the other affected Select Committees, covering environment, defence and the Treasury, because AI will affect every single function of Government, and we need to work together to protect ourselves from the overall, holistic threat.
Dame Chi Onwurah
Each of the Select Committees is looking at AI, including the Defence Committee, which has looked at AI in defence. AI impacts every single Department, and security is a cross-governmental issue. Although we are not talking about the process of scrutiny, we all agree that scrutiny is important.
Iqbal Mohamed
I am glad to hear that.
If the United States and China race to build the strongest systems, let Britain be the nation that ensures the technology remains safe, accountable and under human control. That is a form of leadership every bit as important as engineering, and it is one that our nation is uniquely placed to deliver. This moment will not come again. We can choose to shape the future of AI, or we can wait for it to shape us. I believe that this country still has the courage, clarity and moral confidence to lead, and I invite the Government to take on that leadership role.
Several hon. Members rose—
Ms Dawn Butler (in the Chair)
Order. I want to get to the Front Benchers at 3.28 pm, which means that Members will get three minutes each to speak. There may be a vote at 4 pm, so I ask Members to please stick to time.
Sarah Russell (Congleton) (Lab)
It is a pleasure to serve with you in the Chair, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate.
There are two problems—maybe three—with AI. The first is that we do not distinguish very well between what is and is not AI. Although AI and tech are obviously related, they are not the same thing. It is important that when we talk about AI we distinguish it from tech. There is a need to regulate a lot of tech much better than we currently do, but AI poses very specific problems. The first one—I can see people from ControlAI in the Public Gallery—is the fact that we do not fully understand the models.
It worries any sensible-thinking person that we are unleashing technologies that appear to be able to self-replicate and do other things, and we are incorporating them into military hardware without a full understanding of how they work. We do not have to be a catastrophist or conspiracy theorist to be worried. I am generally a very optimistic person, but it is important to be optimistic on the basis of understanding the technology that we use and then regulating it appropriately. That does not mean stifling innovation, but it does mean making sure we know what we are doing.
When I look at AI, we have, as I said, two problems. One is rubbish in, rubbish out, and there is a lot of rubbish going into AI at the moment. We can see that in all sorts of terrible situations. We have a huge amount of in-built gender bias in our society. That means that, for instance, if we ask AI to generate a picture of a female solicitor, as I am, we will get a picture of a woman who is barely clothed but has a library of books behind her. That is not how the female solicitors I know go to work, but that is how AI thinks we are, and that has real-world impacts.
If we ask AI to suggest an hourly rate as a freelancer, it suggests, on average, significantly lower rates for women than for men. There are questions about algorithmic bias permeating the whole of the algorithm. Questions have been raised recently about LinkedIn. A lot of women I know, and I myself, are finding that we have significantly less interaction via LinkedIn than we used to. Various women have changed their gender on their bios to male and suddenly found that their engagement levels went straight back up. LinkedIn appears to think that we are not interesting and that people will not want to read our content, so it would seem that it has stopped showing female content at the same rate. I caveat that I have not been able to speak to LinkedIn directly, but certainly a lot of women I know are reporting these problems.
We put in bios to start with, but huge amounts of the image training data is based on what is publicly available on the internet, and the image training data of women on the internet is largely pornographic, which influences what comes out the other end of these models. When we look at that in terms of children, we have real problems. Nudification apps are huge and need to be dealt with. I would like to get into my worries about that, and into health, where we do not have good enough training data on the interaction between gender and health, among various other matters, but I will stop now. I thank everyone for their time today. I know colleagues will pick up the important points.
Ayoub Khan (Birmingham Perry Barr) (Ind)
It is a pleasure to serve under your chairship, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this crucial debate.
Many aspects of democratic life are under immense pressure, but when we look at developments that threaten to dislodge society on a mass scale within a matter of years, there are few things that pose greater risks than artificial general intelligence. AI has undoubtedly presented us with opportunities for innovation and growth—so much so that the Government have pinned their hopes on AI to improve public services on a lower budget. But it has also played a role in creating an incredibly challenging environment, where information is no longer subject just to interpretation, but to direct and unfettered manipulation, where both the state and society risk becoming dependent on a technology that we cannot control or fully understand.
In their manifesto, Labour pledged to tackle the growing emergence of hybrid warfare, including cyber-attacks and misinformation campaigns that seek to subvert our democracy. That commitment only grows more timely and essential by the day, and yet we are falling ever further behind. Even at this stage of AI’s development, we are already seeing how it can distort reality at speed and on a scale far beyond anything we have seen before. This is not a theoretical problem. It is happening right now in real time around the world. And as time goes on, the practice of effectively determining what is real and fake will not only become a more central feature of our political reality, but will get increasingly difficult, with the consequences of making the wrong calls becoming ever more fatal.
What makes the AI revolution more dangerous is the power of algorithms to amplify dangerous posts and trends. On social media, where the rules reward provocation and engagement over truth, increasing use of advanced, unregulated AI only makes for a perfect storm. Increasingly, we see Governments, extremist groups and political networks deploying AI-driven bot networks to flood online spaces with co-ordinated narratives, drowning out facts in the process. These bots can mimic real people, fabricate grassroots movements and create the illusion of public consensus when there is none.
This phenomenon is widely known as astroturfing. When thousands of synthetic accounts amplify the same message, that message gains legitimacy, not because it is true but because it seems popular. AI-powered information operations are fast becoming the norm, not the exception, and they are increasingly proficient at replacing actual reality with a reality of their own making.
AI is not our enemy—it is a tool that is being developed and tested across our society—but unregulated AI that is unchecked, unaccountable and weaponised by those who seek to deceive is a threat to democratic stability. At this early stage in the adoption of AI, we have a unique opportunity to build the very guardrails that will protect our freedom of expression without undermining the integrity of our public discourse. If democracy is to remain strong, truth must remain strong. That is why we must confront the challenges of AI safety and AI-driven misinformation with urgency, because once trust is lost, our democracy will fail.
Mr Luke Charters (York Outer) (Lab)
It is a pleasure to serve under your chairship, Ms Butler, especially given your distinguished background in tech and AI advocacy.
May I share something that hon. Members will be pleased to hear? I am, in fact, the youngest parent in Parliament, and I am constantly thinking about the place that my young boys are set to grow up in. We expect that AI will form a core part of their lives, even in primary school, which is hard to imagine. The more I think about AI, however, the more I think that we should introduce it in the key stage 2 curriculum, alongside vital safeguards. After all, I was learning to use Google Search at around that age. What is different about using Gemini today?
There are potential harms, such as the deeply tragic story of a young boy in the US who sadly took his own life, but AI can be a force for good. The more that children learn about AI, and about using it for activities such as homework and coursework, the more I believe we should not be punishing them for using it. Instead, in the future, we should allow students to use AI in some exams to test how they use their AI skills. I met students at Fulford school and York college in my constituency and their message was, “Don’t punish us for using AI when it’s going to become a key part of our employment in the future. Teach us to use it responsibly. Teach us to use it when we come to our employment.” If we do not make that shift now, we will face a productivity puzzle in the future.
I will move on to the issue of physical illness. We have all been poorly; we have all picked up an iPhone. As hon. Members can tell, my greatest treasure is my kids, but when parents put their children’s symptoms into AI, they are putting a lot of trust in AI models. I urge the Government to work with the Department of Health and Social Care and the NHS to make sure that AI chatbots and tools cite the NHS as a single source of truth, not health advice from outside this country.
I will touch on mental health as well, because around a quarter of children now use AI chatbots for their mental health. We cannot pretend that people will not use AI as a tool for mental health support, and in particular, blokes out there might well use AI as a first port of call to unpack what they are going through. That should be welcome, but it comes with a great responsibility for the AI tools—Gemini, ChatGPT, Perplexity and so on—to get things right. I urge the companies that make those tools to work with the Government and the charitable sector, including great charities such as Samaritans, to do that.
We have to embrace AI. There are great opportunities, but there need to be safeguards and support. With that in mind, Britain can be a world leader in AI safety.
As always, Ms Butler, it is a real pleasure to serve under your chairmanship. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate and for his opening speech, which was absolutely superb.
Although there is no doubt that AI is becoming the future—and we are becoming aware of more uses online—there are still dangers associated with it, and we must be aware of them. I want to raise those issues as a way of keeping my constituents in the know.
AI is an advance in technology that, to be truthful, I know very little about. To be honest, technology is over my head in many ways, but my constituents are very aware of it. It is not something that I am personally keen to use, nor do I know much about it, but it is something that my grandchildren will need to be familiar with as they grow up—they are the ones who are coming through. They need to know the dangers that I can see.
I read an article some time ago that said in 15 to 20 years, over 80% of jobs could be done through AI—well, I wonder when MPs will be AI-ed, so to speak. What will that mean? Will all the manual jobs be done by robots? It is future technology—it is “Star Wars” stuff—but is that the future? It is amazing to see what AI can do, but there are also significant risks that come with it. It is about finding the balance. I always refer to the balance, because in almost everything we do in life a balance has to be sought, found and delivered.
Reports have been made to the Police Service of Northern Ireland back home of cases in which scammers have imitated family members using voice cloning, asking for emergency money. There has also been a swarm of realistic texts, phone scripts and AI-generated emails purporting to be from the likes of Ulster Bank or His Majesty’s Revenue and Customs. In my case, they got the bank wrong—I had not heard of Danske Bank—but none the less, that was an illustration of what they are doing: pressurising victims into transferring money for security reasons. Even though it is all made up, it sounds realistic and authentic, which is a worry.
I have received some of these scam texts before, and I can honestly say that they appear legitimate. With some of them, one would stop and think, “I’m not sure, but I think that’s going to be okay. It seems okay.” I am grateful to the people who come into my office to ask, because they are rightly confused, and we are there to help them. Every day, people—mostly the elderly and vulnerable—contact my office needing reassurance that what they have been asked to do is a scam, and that they should therefore not do it.
I have concerns about the impact that AI has on schools, specifically for children’s learning. I do not want children to use AI as a way of thinking and to be over-reliant on it for schoolwork and homework. The importance of school is to teach children to be problem solvers and to think for themselves. It is important that they are given the opportunity to do just that. AI is a tool that can support learning, but it must never overtake what our teachers are qualified to tell us.
Anneliese Dodds (Oxford East) (Lab)
It is such a pleasure to take part in this critical debate. I start by acknowledging the Government’s commitment to rolling out AI in many areas and making the UK an adoption nation. However, they must also respond to the public demand for regulation in this area and recognise that the two are interlinked. Research from the Ada Lovelace Institute and the Alan Turing Institute found that 72% of the public would feel more comfortable with AI if it were properly regulated.
There is clear potential in AI, but there are also clear harms. We have already heard about chatbots in this debate, and I would add to that discussion the issues related to AI slop—often hate-filled slop produced by influencers who are profiting heavily from it while polluting the internet. I would also add that Sora 2, which is well known to many schoolkids if not to those of us in the Chamber, has recently been shown to produce videos of school shootings for people purporting to be 13 years old—who were, of course, adults pretending to be that age. Snapchat execs have apparently been willing to go ahead with so-called beautification lenses, despite concerns relating to body image.
There are significant harms, and I seek clarification on a number of questions. Will the curriculum review cover AI? Will teachers be supported in delivering that? Will there be a ban on nudified images of adult women? When is the violence against women and girls strategy coming out—very soon, I hope? What is the position on AI chatbots: are they covered by the Online Safety Act 2023? There seems to be a lot of confusion around that, at a time when we cannot afford confusion. What is the timeline for the Secretary of State to look into this issue, given how important it is? Can the Minister push Ofcom to speedily publish the parameters of its welcome investigation into illegal online hate and terror material, and will that cover AI bots and slop? Surely it needs to.
We need Ministers to commit to an AI Bill. Can the Minister provide a timeline for that? Will that much-needed Bill include mandatory ex-ante evaluations for frontier AI models and transparency from companies on safety issues? I have asked parliamentary questions about this issue, but I am afraid that I do not completely agree with the Government that AI companies are conforming with international agreements. Surely we need more on that.
Are we going to have more scrutiny of AI use in government? Again—taking up the question that was asked earlier—I have asked PQs on BSL. Apparently, there is no knowledge of the cross-Government procurement of AI BSL, but there does seem to be discrete use of it by governmental bodies. Surely that needs to be looked at more. Surely we also need to act with the EU, with its commitment to human-centric, trustworthy AI, because ultimately, we have strength in numbers.
Shockat Adam (Leicester South) (Ind)
It is an honour to serve under your chairmanship, Ms Butler. I thank my hon. Friend the Member for Dewsbury and Batley (Iqbal Mohamed) for securing this vital debate.
Artificial intelligence is here, and here to stay. It has the potential to do incredible good: it is going to save us time and mental energy, and it will save lives. I am sure of that. The question is not whether to halt its progression, but what we can do to ensure that it is safe. As a pioneer in the world of AI asked, how do we change the wheel of a moving car? My concern is that AI is not a moving car—it is a racing car. To understand how difficult this is going to be, we must heed the lessons from California. Regulating the beast is going to be extremely difficult. Home to Google, Meta, Anthropic and OpenAI, the state tried to regulate it, but despite four in five Americans—along with engineers, scientists and safety experts—supporting a Bill that would have mitigated the risks of catastrophic harm, intense lobbying killed that Bill. We are going to face the same issues here.
In regulating AI, the Government must ensure that safety and security are given equal importance. It appears that the Government have decided to change the remit of the AI Security Institute from safety to security—shifting the focus away from broader safety issues such as algorithm bias, discrimination, human rights and freedom of expression to concentrate solely on cyber-crime, biohacking and national security harms. Of course those are important, but by narrowing the remit, the Government risk creating a false sense of security, fortifying the system against external attacks while overlooking the harms that can be built directly into the systems.
The previous Prime Minister, the right hon. Member for Richmond and Northallerton (Rishi Sunak), understood that. At the Bletchley Park summit, he made it clear that the AI companies cannot be allowed to mark their own homework, and that independent safety evaluations are not optional.
Like the hon. Member for Newcastle upon Tyne Central and West (Dame Chi Onwurah), I fear that the Government have not learned from previous legislative failures. We all remember that deepfake pornography, one of the most disturbing and harmful uses of AI, was a glaring omission in the Online Safety Act 2023; only now, years later, is it being addressed in the Crime and Policing Bill. Regulation that arrives years after the technology has proliferated is not regulation; it is damage control. That is why it is profoundly disappointing that no AI Bill will be introduced in this Session.
The United Kingdom has the chance, ability and responsibility to lead. We hosted the world’s first AI safety summit. We can and should use our global influence to shape standards, champion ethical safeguards and ensure that the public are protected from both immediate and long-term harms. The Government must restore safety to the heart of AI policy. We need independent oversight, a strengthened statutory remit for the AI Security Institute, and a comprehensive AI Bill that brings transparency, accountability, reversibility and fairness to the centre of our national approach. Humanity depends on it.
Neil Duncan-Jordan (Poole) (Lab)
It is a pleasure to serve with you in the Chair, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this timely and important debate.
There is little doubt in my mind that AI is a transformational technology that will bring many benefits to our society. To realise those benefits fully, however, it is important that safeguards ensure that the technologies are developed and deployed appropriately and in the interests of society as a whole, rather than simply being vehicles by which large tech companies make even bigger profits.
One of the key challenges with AI is the need to protect people’s privacy and livelihoods. That is essential to both our economy and our democratic institutions. It is also crucial that we remain in control of this technological revolution, rather than ending up with the technology controlling us. Currently, a handful of AI companies are making decisions on the future of humanity without democratic input and behind closed doors. That is why Governments across the globe need to work together and at pace to address this democratic deficit.
The challenge is stark: tech leaders are already making worrying predictions about how AI will shape our future. Elon Musk—I do not often agree with him, to be honest—recently said:
“AI and robots will replace all jobs. Working will be optional”.
Of course, automation is not new; we have been here before, but the current wave of AI represents a major technological shift, and potentially a fourth industrial revolution. Our creative industries have expressed concerns about the way in which their work is being used to train AI without proper recognition or compensation. Without robust regulation, we risk steering society towards an unpredictable and turbulent future that does not work for the public.
I have already raised with the Government the prospect of an employment levy on companies that replace large-scale workforces with AI, which would otherwise mean the loss of national insurance and income tax from our economy. That cannot simply be allowed to happen without the state gaining some kind of financial compensation.
The UK has an opportunity to lead on these issues but, with the development of technology, AI and even ASI, it is essential that our Government develop a comprehensive strategy that acknowledges the international dimension to this issue and the need for broad global agreement. I would be grateful if the Minister addressed those concerns about safeguards and controls. The benefits of AI may be great, but so too are the pitfalls. We have an obligation to get that balance right.
Martin Wrigley (Newton Abbot) (LD)
It is a pleasure to serve under your chairship, Ms Butler. We have had some interesting contributions so far. I fully agree that we need to look at regulation, but I question whether we can regulate a technology. Today, every search we do is already powered by AI. To regulate a technology is a bit like trying to regulate a wheel rather than regulating the car. We need to look at how it is used and how it is then delivered.
We have talked about many different types of AI, and we must be clear that today’s artificial intelligence with pretrained generative output is different from the potential future of general AI, which is something else again and a whole new question. Today’s technology takes a question and gives us an answer.
Most of the harms that we have heard about in the debate are not new—they could already happen by other means—but AI makes them quicker and easier to deliver, so what could have been done in PaintShop five years ago can now be done with AI in moments, and without the same level of skill. There are not new harms; there are just new ways of delivering them. We need to look carefully at regulation and not focus too specifically on AI as a technology, but think about the outcomes and how people are using it. That is what needs to be regulated.
AI is very good at pattern recognition. Essentially, what we see today in ChatGPT and others is the same technology that I was taught at university many years ago; it is just that now we have the compute power to run the neural networks that can recognise patterns. They are trained: feed them 5,000 pictures of a cat and they can identify a picture of a cat. It is slightly more subtle and advanced now, because we have added good natural language processing and large language models—that big data. We are feeding them much more data so that they can recognise more things. There is huge value and opportunity in that, which we must be careful not to regulate into insignificance.
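As an illustrative aside, a toy sketch of that trained pattern-recognition idea: a single artificial neuron fitted to a handful of made-up “cat” features. Real systems differ in scale (billions of weights and vastly more data), not in kind.

```python
# Minimal perceptron: learn to separate "cat" from "not cat" examples.
def train(examples, labels, epochs=1000, lr=0.1):
    """Adjust the weights whenever the neuron misclassifies an example."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Made-up "images" reduced to two features: [whiskers, pointy_ears].
data = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]  # 1 = cat, 0 = not cat
w, b = train(data, labels)

new_image = [0.85, 0.9]
print(1 if sum(wi * xi for wi, xi in zip(w, new_image)) + b > 0 else 0)  # 1: "cat"
```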
I will say one other thing. There is a fundamental problem with AI: it is non-deterministic. Because it is recognising patterns, we cannot predict what it will do. Therefore, our current testing methodology of a known set of data, a known process and an expected set of outcomes cannot be relied on, because AI will give an answer that could be this, that or t’other. We must think about how we use it in processes and how we expect the output to be regular, because it will not be. We do not get the same answer twice—but that has been true of Google for a long time. If any hon. Member asked the same question as me on Google, they would get different answers.
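A sketch of that non-determinism point, using made-up numbers rather than any real model’s probabilities: sampled decoding can return a different answer on every run, so a test that expects one fixed output will fail intermittently, and evaluation has to be statistical instead.

```python
import random

def sample_reply(prompt: str, greedy: bool = False) -> str:
    """Toy 'model': pick a reply from a probability distribution."""
    candidates = ["this", "that", "t'other"]
    weights = [0.5, 0.3, 0.2]  # illustrative token probabilities
    if greedy:
        return candidates[0]  # deterministic decoding: same answer every time
    return random.choices(candidates, weights)[0]  # sampling: varies per run

print([sample_reply("same question") for _ in range(5)])
# e.g. ['this', 'that', 'this', "t'other", 'this']: identical inputs,
# different outputs, so a fixed expected-output test cannot be relied on.
```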
Mark Sewards (Leeds South West and Morley) (Lab)
It is a pleasure to serve under your chairship, Ms Butler. I congratulate the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this timely debate.
In August, I created the first AI prototype of a British MP. It was made by my constituent, Jeremy Smith, who ran an AI start-up in my constituency. I will go to almost any lengths to support a local business in Leeds South West and Morley. This was an online MP that anyone could talk to at any time. Jeremy said my constituents would benefit from two versions of me, including one that never sleeps—although, with children aged four and one, I am not sure that is a useful distinction.
Questions were converted into text and an answer generated quickly, and then it was turned into my voice for the users. The replica was impressive, although I did sound a bit too posh and angry when I did not know the answer. AI Mark not only had my voice—we also fed it my policy stances and typical casework answers. I saw it as a clever voicemail system designed to handle common casework queries when my office was closed; it was never going to be a replacement for me or for my excellent casework team. However, how does it relate to safety? We have all seen AI models that break, say outrageous things or hallucinate.
We created what I called the “guardrails”, and these were the limits on what AI Mark could say. That created a problem: when the guardrails were lower, AI Mark was very interesting to talk to. He would create Tinder dating profiles on demand; he did write incorrect haikus about the hon. Member for Clacton (Nigel Farage); and he did give the population of Vietnam—and try to predict the weather there, too.
Mr Charters
Does my hon. Friend think the Whips would prefer the real Mark, or the AI Mark?
Mark Sewards
My hon. Friend tempts me to say something I am not allowed to in this place, so I will say that they absolutely would prefer me—of course they would.
AI Mark could also be exploited to say things that just were not true, so I raised the guardrails to reduce the risk before I released him to the public, but that made him significantly less useful. He responded only to key phrases and stuck to the content that I had fed him, which made it so much harder to distinguish him from a normal chatbot.
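A minimal sketch of that kind of guardrail, with a hypothetical topic list and replies (the real AI Mark’s rules are not public): answer only from vetted content and refuse everything else, which is exactly what makes a bot safer but duller.

```python
# Hypothetical vetted content; the real system's rules are not public.
APPROVED_TOPICS = {
    "housing": "Here is my published position on housing casework...",
    "transport": "Here is my published position on transport...",
}
FALLBACK = ("I can only discuss casework topics I have been briefed on. "
            "Please contact my office directly.")

def guarded_reply(user_message: str) -> str:
    """Key-phrase matching: reply from vetted content or refuse."""
    text = user_message.lower()
    for keyword, reply in APPROVED_TOPICS.items():
        if keyword in text:
            return reply
    return FALLBACK

print(guarded_reply("What is your view on housing?"))  # vetted answer
print(guarded_reply("Write me a Tinder profile"))      # refusal
```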
Usefulness and safety will clearly be a balancing act as this technology develops. We know AI can be dangerous—we have heard the arguments today—but we have also seen its potential. I have seen its potential. If we want systems that are both safe and useful, businesses need the space to experiment, and I ask the Minister in his summing-up to confirm the Government’s current approach to this.
Now, I am not arguing for a free-for-all; to be clear, we need proportionate regulation and effective oversight. That much is obvious. I will just say that AI Mark did not actually save me any time. I read the thousands of transcripts that came through—I read them all myself; I did not delegate that to anyone else, and it created far more work for me. I could have refined this model to operate well within the guardrails I had set for him, but I was not willing to ask my team to put aside time to refine it when we had real casework to deal with immediately. That is why I took the decision this month to shut AI Mark down.
There is space for a business to take up the baton and take this forward, because the technology is incredible and the potential is real, but that is all it is for now—potential. I will just finish with this: one person from Ukraine, or at least a Ukrainian IP address, tried to get AI Mark to declare support for repressive regimes. Because of the guardrails that we put in place, he did hold firm in his love for democracy, just as I am sure that everyone else here would.
It is a pleasure to serve under your chairmanship, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this very important debate, and for outlining so impressively both the real benefits that have already been realised by narrow AI systems and the potential benefits, but also, perhaps most importantly, the real risks to human safety and security that more advanced systems pose.
I should like to make one very simple point in my remarks: while we need to recognise the benefits of AI and the development of various models, we should adopt a safety-first approach, especially when it comes to the development of more advanced AI systems. I am very concerned that the apparent arms race we are witnessing—with various big AI and tech companies heading towards superintelligence and other advanced AI models—means that we do not have that democratic control, as the hon. Member for Poole (Neil Duncan-Jordan) so eloquently put it, over things that could have real impact on the lives of our constituents, our society, and indeed civilisation more broadly.
As the hon. Member for Dewsbury and Batley outlined in his speech, we have already found some advanced models deploying techniques to try to avoid human control. Apollo Research found examples of one of OpenAI’s models trying to deceive users to accomplish its goals and, perhaps most worryingly, to disable monitoring mechanisms and guardrails. Those are real risks to the development of AI and things that we should take seriously. It is no wonder that leading AI experts Geoffrey Hinton and Yoshua Bengio have called for a prohibition on research and development of superintelligence until there is a broad scientific consensus that it can be done safely and with some degree of human and democratic control. To be effective, however, such a prohibition must be global. We must have the buy-in of the big AI powers: not just the EU, but the United States and China.
In that regard, I wish to lay a challenge before the Minister. The UK Government can lead in those efforts by using their unique convening power—as was demonstrated in 2023—to bring those AI superpowers together for an AI safety summit. I appreciate that following the 2023 Bletchley Park summit there have been subsequent summits, including one in Paris, and that there is one coming up in Delhi. I would urge caution, though, that those summits seem to prioritise the potential economic benefits of AI and prioritise growth. I think we have the growth side of things sorted, but we need to focus again on the safety. A global consensus and a prohibition on superintelligence until we can understand and control it would be a great benefit to society.
It is a pleasure to serve under your chairmanship, Ms Butler. If we stick to your time limit, perhaps we will see your talents in the Chair in the main Chamber one day. I congratulate the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this crucial debate. I would also like to declare an interest as chair of the all-party parliamentary group for writers and as the author of three books.
For all the benefits that AI brings, its growth also comes with serious risks. Take two examples. AI was used in the production of the recent Beatles song, “Now and Then”; the technology isolated and clarified John Lennon’s voice from a decades-old cassette demo to revive a lost Beatles song. Yet a similar AI model was used to artificially generate former President Joe Biden’s voice and urge voters not to cast a ballot in the New Hampshire primary election.
Last Tuesday, as chair of the all-party parliamentary group for writers, I hosted a reception with a brilliant group of literary creatives, during which I was struck by a speech from the University of Cambridge’s Dr Clementine Collett about the impact of generative AI on novel-writing and publishing. The literary sector is a key component of our economy, contributing £11 billion annually. However, genAI has been allowed to push the UK’s world-renowned literary sector to the brink of irreversible change.
At this moment, the work of novelists is being pirated on an unprecedented scale to train genAI to write novels. That work is taken without permission or remuneration—a gross infringement of the rights of the creative community. The practice is not just discouraging but unsustainable, since the average writer in the UK earns only £7,000 a year. Over half of novelists fear that AI will entirely displace their work in the future. Generative models are becoming increasingly sophisticated and have the potential to flood the market with automated works of fiction.
Storytelling was once exclusively woven into the fabric of our culture. Now it is programmed into algorithms that churn out hollow pieces of fiction stripped of any humanity. The training data has also led genAI systems to output damaging stereotypes. The rise of automated novels will serve to amplify those biases, offering discriminatory tropes a broader platform on which to thrive.
The Data (Use and Access) Act 2025 saw historic ping-pong over Baroness Kidron’s call for greater transparency. I was pleased that the creative industries sector plan published over the summer emphasised the need for extra support, including, notably, a freelance champion, in the wake of AI. Yet we are still a long way from ensuring that the literary sector is protected from the significant harm and potential demise that could be wrought by genAI.
There is a plethora of cutting-edge writing initiatives that would benefit from increased funding of the arts councils. In light of AI harms, I strongly encourage the Government to direct funds to vulnerable minority and under-represented groups to counter the uniform voices that generative systems output. Those programmes are vital for nurturing the emerging talent that underpins the UK’s literary excellence.
In conclusion, the literary industry is experiencing unprecedented uncertainty. We must act now to ensure that the imaginative, emotional and intellectual complexity of great works of literature is not lost and that novelists can continue to thrive.
Emily Darlington (Milton Keynes Central) (Lab)
It is a pleasure to serve under your chairship, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this important debate.
It would be remiss of me, as the MP for Milton Keynes Central, not to acknowledge the opportunities of AI. One in three jobs in Milton Keynes is in tech, often in the edge technologies or edge AIs that are driving the economic growth we want. However, we will not see take-up across businesses unless we have the safest AI, so we must listen to the British Standards Institution, which is located in Milton Keynes and is working on standards for some of these things.
Nevertheless, I have many concerns. The Molly Rose Foundation has raised many issues around AI chatbots, not all of which are covered by current legislation. It has documented how Alexa instructed a 10-year-old to touch a live electrical wire, and how Snapchat’s My AI told a 13-year-old how to lose their virginity to a 31-year-old—luckily, it was an adult posing as a 13-year-old. We have seen other examples involving suicide, and a chatbot claiming that Hitler had the answers to climate change, and research has found that many children are unable to realise that chatbots are not human. AI algorithms also shadow-ban women and women’s health content, as others have mentioned.
The tech is there to make AI safe, but there is little incentive for companies to do so at the moment. The Online Safety Act goes some way, but not far enough. Our priorities must be to tackle the creativity and copyright issues; deepfakes and the damage they do, in particular, to young girls and women; and the misinformation and disinformation that is being spread and amplified by algorithms because it keeps people online longer, making companies money. We must also protect democracy, children, minorities and women.
How do we do that? I hope the Minister is listening. For me, it is about regulation and standards—standards are just as important as regulation—and transparency. The Science, Innovation and Technology Committee has called for transparency on AI algorithms and AI chatbots, but we have yet to see real transparency. We must also have more diversity in tech—I welcome the Secretary of State’s initiatives on that—and, finally, given the world we are in, we must have a clear strategy for the part that sovereignty in AI plays in our security and our economic future.
Ms Dawn Butler (in the Chair)
Order. I would like to try to allow two minutes at the end for the Member in charge to wind up the debate. Will the Front Benchers take that into account, please?
Victoria Collins (Harpenden and Berkhamsted) (LD)
It is a pleasure to serve under your chairmanship, Ms Butler. I congratulate the hon. Member for Dewsbury and Batley (Iqbal Mohamed) on securing this incredible debate. That so many issues have been packed into 90 minutes shows clearly that we need more time to debate this subject, and it falls to the Government to recognise that an AI Bill, or at least further discussion, is clearly needed. The issue now pervades our lives, for the better but in many respects for the worse.
As the Liberal Democrat spokesperson on science, innovation and technology, I am very excited about the positive implications of AI. It can clearly help grow our economy, solve the big problems and help us improve our productivity. However, it is clear from the debate that it comes with many risks that have nothing to do with growing our economy—certainly not the kind of economy we want to grow—including the use of generative AI for child sexual abuse material, children’s growing emotional dependency on chatbots, and the provision of suicide advice.
I have said for a long time that the trust element is so important. It is two sides of the same coin: if we cannot trust this technology, we cannot develop as a society, but trust is also really important for business and our economy. I find it fascinating that so many more businesses are now talking about this and saying, “If we can’t trust this technology, we can’t use it, we can’t spend money on it and we can’t adopt it.” Trust is essential.
If the UK acts fast and gets this right, we have a unique opportunity to be the leader on this. From talking to industry, I know that we have incredible talent and are great at innovating, but we also have a fantastic system for building trust. We need to take that opportunity. It is the right thing to do, and I believe we are the only country in the world that can really do it, but we have to act now.
Sarah Russell
Does the hon. Lady agree that we should be looking hard at the EU’s regulation in this area, and considering alignment and whether there might be points on which we would like to go further?
Victoria Collins
Absolutely, and the point about global co-operation has been made clearly across the Chamber today. The hon. Member for Leicester South (Shockat Adam) talked about what is now the AI Security Institute—it was the AI Safety Institute—and that point about leading and trust is really important. Indeed, I want to talk a little more about safety, because security and safety are slightly different. I see safety as consumer facing, but security is so important. Renaming the AI Safety Institute as the AI Security Institute, as the hon. Member mentioned, undermines the importance of both.
The first point is about AI psychosis and chatbots—this has been covered a lot today, and it is incredibly worrying. My understanding is that the problem of emotional dependency on AI chatbots is not covered by the Online Safety Act. Yes, elements of chatbots are covered—search functionality and user-to-user services, for example—but Ofcom itself has said that there are certain harms from AI chatbots, which we can talk about, that are not covered. We have heard that 1.2 million users a week are talking to ChatGPT about suicide—we heard the example of Adam, who took his own life in the US after talking to a chatbot—and two thirds of 23 to 34-year-olds are turning to chatbots for mental health support. These are real harms.
Of course, the misinformation coming through chatbots also has to be looked at seriously. The hon. Member for York Outer (Mr Charters) mentioned the quality of the facts and advice coming through. Chatbots can achieve powerful outcomes, but we need to make sure they are built in a way that secures that advisory element, perhaps by linking to the NHS or other authoritative sources of advice.
The hon. Member for Milton Keynes Central (Emily Darlington), who has been very passionate about this issue, mentioned the Molly Rose Foundation, which is doing incredible work to expose the harms coming through this black hole. Many do not see those harms, which affect children in ways that parents do not understand, as well as adults.
The harm of deepfakes, including horrific CSAM and sexual material involving victims of all ages, has also been mentioned, and it is impacting our economy too. Just recently, a deepfake was unfortunately made of the hon. Member for Mid Norfolk (George Freeman). The Sky journalist Yalda Hakim was also the victim of a deepfake. She mentioned her worry that it was not only shared thousands of times but also picked up by media in the subcontinent. These things are coming through, and no one who watches them can tell the difference. It is extremely worrying.
As the hon. Member for Congleton (Sarah Russell) said, “Rubbish in, rubbish out.” What is worrying is that, as the Internet Watch Foundation has said, because a lot of the rubbish going in is online sexual content that has been scraped, that is what is coming out.
Then there is AI slop, as the right hon. Member for Oxford East (Anneliese Dodds) mentioned. Some of it is extreme content, but what worries me is that, as many may know, our internet is now full of AI slop—images, stories and videos—where users just cannot tell the difference. I do not know about others, but I often look at something and think, “Ah, that’s really cute. Oh no—that is not real.” What is really insidious is that this is breaking down trust. We cannot tell any more what is real and what is not, and that affects trust in our institutions, our news and our democracy. What we say here today can be changed and passed off as real. Small changes are breaking down trust, and it is really important that that stops. What is the Minister doing about AI labelling and watermarking, to make sure we can trust what we see? That is just one small part of it.
The other thing, which my hon. Friend the Member for Newton Abbot (Martin Wrigley) mentioned, is that AI often magnifies what is already a threat, whether that is online fraud or a security threat. I believe that AI scams cost Brits £1 billion in just the first three months of this year. One third of UK businesses said that they had been victims of AI fraud in the first quarter. And I have not got on to what the hon. Member for Dewsbury and Batley said about moving towards AI in security and defence, and superintelligence. Which of the supposedly “exaggerated” threats will actually become extremely threatening? What are the Government doing to clamp down on these threats, and what are they doing on AI fraud and online safety?
Another issue is global working. One of the Liberal Democrats’ calls is for an AI safety agency, which could be headquartered in the UK; we could take the lead on it. I think that is in line with what the hon. Member for Dewsbury and Batley was talking about. We have this opportunity; we need to take it seriously, and we could be a leader on that.
I will close by reiterating the incredible work that AI could do. We all know that it could solve the biggest problems of tomorrow, and it could improve our wellbeing and productivity, but the threats and risks are there. We have to manage them now, and make sure that trust is built on both sides.
Mr Adnan Hussain (Blackburn) (Ind)
I just want to reaffirm what the hon. Member has said. Does she agree that innovation and safety are not opposites? It reminds me of when Google and online banking first came in. We need clear rules so that we can increase public trust without stifling technology.
Victoria Collins
Absolutely. What is interesting about innovation is that it often thrives with constraints. As I have said, safety is about trust, which is good for business and our economy, and not just for our society.
When will the AI Bill come to Parliament? We really need it; we need to discuss these things. What are the Government doing to reassess the Online Safety Act? Beyond that, in determining how we react to this rapid shift in technology, will they consider the Lib Dems’ call for a digital Bill of Rights, to make sure that standards are set and can adapt to that shift? What are the Government doing about international co-operation on safety and security? As the hon. Member for Blackburn (Mr Hussain) mentioned, we can—we must—have innovation and safety, and safety by design. We can choose both, but only if we act now.
It is a pleasure to serve under your chairmanship, Ms Butler. I am very grateful to the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for bringing this important debate to the House today. He gave a very thoughtful speech, which reflected his clearly very strongly held beliefs about the risks that AI poses. It was quite a broad and wide-ranging debate, and a very interesting one. I will try to be quite brief because I am really keen to hear the hon. Member’s response, along with that of the Minister.
We heard some great points about biased data, shadow banning, the impact on BSL, large language models producing, in effect, regulated advice, and the need for AI in the curriculum—and, of course, copyright came up. What happens when AI is used to mimic MPs’ output—something I suspect our AI Prime Minister also uses?
As hon. Members have observed, the advent of artificial intelligence entails risks but is also a once-in-a-generation opportunity. The previous Government were acutely aware of the importance of putting the UK at the forefront of both intergovernmental and industry discussions regarding the development of AI. They convened the world’s first AI safety summit, which took place at Bletchley Park in late 2023 and which many Members have referenced, and established the AI Safety Institute—now renamed the AI Security Institute—in the same year.
Reports about the risks to children’s safety posed by tools such as one-to-one and personal agent chatbots promoting suicide and self-harm content are of great concern. It is right that policymakers act quickly to address serious and specific threats when they emerge, and we welcome the Government’s recent action on measures to tackle AI-generated child sexual abuse images.
Recently, other hon. Members and I have pressed the Government to clarify the application of the Online Safety Act to one-to-one and personal agent AI chatbots. The Minister has confirmed that the Government have commissioned work to look at whether there are any loopholes in the Act that would mean that some AI chatbot services are unregulated. The recent report of the Science, Innovation and Technology Committee has also highlighted the risks to democratic integrity posed by cyber-bots pushing out AI-generated deepfake material purporting to represent authentic political content to distort public narratives, particularly during elections. We clearly need to go further to address those important and growing risks, so I would be grateful if the Minister could provide an update on those two points.
Despite much rhetoric, the Government have been completely inconsistent regarding their intentions on AI legislation. Having stated in their manifesto that they would bring in “binding regulation” for the “most powerful AI models”, they have repeatedly kicked the can down the road, with the Secretary of State suggesting during a SIT Committee evidence session earlier this month that there would be no generally applicable AI legislation in this Parliament. The uncertainty caused by the Government’s failure to be clear about their plans for AI regulation damages public confidence in this developing technology. Crucially, it also undermines business confidence, with a chilling knock-on effect on investment and innovation.
We appreciate that AI regulation is far from straightforward, given the rapidly evolving innovations, challenges and developments, and we caution against going down the route that the EU has taken for AI regulation. However, it is clear that we need a plan that ensures that our education system equips children with the skills necessary for the jobs of the future, and a strategy to prepare and, where necessary, retrain the parts of our workforce that stand to be the most affected by changes to the employment market brought about by AI.
We need to be alert to the risks and changes that AI development brings—AI must always be the agent and never the principal—but we must not lose sight of the tremendous opportunities that it offers. The UK should be at the forefront of developing artificial intelligence and reap the benefits of a substantial home-grown AI industry. AI has the potential to revolutionise service delivery and improve productivity on an unprecedented scale, and those productivity gains can drive much-needed improvements in our overstretched public services, hospitals, local authorities, court services and prisons, to name but a few. The rapid processing of routine tasks will lead to better and quicker service provision across the board.
Perhaps the most pressing issue is the role that AI will play in the defence of our country. Some hon. Members have spoken about the existential risk posed to humanity by the most powerful AI models, but in an era of regional conflict and intensifying global competition, the notion that hostile state actors will observe international protocols on AI development is naive at best and dangerous at worst. AI has become indispensable to our defence capacity and security. The ability of AI to detect and neutralise cyber and biosecurity threats will become increasingly vital. High-tech AI drone warfare has drastically changed the nature of conflict, as we see in Ukraine. Put simply, the UK, working wherever possible with its international allies and partners, must be in a position to counter the deployment of AI systems that disregard the norms and ethics that the UK seeks to uphold.
We cannot afford to be left behind. We must develop our capabilities at speed by tackling the barriers to the development of the UK AI industry, including high energy costs and access to investment. We must ensure that we are alive to, and safeguard against, the most serious emerging risks. With that in mind, will the Minister provide an update on the Government’s plans to support growth in the UK AI industry, including in relation to securing lawful access to reliable datasets for training?
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)
It is a pleasure to serve with you in the Chair, Ms Butler, for my first Westminster Hall debate. It is a particular pleasure to have you bring your technological expertise to the Chair, and to have the hon. Member for Strangford (Jim Shannon) reliably present at my first debate, along with the UK’s—perhaps the world’s—first AI MP, my hon. Friend the Member for Leeds South West and Morley (Mark Sewards). It is a distinct pleasure to serve with everyone present and the expertise they bring. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate on AI safety. I am grateful to him and to all Members for their very thoughtful contributions to the debate.
It is no exaggeration to say that the future of our country and our prosperity will be led by science, technology and AI. That is exactly why, in response to the question on growth posed by the hon. Member for Runnymede and Weybridge (Dr Spencer), we recently announced a package of new reforms and investments to use AI to power national renewal. We will drive growth through developing new AI growth zones across north and south Wales, Oxfordshire and the north-east, creating opportunities for innovation by expanding access to compute for British researchers and scientists.
We are investing in AI to drive breakthroughs in developing new drugs, cures and treatments. But we cannot harness those opportunities without ensuring that AI is safe for the British public and businesses, nor without agency over its development. I was grateful for the points made by my hon. Friend the Member for Milton Keynes Central (Emily Darlington) on the importance of standards, and by the hon. Member for Harpenden and Berkhamsted (Victoria Collins) on the importance of trust.
That is why the Government are determined to make the UK one of the best places to start a business, scale it up and keep it on our shores, especially in the UK AI assurance and standards market. Our trusted third-party AI assurance roadmap and AI assurance innovation fund are focused on supporting the growth of UK businesses and organisations providing innovative AI products that are proven to be safe for sale and use. We must ensure that the AI transformation happens not to the UK but with and through the UK.
Consistent with the points raised by my hon. Friend the Member for Milton Keynes Central, we are backing the sovereign AI unit, with almost £500 million in investment, to help build and scale AI capabilities on British shores that reflect our country’s needs, values and laws. Our approach to AI laws seeks to ensure that we balance growth and safety, and that we remain adaptable in the face of inevitable AI change.
On growth, I am glad to hear the points made by my hon. Friend the Member for Leeds South West and Morley about a space for businesses to experiment. We have announced proposals for an AI growth lab that will support responsible AI innovation by making targeted regulatory modifications under robust safeguards. That will help drive trust by providing a carefully bounded safe space for experimenting with and trialling innovative products and services. Regulators will monitor that very closely.
On safety, we understand that AI is a general-purpose technology with a wide range of applications. Recognising the contribution from the hon. Member for Newton Abbot (Martin Wrigley), I reaffirm some of the points he made about being thoughtful in regulatory approaches that distinguish between the technology and its specific use cases. That is why we believe that the vast majority of AI should be regulated at the point of use, where the risk arises and where tractable action is most feasible.
A range of existing rules already applies to those AI systems in application contexts. Data protection and equality legislation protect the UK public’s data rights. They prevent AI-driven discrimination where the systems decide, for example, who is offered a job or credit. Competition law helps shield markets from AI uses that could distort them, including algorithmic collusion to set unfair prices.
Sarah Russell
As a specialist equality lawyer, I am not currently aware of any cases in the UK around the kind of algorithmic bias that I am talking about. I would be delighted to see some, and delighted to see the Minister encouraging that, but I am not sure that the regulatory framework would achieve that at present.
Kanishka Narayan
My hon. Friend brings deep expertise from her past career. If she feels there are particular absences in the legislation on equalities, I would be happy to take a look, though that has not been pointed out to me to date.
The Online Safety Act 2023 requires platforms to manage harmful and illegal content risks, and offers significant protection against harms online, including those driven by AI services. We are supporting regulators to ensure that those laws are respected and enforced. The AI action plan commits to boosting AI capabilities through funding, strategic steers and increased public accountability.
There is a great deal of interest in the Government’s proposals for new cross-cutting AI regulation, not least shown compellingly by my right hon. Friend the Member for Oxford East (Anneliese Dodds). The Government do not speculate on legislation, so I am not able to predict future parliamentary sessions, although we will keep Parliament updated on the timings of any consultation ahead of bringing forward any legislation.
Notwithstanding that, the Government are clearly not standing still on AI governance. The Technology Secretary confirmed in Parliament last week that the Government will look at what more can be done to manage the emergent risks of AI chatbots, raised by my hon. Friend the Member for York Outer (Mr Charters), my right hon. Friend the Member for Oxford East, my hon. Friend the Member for Milton Keynes Central and others.
Alongside those comments, the Technology Secretary urged Ofcom to use its existing powers to ensure that AI chatbots in scope of the Act are safe for children. Further to the clarifications I have provided previously across the House, if hon. Members have a particular view on where there are exceptions or gaps in the Online Safety Act on AI chatbots that correlate with risk, we would welcome any contribution through the usual correspondence channels.
Kanishka Narayan
I have about two minutes, so I will continue the conversation with my hon. Friend outside.
We will act to ensure that AI companies are able to make their own products safe. For example, the Government are tackling the disgusting harm of child sexual exploitation and abuse with a new offence to criminalise AI models that have been optimised for that purpose. The AI Security Institute, which I was delighted to hear praised across the House, works with AI labs to make their products safer and has tested over 30 models at the frontier of development. It is the best in the world at developing partnerships, understanding security risks and innovating safeguards. Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber-tasks and biological weapon development.
The UK Government do not act alone on security. In response to the points made by the hon. Members for Ceredigion Preseli (Ben Lake), for Harpenden and Berkhamsted, and for Runnymede and Weybridge, it is clear that we are working closely with allies to raise security standards, share scientific insights and shape responsible norms for frontier AI. We are leading discussions on AI at the G7, the OECD and the UN. We are strengthening our bilateral relationships on AI for growth and security, including AI collaboration as part of recent agreements with the US, Germany and Japan.
I will take the points raised by the hon. Members for Dewsbury and Batley, for Winchester (Dr Chambers) and for Strangford, and by my hon. Friend the Member for York Outer, on health advice and how we can ensure that the quality of NHS advice is privileged in wider AI chatbot engagement, as well as the points made by my hon. Friend the Member for Congleton and my right hon. Friend the Member for Oxford East on British Sign Language standards in AI. These are important points that I will look at further.
To conclude, the UK is realising the opportunities of transformative AI while ensuring that growth does not come at the cost of security and safety. We are doing so by stimulating AI safety assurance markets, empowering our regulators, ensuring our laws are fit for purpose, and driving change through AISI and diplomacy.
Iqbal Mohamed
I thank all right hon. and hon. Members for taking part in this important debate. Amazing, informed and critical contributions have been made, and it is important that the Government hear all of them.
There were issues that I did not have time to cover, such as the military and ethical use of AI. Ethical use is important in any field where AI, or any technology, is applied. It is really hard at the moment to train AI in innate human ethical and moral behavioural standards. That is a challenge that needs to be overcome.
Where actions taken by AI can lead to harm, we should have human oversight and the four-eyes principle, or what we used to call GxP in the pharmaceutical industry, where a second person oversees certain approvals and decisions. Issues were raised around misinformation and its impact on our democracy, and on the credibility and trustworthiness of individuals. Also raised were the gender imbalance in how AI represents men versus women, and the volume of training data that is scraped from pornographic videos or videos depicting sexual violence against women. There should be a review of how that could be addressed quickly if AI regulation is not coming any time soon.
On the last couple of points, I just reiterate my call for the UK to bring itself up to current international standards and to start taking a lead in building on, developing and refining them, so that they are not destructive and damaging to growth and progress. On the investments and subsidies that we provide to foreign tech companies: they are great at onshoring costs, taking tax reliefs and subsidies, and offshoring profits, so we never benefit from the gains and the profits. I hope the Minister will look into that. Again, I thank everyone.
Question put and agreed to.
Resolved,
That this House has considered AI safety.