AI Safety Debate
Ben Lake (Plaid Cymru - Ceredigion Preseli)
Westminster Hall
It is a pleasure to serve under your chairmanship, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this very important debate, and for outlining so impressively the real benefits already realised by narrow AI systems, the potential benefits still to come, and, perhaps most importantly, the real risks that more advanced systems pose to human safety and security.
I should like to make one very simple point in my remarks: while we need to recognise the benefits of AI and the development of various models, we should adopt a safety-first approach, especially when it comes to the development of more advanced AI systems. I am very concerned that the apparent arms race we are witnessing, with various big AI and tech companies racing towards superintelligence and other advanced AI models, means that we lack democratic control, as the hon. Member for Poole (Neil Duncan-Jordan) so eloquently put it, over things that could have a real impact on the lives of our constituents, our society and, indeed, civilisation more broadly.
As the hon. Member for Dewsbury and Batley outlined in his speech, we have already found some advanced models deploying techniques to try to avoid human control. Apollo Research found examples of one of OpenAI’s models trying to deceive users in order to accomplish its goals and, perhaps most worryingly, to disable its monitoring mechanisms and guardrails. Those are real risks in the development of AI, and we should take them seriously. It is no wonder that leading AI experts Geoffrey Hinton and Yoshua Bengio have called for a prohibition on research into and development of superintelligence until there is broad scientific consensus that it can be done safely, and with some degree of human and democratic control. To be effective, however, such a prohibition must be global: we must have the buy-in of the big AI powers, not just the EU but the United States and China.
In that regard, I wish to lay a challenge before the Minister. The UK Government can lead these efforts by using their unique convening power, as demonstrated in 2023, to bring those AI superpowers together for an AI safety summit. I appreciate that the 2023 Bletchley Park summit has been followed by others, including one in Paris, and that another is coming up in Delhi. I would caution, though, that those summits seem to prioritise the potential economic benefits of AI and the pursuit of growth. I think we have the growth side of things sorted; we need to focus again on safety. A global consensus, and a prohibition on superintelligence until we can understand and control it, would be of great benefit to society.