Superintelligent AI Debate
Lord Clement-Jones (Liberal Democrat - Life peer)
Lords Chamber

My Lords, I declare an interest as a consultant on AI regulation and policy for DLA Piper. I too thank the noble Lord, Lord Hunt of Kings Heath, for provoking an extremely profound and thoughtful debate on an international moratorium on superintelligent AI development. I was very interested that he cited the Warnock approach as one to be emulated in this field. That was certainly one that our House of Lords Artificial Intelligence Committee recommended eight years ago, but sadly it has not been followed.
For nine years, I have co-chaired the All-Party Parliamentary Group on Artificial Intelligence. I remain optimistic about AI’s potential, but I am increasingly alarmed about our trajectory, particularly in the field of defence. Superintelligence—AI surpassing human intelligence across all domains—is the explicit goal of major AI companies. Many experts predict that we could reach this within five to 10 years. In September 2025, Anthropic detected the first large-scale cyber espionage campaign using agentic AI. Yoshua Bengio, one of the godfathers of AI development, warns that these systems show “signs of self-preservation”, choosing their own survival over human safety.
Currently, no method exists to contain or control smarter-than-human AI systems. This is the “control problem” that Professor Stuart Russell describes: how do we maintain power over entities more powerful than us? That is why I joined the Global Call for AI Red Lines, which was launched at the UN General Assembly by over 300 prominent figures, including Nobel laureates and former Heads of State. They call for international red lines to prevent unacceptable AI risks, including prohibiting superintelligence development, until there is broad scientific consensus on how it can be done safely and with strong public buy-in.
ControlAI’s UK campaign, described by the noble Lord, Lord Hunt, is backed by more than 100 cross-party parliamentarians in the UK. Its proposals include banning deliberate superintelligence development, prohibiting dangerous capabilities, requiring safety demonstrations before deployment, and establishing licensing for advanced AI.
The Montreal Protocol on Substances that Deplete the Ozone Layer offers a precedent. In 1987, during the Cold War, every country signed it within two years. When threats are universal, rapid international agreements are possible. Superintelligence presents such a threat. Yet the current situation is discouraging. The US has rejected moratoria. Sixty countries signed the Paris AI Summit declaration in February 2025, but the UK did not. Even Anthropic's CEO, who has been widely quoted today, admits that we understand only 3% of how current systems work. Today, AI systems are grown through processes their creators cannot interpret.
The Government’s response has been inadequate. Ministers focus on regulating the use of AI tools rather than their development. But this approach fails fundamentally when facing superintelligence. Once a system surpasses human intelligence across all domains, we cannot simply regulate how it is used. We will have lost the ability to control it at all. You cannot regulate, sector by sector, the use of something more intelligent than the regulator.
Our AI Security Institute, as the noble Lord, Lord Tarassenko, pointed out, has advisory powers only. We were promised binding regulation in July 2024, but we have seen neither consultation nor draft legislation. Growth and safety are not mutually exclusive. Without public confidence that systems are under human control, adoption will stall.
It is clear what the Government should do. The question is whether we will act with the seriousness this moment demands or whether competitive pressures will override the fundamental imperative of keeping humanity in control. I look forward to the Minister’s response.