AI Systems: Risks Debate

Lord Goldsmith of Richmond Park (Conservative - Life peer)

Thursday 8th January 2026

Grand Committee

Lord Goldsmith of Richmond Park (Con)

My Lords, I also thank my noble friend Lord Fairfax of Cameron for securing this hugely important debate. Like other noble Lords, I very much acknowledge the transformative potential of AI, not least in areas such as medicine.

However, there are dangers. We would be mad to ignore them because many of the same people who built this technology—people who have won Nobel Prizes and Turing Awards—are warning that AI poses an extinction risk to humanity. Hundreds of AI experts recently co-signed a letter that said:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.


Among the signatories, you will see the names of Sam Altman—CEO of OpenAI, as noble Lords will know—and Geoffrey Hinton, often referred to as the godfather of AI.

A separate letter signed by Elon Musk and Steve Wozniak, among many others, reads:

“AI systems with human-competitive intelligence can pose profound risks to society and humanity”.


It also calls for a moratorium on the next generation of AI until we know more about it. These people cannot be dismissed as Luddites or technophobes; they are the architects of this brave new world. They recognise that superintelligent AI would be far more powerful than any of us can comprehend and would have the capacity to overwhelm us.

It is not just that we do not understand where things will end up; we do not even understand where things are today. The CEO of Anthropic, one of the world’s largest AI companies, admitted:

“Maybe we … understand 3% of how”


AI systems work. That is hardly reassuring. We have already had a glimpse of what losing control looks like: in one experiment, an Anthropic AI system attempted to blackmail the engineer overseeing it when told that it was going to be shut down.

However, while so many AI experts and AI bosses are blowing the whistle, Governments are miles behind; in my view, our Government need to step up. They can start by acknowledging the existential risk of advanced AI and by joining the numerous UK parliamentarians, including many in this Room today, who have called for a prohibition on the development of superintelligence unless and until we know how it can be controlled.

Finally, there are a number of important choke points in the AI supply chain that offer opportunities to monitor and control its development. The most advanced AI systems depend on state-of-the-art chips, which are produced using a scarce supply of lithography machines. Those chips are then installed in massive data centres, including some being built right now here in the UK. This raises some questions. Who should have access to those chips? Should data centres be required to include emergency shutdown mechanisms, as nuclear power plants are?

I do not pretend to be an expert on this subject, but there are big questions here that need answering. The Government need to get down to the job of answering them now, before we are left scrambling for solutions after the fact.