Superintelligent AI

Lord Markham Excerpts
Thursday 29th January 2026

Lords Chamber
Lord Markham (Con)

I too thank the noble Lord, Lord Hunt, for bringing this serious issue in front of us today. Like others, I wish we had more time, but I think this shows the Lords at its best. We have had technology know-how, regulatory expertise, philosophy, religious wisdom—we have even learned that it is a good time to be old. So that is definitely something to look forward to.

Of course, we all know AI is a massive force for good. I have seen it first-hand in the health space. But we also know the risks of superintelligent AI. Examples have been mentioned where AI has taken to blackmail in the case of self-preservation. So I think we all understand the dangers of non-alignment and AI pursuing different objectives from our own. We are all aware that some very serious and knowledgeable people in this space talk about risks of 10% or so, which we would all agree is pretty significant.

For me, though, the real question is: how do we go about regulating? As we know, it works only if everyone in the world follows. Nuclear weapons, for example, are pretty hard to build: you need massive infrastructure, you need to enrich uranium, you need state-like resources, and it can be observed worldwide. But despite all of that, we have still had proliferation. We have still had the likes of North Korea getting nuclear weapons. Building superintelligent AI requires much more limited resources; it is much easier to hide, and so much easier for rogue states such as North Korea—or, dare I say it, an al-Qaeda—to develop it without detection, without us being able to do anything about it. If we really believe in the power of superintelligence, we have to accept that it is probably a winner-takes-all world, and whoever gets there first is likely to be the winner who takes all.

For me, while I worry about some of the dangers of us in the west developing it, I have to say that I worry even more about North Korea or al-Qaeda getting there first if we go ahead and tie one of our hands, or both our hands, behind our backs through a one-sided moratorium. There are things that we should be and are working on, and the AI Safety Institute is a very good example of that. Heavy investment in monitoring the programming, opening up the models and checking that they really are aligned—there probably is no limit to the resources we should be putting into that, and into investigating whether we can put kill switches into these things. If we do find a way to do it, then let us offer that to the world, because it has to be in our interests that everyone who is developing this has access to a kill switch. That definitely makes sense, but a one-sided moratorium which ties our hands behind our backs while the likes of North Korea and the al-Qaedas of the world crack on: no, I am afraid that worries me even more.