Monday 24th July 2023

Lords Chamber
Lord Fairfax of Cameron (Con)

My Lords, it is a great pleasure to follow the right reverend Prelate. I declare my interest as a member of the AI in Weapon Systems Committee. I very much thank the noble Lord, Lord Ravensdale, for choosing for debate a subject that arguably now trumps all others in the world in importance. I also owe a debt of gratitude to a brilliant young AI researcher at Cambridge who is researching AI risk and impacts.

I could, but do not have the time to, discuss the enormous benefits that AI may bring and some of the well-known risks: possible extreme concentration of power; mass surveillance; disinformation and manipulation, for example of elections; and the military misuse of AI—to say nothing of the possible loss, as estimated by Goldman Sachs, of 300 million jobs globally to AI. Rather, in my five minutes I will focus on the existential risks that may flow from humans creating an entity that is more intelligent than we are. Five minutes is not long to discuss the possible extinction of humanity, but I will do my best.

Forty years ago, if you said some of the things I am about to say, you were called a fruitcake and a Luddite, but no longer. What has changed in that time? The changes result mainly from the enormous development in the last 10 years of machine learning and its very broad applicability—for example, distinguishing images and audio, learning patterns in language, and simulating the folding of proteins—as long as you have the enormous financial resources necessary to do it.

Where is all this going? Richard Ngo, a researcher at OpenAI and previously at DeepMind, has publicly said that there is a 50:50 chance that by 2025 neural nets will, among other things, be able to understand that they are machines and how their actions interface with the world, and to autonomously design, code and distribute all but the most complex apps. Of course, the world knows all about ChatGPT.

At the extreme, artificial systems could solve strategic-level problems better than human institutions, disempower humanity and lead to catastrophic loss of life and value. Godfathers of AI, such as Geoffrey Hinton and Yoshua Bengio, now predict that such things may become possible within the next five to 20 years. Despite two decades of concentrated effort, there has been no significant progress on, nor consensus among AI researchers about, credible proposals on the problems of alignment and control. This led many senior AI academics—including some prominent Chinese ones, I emphasise—as well as the leaderships of Microsoft, Google, OpenAI, DeepMind and Anthropic, among others, recently to sign a short public statement, hosted by the Center for AI Safety:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

In other words, they are shouting “Fire!” and the escape window may be closing fast.

As a result, many Governments and international institutions, such as the UN and the EU, are suddenly waking up to the threats posed by advanced AI. The Government here are to host a global AI safety summit this autumn, apparently, but Governments, as some have said, are starting miles behind the start line. It will be critical for that summit to get the right people in the room and in particular not to allow the tech giants to regulate themselves. As Nick Bostrom wrote:

“The best path towards the development of beneficial superintelligence is one where AI developers and AI safety researchers are on the same side”.

What might the shape of AI regulation look like? Among other things, as the noble Lord, Lord Ravensdale, said, Governments need to significantly increase the information they have about the technological frontiers. The public interest far outweighs commercial secrecy. This challenge is international and global; AI, like a pandemic, knows no boundaries.

Regulation should document the well-known and anticipated harms of societal-scale AI and incentivise developers to address these harms. Best practice for the trustworthy development of advanced AI systems should include regular risk assessments, red teaming, third-party audits, mandatory public consultation, post-deployment monitoring, incident reporting and redress.

There are those who say that this is all unwarranted scaremongering, as some have touched on this afternoon, and that “there is nothing to see here”. But that is not convincing because those people—and they know who they are—are transparently just talking their own commercial and corporate book. I also pray in aid the following well-known question: would you board an aeroplane if the engineers who designed it said it had a 5% chance of crashing? Some, such as Eliezer Yudkowsky, say that we are already too late and that, as with Oppenheimer, the genie is already out of the bottle; all humanity can do is to die with dignity in the face of superhuman AI.

Nevertheless, there are some very recent possible causes for hope, such as the just-announced White House voluntary commitments by the large tech companies and the Prime Minister’s appointment of Ian Hogarth as the chair of the UK Government’s AI Foundation Model Taskforce.

For the sake of humanity, I end with the words of Dylan Thomas:

“Do not go gentle into that good night …

Rage, rage against the dying of the light”.

Given that we are starting miles behind the start line, I refer to Churchill’s well-known exhortation: “Action this day”.