Advanced Artificial Intelligence

Lord Houghton of Richmond Excerpts
Monday 24th July 2023


Lords Chamber
Lord Houghton of Richmond (CB)

It is a pleasure to follow the noble Lord, Lord Kakkar, and I thank the noble Lord, Lord Ravensdale, for scheduling this most timely debate. I draw attention to my relevant interests in the register, specifically my advisory roles with various tech companies—notably Tadaweb, Whitespace and Thales—and my membership of the House of Lords AI in Weapon Systems Committee.

It is as a result of my membership of that committee that I am prompted to speak, but I emphasise that the committee, although it has collected a huge amount of evidence, is still some way off reaching its final conclusions and recommendations. So today I speak from a purely personal and military perspective, not as a representative of the committee’s considered views. I want to make a few broad points in the context of the regulation of artificial intelligence in weapon systems.

First, it is clear to me at least, from the wide range of specialist evidence given to our committee, that much of that evidence is conflicted, lacks consensus or is short of factual support. This is especially true of the technology, the capability of which is mostly concerned with future risk rather than current reality. Secondly, it is reasonably clear that, although there is no perfect equilibrium, there are as many benefits to modern warfare from artificial intelligence as there are risks. I think of such things as greater precision, less collateral damage, speed of action, fewer human casualties, less human frailty and a greater deterrent effect—but this is not to deny that there are significant risks. My third general point is that to deny ourselves the potential benefit of AI for national military advantage in Armed Forces increasingly lacking scale would surely not be appropriate. It will most certainly not be the course of action that our enemies will pursue, though they may well impel us to do so through devious means.

My own view, therefore, is that the sensible policy approach to the future development of AI in weapon systems is to proceed but to do so with caution, the challenge being how to satisfactorily mitigate the risks. To some extent, the answer is regulation. The form that might take is up for further debate and refinement but, from my perspective, it should embrace at least three main categories.

The first would be the continued pursuit of international agreement or enhancements to international law or treaty obligations to prevent the misuse of artificial intelligence in lethal weapon systems. The second would be a refined regulatory framework which controlled the research, development, trials, testing and, ultimately—when passed—the authorisation of AI-assisted weapon systems prior to operational employment. This could be part of a national framework initiative.

As an aside, I think I can say without fear of contradiction that no military commander—certainly no British one—would wish to have the responsibility for a fielded weapon system that made autonomous judgments through artificial intelligence, the technology and reliability of which was beyond human control or comprehension.

The third area of regulation is on the battlefield itself. This is not to bureaucratise the battlefield. I think I have just about managed to convince a number of my fellow committee members that the use of lethal force on operations is already a highly regulated affair. But there would need to be specific enhancements to the interplay between levels of autonomy and the retention of meaningful human control. This is needed both to retain human accountability and to ensure compliance with international humanitarian law. This will involve a quite sophisticated training burden, but none of this is insurmountable.

I want to finish with two general points of concern. There are two dangerous and interlinked dynamics regarding artificial intelligence and the nature of future warfare. Together, they require us to reimagine the way future warfare may be, and arguably already is being, conducted. Future warfare may not be defined by the outcome of military engagement in set-piece battles that test the relative military quality of weapons, humans and tactics. The desirability of avoiding the unpredictability of crossing the threshold of formalised warfare may cause many people, including political leaders, to think of alternative means of gaining international competitive advantage.

The true dangers of artificial intelligence, in a reimagined form of warfare that is below the threshold of formalised war, lie in its ability to exploit social media and the internet of things to radicalise, fake, misinform, disrupt national life, create new dependencies and, ultimately, create alternate truths and destroy the democratic process. By comparison with the task of regulating this reimagined form of warfare, the regulation of autonomous weapon systems is relatively straightforward.