Artificial Intelligence in Weapon Systems Committee Report Debate

Department: Ministry of Defence


Friday 19th April 2024


Lords Chamber
Lord Houghton of Richmond (CB)

My Lords, it is a delight to follow the noble Lord, Lord Browne, whose companionship in the committee was but one of its many delights.

I start by drawing attention to my relevant interests in the register, particularly my advisory role with three companies, Thales, Tadaweb and Whitespace, all of which have some interest in the exploitation of AI for defence purposes.

It is great to see a few dedicated attendees of the Chamber still here late into Friday. My motivation to speak is probably as much to do with group loyalty as with the likelihood of further value added, so I will keep my comments short and focused on some contextual observations on the committee’s work, rather than the pursuit of additional insights. There is not much more I want to stress or prioritise regarding the actual conclusions and recommendations of the report, and our chairman’s opening remarks were typically excellent and comprehensive. However, there are some issues of context that are worth giving some prominence to. I will offer half a dozen, all of which represent not the committee’s view but a personal one.

The first comment is that the committee probably thought itself confronted by a prevailing sense of negativity about the risks of AI in general and autonomous weapons systems in particular. The negativity was not among the committee’s membership but rather among many of our expert witnesses, some of whom were technical doom-mongers, while others seemed to earn their living by turning what is ultimately a practical problem of battlefield management into an ethical challenge of Gordian complexity.

My second comment is specifically on the nature of the technical evidence that we heard, which, if not outright conflicting, was at least extremely diverse in its views on risk and timescale, particularly on the risks of killer robots achieving what you might call self-generated autonomy. The result was that, despite much evidence to the contrary, it was very difficult to wholly liberate ourselves from a sense of residual ignorance of some killer fact. I judge, therefore, that this is a topic that will, as we go forward, require persistent and dynamic stewardship.

My third observation relates to the Damascus road. I think that the committee experienced a conversion to an understanding of how, in stark contrast, for example, to financial services, the use of lethal force on the modern battlefield is already remarkably well regulated, at least by the armed forces of more civilised societies. In this context, I think that the committee achieved a more general understanding, confirmed by military professionals, that humans will nearly always be the deciding factor in the use of lethal force when any ethical or legal constraint is in play. Identifying the need to preserve the pre-eminence of human agency is perhaps the single most important element of the committee’s findings.

My fourth comment is that the committee’s deliberations played out in the context of the obscene brutality in Ukraine and Gaza. It was a constant concern not to deny our own people, if you like, the benefits of ethical autonomy. There is so much beneficial advantage to be derived from AI in autonomy that we would be mad not to proceed with ways to exploit it, even if the requirements of regulation will undoubtedly constrain us in ways that patently will not trouble many of our potential enemies.

My fifth comment, it follows, is on our chosen title, Proceed with Caution. I forget whether this title was imposed by our chairman or more democratically agreed by the whole committee. I wholly accept that “proceed with reckless abandon” would not have survived the secretariat’s editorship, but, on a wholly personal level, I exhort the Minister to reassure us that the Government will not allow undue caution to inhibit progress. I fear that defence is currently impoverished, so it has to be both clever and technically ambitious.

I want to say something by way of wider context. The object of our study, AI in autonomous weapon systems, necessarily narrowed the focus of the committee’s attention to conflict above the threshold of formalised warfare. However, I think the whole committee was conscious of the ever-increasing scale of conflict in what is termed the grey zone, below the threshold of formalised warfare, where the mendacious use of AI to create alternative truths, undermine democracy and accelerate the decline of social integrity is far less regulated and far more existentially threatening to our way of life. This growing element of international conflict undoubtedly demands our urgent attention.