AI in the UK (Liaison Committee Report)

Lord Browne of Ladyton Excerpts
Wednesday 25th May 2022

Grand Committee
Lord Browne of Ladyton (Lab)

My Lords, it is a significant pleasure to follow the noble Lord, Lord Holmes. I admire and envy his knowledge of the issue, but mostly I admire and envy his ability to communicate about these complex issues in a way that is accessible and, on occasions, entertaining. A couple of times during the course of what he said, I thought, “I wish I’d said that”, knowing full well that at some time in future I will, which is the highest compliment I can pay him.

As was specifically spelled out in the remit of the Select Committee on Artificial Intelligence, the issues that we are debating today have significant economic, security, ethical and social implications. Thanks to the work of that committee and, to a large degree, the expertise and leadership of the noble Lord, Lord Clement-Jones, the committee’s report is evidence that it fully met the challenge of that remit. Since its publication in April 2018—and I know this from the many volunteered opinions that I have received since then—the report has gained a worldwide reputation for excellence. It is proper, therefore, that this should be the first report to which the new procedure put in place by the Liaison Committee, to follow up on committee recommendations, is applied.

I wish to address in my remarks the issue of policy on autonomous weapons systems. I think it is known throughout your Lordships’ House that I have prejudices about this issue—but they are informed prejudices, so I share them at any opportunity I get. The original report, as the noble Lord, Lord Clement-Jones, said, referred to lethal autonomous weapons and particularly to the challenge of definition, which continues, but that was about as far as the committee went. As I recollect, this weaponry was not the issue that gave the committee the most concern; the committee went no further because it did not have the capacity to address it, saying that it deserved an inquiry of its own. Unfortunately, that inquiry has not yet taken place, but it may do soon.

The report that we are debating comments, in paragraph 83, on the welcome establishment of the Autonomy Development Centre, announced by the Prime Minister on 19 November 2020 and described as a new centre dedicated to AI, to accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems. It highlights that the work of that centre will be “inhibited” owing to the lack of alignment of the UK’s definition of autonomous weapons with the definitions used by international partners. The Government’s response, while agreeing on the importance of ensuring that official definitions do not undermine our arguments or diverge from those of our allies, went further, acknowledging that the various definitions relating to autonomous systems are challenging and setting out, at length, a comparison of them.

Further, we are told that the Ministry of Defence is preparing to publish a new defence AI strategy that will allow the UK to participate in international debates and act as a leader in this space, and that the definitions will be continually reviewed as part of that work. It is hard not to conclude that this response alone justifies the warning of the danger of “complacency” deployed in the title of the report.

On the AI strategy, the ministerial response on 18 May to my contribution to the Queen’s Speech debate was, in its entirety, an assurance that the strategy would be published before the Summer Recess. We will wait and see. I look forward to that, but there is today an urgent need for strategic leadership by the Government and for scrutiny by Parliament as AI plays an increasing role in the changing landscape of war. Rapid advancements in technology have put us on the brink of a new generation of warfare, in which AI plays an instrumental role in the critical functions of weapons systems.

In the Ukraine war, in April, a senior Defense Department official said that the Pentagon is quietly using AI and machine-learning tools to analyse vast amounts of data, generate useful battlefield intelligence and learn about Russian tactics and strategy. Just how much the US is passing to Ukraine is a matter for conjecture, which I will not engage in; I am not qualified to do so anyway. A powerful Russian drone with AI capabilities has been spotted in Ukraine. Meanwhile, Ukraine has itself employed controversial facial recognition technology. Vice Prime Minister Fedorov told Reuters that it had been using Clearview AI—software that uses facial recognition—to discover the social media profiles of deceased Russian soldiers, which authorities then use to notify their relatives and offer arrangements for their bodies to be recovered. If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. That is not a remote possibility; last year the UN reported that an autonomous drone had killed people in Libya in 2020. There are unconfirmed reports of autonomous weapons already being used in Ukraine, although I do not think it is helpful to repeat some of that because most of it is speculation.

We are seeing a rapid trend towards increasing autonomy in weapons systems. AI and computational methods are allowing machines to make more and more decisions themselves. We urgently need UK leadership to establish, domestically and internationally, when it is ethically and legally appropriate to delegate to a machine autonomous decision-making about when to take an individual’s life.

The UK Government, like the US, see AI as playing an important role in the future of warfighting. The UK’s 2021 Integrated Review of Security, Defence, Development and Foreign Policy sets out the Government’s priority of

“identifying, funding, developing and deploying new technologies and capabilities faster than our potential adversaries”,

presenting AI and other scientific advances as “battle-winning technologies”—in what in my view is the unhelpful context of a race. My fear of this race is that at some point the humans will think they have gone through the line but the machines will carry on.

In the absence of an international ban, it is inevitable that eventually these weapons will be used against UK citizens or soldiers. Advocating international regulation would not be abandoning the military potential of new technology, as is often argued. International regulation of autonomous weapons systems is needed to give our industry guidance, so that we can be a sci-tech superpower without undermining our security and values. Only this week, the leaders of the German engineering industry called for the EU to create specific law and tighter regulation on autonomous and dual-use weapons, as they need to know where the line is and cannot be expected to draw it themselves. They have stated:

“Imprecise regulations would do damage to the export control environment as a whole.”


Further, systems that operate outside human control do not offer genuine or sustainable advantage in the achievement of our national security and foreign policy goals. Weapons that are not aligned with our values cannot be effectively used to defend our values. We should not be asking our honourable service personnel to utilise immoral weapons—no bad weapons for good soldiers.

The problematic nature of non-human-centred decision-making was demonstrated dramatically when the faulty Horizon software was used to prosecute 900-plus sub-postmasters. Let me explain. In 1999, totally coincidentally, at the same time as the Horizon software began to be rolled out in sub-post offices, a presumption was introduced into the law on how courts should consider electronic evidence. The new rule followed a Law Commission recommendation for courts to presume that a computer system has operated correctly unless there is explicit evidence to the contrary. This legal presumption replaced a section of the Police and Criminal Evidence Act 1984, PACE, which stated that computer evidence should be subject to proof that it was in fact operating properly.

The new rule meant that data from the Horizon system was presumed accurate. It made it easier for the Post Office, through its private prosecution powers, to convict sub-postmasters of financial crimes when there were accounting shortfalls based on Horizon data. Rightly, the nation has felt moral outrage: in scale, this is the largest miscarriage of justice in this country’s history, and, with a judiciary that does not understand this technology, there was nothing in the system that could counteract this rule. Some sub-postmasters served prison sentences, hundreds lost their livelihoods and there was at least one suicide linked to the scandal. With lethal autonomous weapons systems, we are talking about a machine deciding to take people’s lives away. We cannot have a presumption of infallibility for the decisions of lethal machines: in fact, we must have the opposite presumption, or meaningful human control.

The ongoing war in Ukraine is a daily reminder of the tragic human consequences of conflict. With the use of lethal autonomous weapons systems in future conflicts, a lack of clear accountability for decisions made poses serious complications and challenges for post-conflict resolution and peacebuilding. The way in which these weapons might be used and the human rights challenges they present are novel and unknown. The existing laws of war were not designed to cope with such situations, any more than our laws of evidence were designed to cope with the development of computers, and, on their own, they are not enough to control the use of future autonomous weapons systems. Even more worrying, once AI develops into AGI, these systems could advance at a speed that we humans cannot physically keep up with.

Previously in your Lordships’ House, I have referred to a “Stories of Our Times” podcast entitled “The Rise of Killer Robots: The Future of Modern Warfare?”. Both General Sir Richard Barrons, former Commander of the UK Joint Forces Command, and General Sir Nick Carter, former Chief of the Defence Staff, contributed to what, in my view, should be compulsory listening for Members of Parliament, particularly those who hold or aspire to hold ministerial office. General Sir Richard Barrons says

“Artificial intelligence is potentially more dangerous than nuclear weapons.”


If that is a proper assessment of the potential of these weapons systems, there can be no more compelling reason for their strict regulation and for them to be banned in lethal autonomous mode. It is essential that all of us, whether Ministers or not, who share responsibility for the weapons systems procured and deployed for use by our Armed Forces, fully understand the implications and risks that come with them, exactly what their capabilities are and, more importantly, what they may become.

In my view, and I cannot overstate this, the most important issue for the future defence of our country, for future strategic stability and potentially for peace is that those who take responsibility for these weapons systems are civilians, that they are elected, and that they know and understand them. Anyone who listens to the podcast will realise dramatically why, because there are already conversations going on among military personnel that demand the informed oversight of politicians. The development of lethal autonomous weapons systems is not inevitable, and an international legal instrument would play a major role in controlling their use. Parliament, especially the House of Commons Defence Committee, needs to show more leadership in this area. That committee could inquire into what military AI capabilities the Government wish to acquire and how these will be used, especially in the long term. An important part of such an investigation would be consideration of whether AI capabilities could be developed and regulated so that they are used by armed forces in an ethically acceptable way.

As I have already mentioned, the integrated review pledged to

“publish a defence AI strategy and invest in a new centre to accelerate adoption of this technology”.

Unfortunately, the Government’s delay in publishing the defence AI strategy has cast doubt on the goal, stated in the Integrated Review of Security, Defence, Development and Foreign Policy, that the UK will become a “science and technology superpower”. The technology is already outpacing us, and at present the UK is unprepared to deal with the ethical, legal and practical challenges presented by autonomous weapons systems. Will that change with the publication of the strategy and the establishment of the Autonomy Development Centre? Perhaps the Minister can tell us.