Artificial Intelligence (Select Committee Report)

Monday 19th November 2018


Lords Chamber
Lord Reid of Cardowan (Lab)

My Lords, I welcome this report and I want to make a few comments arising in particular from chapter 8, dealing with ethics and responsibility. The field of artificial intelligence sets out to create computer systems that perform tasks that would otherwise require human intelligence; that is the dictionary definition. These are a new generation of machines whose nature is entirely different from those we have been used to. As we reap the benefits of these new systems and cede control to them, and as our infrastructure comes to depend on them, I believe that we need to mark a watershed in how we think about and treat software.

First, intelligence needs to be clearly understood as distinct from consciousness or sentience. While AI entities may act in ways that emulate humans, their underlying logic remains a function of their architecture. They are in a very real sense “alien” beings whose actions result from motivations, stimuli and neural circuits that are entirely non-human.

Secondly, machines have historically been built to operate deterministically; that is, to perform specific tasks within parameters set by their designers. In building AI we are creating systems whose functioning is largely opaque and whose outputs are non-deterministic; that is, what they will do under all circumstances cannot be predicted with certainty.

Thirdly, competitive motivations are driving the evolution of ever more sophisticated machine intelligence functions, with greater predictive value and more human-like interfaces that heighten our perception of both intelligence and empathy. Devices that respond with human voices and virtual call-centre operatives that converse like humans are now commonplace. The ability to appear human-like, to conduct sophisticated, responsive conversations and even to recognise emotions allows organisations to project human-like responsibility from what are actually software agents.

Despite their human-like appearance and their ability to take actions that are functionally “correct”, these systems do not act out of concern or empathy, nor within a moral, legal or ethical framework; nor, today, can they be held legally responsible for their actions. In law today we draw a distinction: a human being may be responsible, while a machine or an animal may not be. This creates an asymmetry, because when something goes wrong, who takes responsibility for sorting out the problem? It becomes increasingly easy, and increasingly attractive, for every party in the value chain to absolve himself or herself of blame.

As humans, we have law-based obligations as part of our social contract within a civilised society; we have promise-based obligations as part of the contracts that we form with others; and we have societal moral principles that are the core of what we regard as ethics, whether derived from reason or from religion. Responsible humans are aware of these ethical, moral and legal obligations. We feel empathy towards our fellows and responsibility for our children, employees and society. Those who do not are called sociopaths at best and psychopaths at worst. Ethics, morality, principles and values are not a function solely of intelligence; they are dynamic, context-dependent social constructs.

Moreover, bias and specification gaming are two important emergent properties of machine learning systems. In specification gaming, a system successfully solves a problem but does so via an unintended method, just as humans discover ways to cheat various systems. We must understand that no matter how intelligent a machine is, it may learn to act in ways that we consider biased, unethical or even criminal. For instance, we may anticipate autonomous vehicles evolving unintended bad behaviours as a result of the goals that they have been given. Equally, AI is no less vulnerable than humans to being spoofed or deceived by others, whether intentionally or unintentionally. I will not go into that matter today, but the implications are alarming when we come to AI-driven autonomous weaponry.
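
By way of editorial illustration only (the speech gives no worked example), the following toy sketch shows specification gaming in miniature: an agent is scored by a flawed proxy reward, the number of checkpoint crossings, and a naive search discovers that shuttling back and forth over one checkpoint outscores honestly finishing the course. The track layout, step budget and search procedure are all invented for the illustration.

```python
# A minimal sketch of specification gaming (invented example, not from
# the report): the reward specification counts checkpoint crossings as a
# proxy for "finish the race", and the agent games it by oscillating.
import itertools

CHECKPOINTS = [2, 4, 6, 8]   # positions on a 1-D track; 8 is the finish
MAX_STEPS = 10

def proxy_reward(actions):
    """Reward = number of checkpoint crossings (the flawed specification)."""
    pos, reward = 0, 0
    for a in actions:             # each action is -1 (reverse) or +1 (forward)
        new_pos = pos + a
        for c in CHECKPOINTS:     # crossing any checkpoint, in either
            if min(pos, new_pos) < c <= max(pos, new_pos):  # direction, scores
                reward += 1
        pos = new_pos
    return reward, pos

# Exhaustive search over every 10-step policy stands in for RL training.
best = max(itertools.product((-1, 1), repeat=MAX_STEPS),
           key=lambda acts: proxy_reward(acts)[0])
reward, final_pos = proxy_reward(best)
print("gamed policy:", best)      # shuttles back and forth over a checkpoint
print("gamed reward:", reward, "final position:", final_pos)

# The honest policy (drive straight to the finish) scores only 4 crossings,
# so this specification rewards cheating over completing the course.
print("honest reward:", proxy_reward((1,) * MAX_STEPS)[0])
```

The gamed policy scores 9 against the honest policy's 4 without ever reaching the finish, which is the essence of solving the stated problem by an unintended method.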

Even in the future, when machine intelligence may exceed human intelligence, we must distinguish between being better at carrying out a set of tasks and bearing human responsibility. Intelligence is not the sole determinant of responsibility, even in human society; we talk about the “age of responsibility”, which distinguishes a minor from an adult and rests on the recognition that children are too immature to understand the consequences of, or to consent to, certain behaviour. Representing sophisticated concepts such as “the public good” or “volunteering” in the goal functions of machines is a far harder and more complex challenge than machine intelligence itself, yet it is equally important to their correct functioning.

However, the commercial value of displaying empathy means that AI entities will emulate emotion long before they are able to feel it. When a railway announcement says, “We are sorry to announce that your train is late”, the voice is not sorry; nor is the corporation that employs and uses that voice. Rather, the company sees value in appeasing its customers by offering an apology, and an automated announcement is the cheapest way of providing that apparent apology. If a system is not capable of experiencing pain and suffering, can it be truly empathetic?

Furthermore, as a machine cannot be punished or incarcerated in any meaningful sense, although it might be rehabilitated through reprogramming, the notion of consequences for its actions has little meaning for it. If a machine apologises, serves a prison sentence or is put in solitary confinement, has it been punished? The basis of responsibility, built on an understanding of ethics and morality, does not exist in such a machine, and it is certainly not a mere by-product of the machine's level of intelligence.

Finally, all those problems are compounded because the software industry today operates very differently from other industries critical to modern society, in which the notion of audit exists. When we read the annual report of a PLC, we can place some degree of trust in it because the chief financial officer, the accountant and the auditor take professional responsibility for the output. Similarly, an audit chain in the pharmaceutical industry enables regulators to oversee a large, complex and diverse industry. In construction, when a tragedy happens, we are able to trace the building materials that were used. That process of audit encourages responsibility, and it means the consequences of actions are known beforehand.

But most software today is sold with an explicit disclaimer of fitness for purpose, and it is virtually impossible to answer the questions: by whom, against what specification, why and when was this code generated, tested or deployed? In the event of a problem with software, who is responsible? The human owner? The company that supplied the software? The programmer? The chief executive of the company that supplied it? I would therefore argue that machine intelligence needs to be subordinate in responsibility to a human controller, and therefore cannot in itself be legally responsible as an adult human is, although it might in future have the legal status of a corporation or of a minor: that is, intelligent, but below the age of responsibility.
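
As an editorial sketch (the speech does not prescribe a format), one minimal way to make those questions answerable is a structured provenance record attached to each released artefact. Every field name and value below is invented for illustration.

```python
# A minimal sketch of a provenance record answering: by whom, against what
# specification, why, and when was this code generated, tested or deployed?
# All names and identifiers here are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    artefact: str          # the released binary, package or model file
    artefact_sha256: str   # content hash identifying exactly what shipped
    specification: str     # against what specification it was built
    author: str            # by whom it was generated
    purpose: str           # why it was generated
    tested_by: str         # who signed off the tests, and to what plan
    deployed_at: str       # when it entered service (ISO 8601, UTC)

record = ProvenanceRecord(
    artefact="vehicle-controller-1.4.2.bin",
    artefact_sha256="<sha-256 of the shipped binary>",
    specification="SPEC-2018-117 rev C",
    author="J. Bloggs <j.bloggs@example.com>",
    purpose="fix for braking-distance regression",
    tested_by="QA team, test plan TP-441",
    deployed_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record))
```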

The GDPR was designed to ensure that passive “data” was linked to responsible human control. Ultimately, we might need a GDPR equivalent for active machine learning systems, linking their function to a human controller and ensuring that organisations and individuals have protective and proportionate control processes in place. We might refer to the concept of that clear chain of responsibility, linking an audit of the specifications, code, testing and function to responsible individuals, as “trustable software”. Recent developments, including distributed ledger technology (blockchain, to the uninitiated), would permit such oversight to be implemented relatively easily.
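
Again purely as an editorial sketch: the tamper-evidence at the core of a distributed ledger can be illustrated with a simple hash chain, in which each audit entry commits to the hash of its predecessor, so that any later rewriting of history is detectable. A real ledger adds replication and consensus on top; the events and names below are invented.

```python
# A minimal sketch of a hash-chained audit log (illustrative only): the
# chain links records such as "generated", "tested" and "deployed" so that
# altering any historical entry breaks every subsequent link.
import hashlib
import json

def _digest(payload, prev_hash):
    """Hash of an entry's payload together with its predecessor's hash."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append_entry(chain, payload):
    """Append an audit entry that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": _digest(payload, prev_hash)})

def verify(chain):
    """Recompute every link; any alteration of history breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != _digest(entry["payload"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"event": "code generated", "by": "J. Bloggs"})
append_entry(log, {"event": "tests passed", "by": "QA team"})
append_entry(log, {"event": "deployed", "by": "release manager"})
print(verify(log))                        # True: the record is intact
log[0]["payload"]["by"] = "someone else"  # quietly rewrite history...
print(verify(log))                        # False: the tampering is exposed
```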

In an age when software is at the heart of our infrastructure, and when these systems are both non-deterministic and fully interconnected, AI needs a responsible human “parent”. Whoever that “parent” might be, it will require a “trustable” process to introduce auditability, accountability and, ultimately, responsibility.