Debates between Greg Clark and Dawn Butler during the 2019 Parliament

Artificial Intelligence

Debate between Greg Clark and Dawn Butler
Thursday 29th June 2023


Commons Chamber
Greg Clark (Tunbridge Wells) (Con)

It is a pleasure to speak in this debate, and I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing it and on his excellent speech and introduction. It is a pleasure to follow my fellow Committee Chair, the hon. Member for Bristol North West (Darren Jones). Between the Business and Trade Committee and the Science, Innovation and Technology Committee, we have a strong mutual interest in this debate, and I know all of our members take our responsibilities seriously.

This is one of the most extraordinary times for innovation and technology that this House has ever witnessed. Had we not been talking so much about Brexit, then covid and, more recently, Russia and Ukraine, our national conversation—this goes to the point made by my hon. Friend the Member for Boston and Skegness—and our debates in this Chamber would have been far more about the technological revolution that is affecting all parts of the world and our national life.

It is true to say that, perhaps alongside the prominence engendered by the discovery of vaccines against covid, AI has broken through into public consciousness as a step change in the development of technology. It has got people talking about it, and not before time. I say that because, as both Members who have made speeches have said, it is not a new technology, in so far as it is a technology at all. In fact, in a laconic question to one of the witnesses before our Committee, one member observed, “Was artificial intelligence not just maths and computers?” Indeed, one of the witnesses said that in his view it was applied statistics. This has been going on for some time.

My Committee, the Science, Innovation and Technology Committee—I am delighted to see my colleague the hon. Member for Brent Central (Dawn Butler) here—is undertaking a fascinating and, we hope, impactful inquiry into the future governance of AI. We are taking it seriously to understand the full range of issues that do not have easy or glib answers—if they do, those are best avoided—and we want to help inform this House and the Government as to the best resolutions to some of the questions in front of us. We intend to publish a report in the autumn, but given the pace of debate on these issues and, as I am sure the hon. Lady will agree, the depth of the evidence we have heard so far, we hope to publish an interim report sooner than that. It would be wrong for me as Chair of the Committee to pre-empt the conclusions of our work, but we have taken a substantial amount of evidence in public, both oral and written, so I will draw on what we have found so far.

Having said that AI is not new—it draws on long-standing research and practice—it is nevertheless true to say that we are encountering an acceleration in its application and depth of progress. To some extent, the degree of public interest in it, without resolution to some of the policy questions that the hon. Member for Bristol North West alluded to, carries some risks. In fact, the nomenclature “artificial intelligence” is in some ways unhelpful. The word “artificial” is usually used in a pejorative, even disdainful way. When combined with the word “intelligence”, which is one of the most prized human attributes, the “artificial” rather negates the positivity of the “intelligence”, leading to thoughts of dystopia, rather than the more optimistic side of the argument to which my hon. Friend the Member for Boston and Skegness referred. Nevertheless, it is a subject matter with which we need to grapple.

In terms of the pervasiveness of AI, much of it is already familiar to us, whether it is navigation by sat-nav or suggestions of what we might buy from Amazon or Tesco. The analysis of data on our behaviour and the world is embedded, but it must be said that the launch of ChatGPT to the public just before Christmas has catapulted to mass attention the power already available in large language models. That is a breakthrough moment for millions of people around the world.

As my hon. Friend said, much of the current experience of AI is not only benign, but positively beneficial. The evidence that our Committee has taken has looked at particular applications and sectors. If we look at healthcare, for example, we took evidence from a medical company that has developed a means of recognising potential prostate cancer from MRI scans long before any symptoms present themselves, and with more accuracy than previous procedures. We heard from the chief executive of a company that is using AI to accelerate drug discovery. It is designing drugs from data, and selecting the patients who stand to benefit from them. That means that uses could be found, among more accurately specified patient groups, for drugs that have failed clinical trials on the grounds not of safety but of efficacy. That holds out the early prospect of better health outcomes.

We heard evidence that the positive effects of AI on education are significant. Every pupil is different; we know that. Every good teacher tailors their teaching to the responses and aptitudes of each student, but that can be done so much better if the tailoring is augmented through the use of technology. As Professor Rose Luckin of University College London told us,

“students who might have been falling through the net can be helped to be brought back into the pack”

with the help of personalised AI. In the field of security, if intelligence assessments of a known attacker are paired with AI-rich facial recognition technology, suspects may be pinpointed and apprehended before they have the chance to execute a deadly attack.

There are many more advantages of AI, but we must not only observe but act on the risks that arise from the deployment of AI. Some have talked about the catastrophic potential of AI. Much of what is suggested, as in the case of the example given by my hon. Friend the Member for Boston and Skegness, is speculative, the work of fiction, and certainly in advance of any known pathway. It is important to keep a cool head on these matters. There has been talk in recent weeks of the possibility of AI killing many humans in the next couple of years. We should judge our words carefully. There are important threats, but portents of disaster must be met with thinking from cool, analytical heads, and concrete proposals for steps to take.

I very much applaud the seriousness with which the Government are approaching the subject of the governance of AI. For example, a very sensible starting point is making use of the deep knowledge of applications among our sector regulators, many of which enjoy great respect. I have mentioned medicine; take the medical regulator, the Medicines and Healthcare products Regulatory Agency. With its deep experience of supervising clinical trials and the drug discovery process, it is clearly the right starting point. If AI is to be used in drug discovery or diagnostics, it makes sense to draw on the MHRA’s expertise, for which it is renowned worldwide.

It is also right to require regulators to come together to develop a joint understanding of the issues, and to ask them to work collectively on regulatory approaches, so that we avoid inconsistency and inadvertently applying different doctrines in different sectors. It is right that regulators should talk to each other, and that there should be coherence. Given the commonalities, there should be a substantial, well-funded, central capacity to develop regulatory competence across AI, as the Government White Paper proposed.

I welcome the Prime Minister’s initiative, which the hon. Member for Bristol North West mentioned. In Washington, the Prime Minister agreed to convene a global summit on AI safety in the UK in the autumn. Like other technologies, AI certainly does not respect national boundaries. Our country has an outstanding reputation on AI, the research and development around it, and—at our best—regulatory policy and regulation, so it is absolutely right that we should lead the summit. I commend the Prime Minister for his initiative in securing that very important summit.

The security dimension will be of particular importance. Like-minded countries, including the US and Japan, have a strong interest in developing standards together. That reflects the fact that we see the world through similar eyes, and that the security of one of us is of prime importance to the others. The hon. Member for Bristol North West, in his debate a few weeks ago, made a strong point about international collaboration.

One reason why a cool-headed approach needs to be taken is that the subject is susceptible to the involvement of hot heads. We must recognise that heading off the risks is not straightforward; it requires deep reflection and consideration. Knee-jerk regulatory responses may prove unworkable, will not be widely taken up by other countries, and may therefore be injurious to the protections that policy innovation aims to deliver. I completely agree with the hon. Gentleman that there is time for regulation, but not much time. We cannot hang around, but we need to take the appropriate time to get this right. My Committee will do what it can to assist on that.

If the Government reflect on these matters over the summer, their response should address a number of challenges that have arisen in this debate, and from the evidence that my Committee took. Solutions must draw on expertise from different sectors and professions, and indeed from people with expertise in the House, such as those contributing to this debate. Let me suggest briefly a number of challenges that a response on AI governance should address. One that has emerged is a challenge on bias and discrimination. My hon. Friend the Member for Brent Central has been clear and persistent in asking questions to ensure that the datasets on which algorithms are trained do not embed a degree of bias, leading to results that we would not otherwise tolerate. I dare say she will refer to those issues in her speech. For example, as has been mentioned, in certain recruitment settings, if data reflects the gender or ethnic background of previous staff, the profile of an “ideal” candidate may owe a great deal to past biases. That needs to be addressed in the governance regime.

There is a second and related point on the black box challenge. One feature of artificial intelligence is that the computer system learns from itself. The human operator or commissioner of the software may not know why the algorithm or AI software has made a recommendation or proposed a course of action. That is a big challenge for those of us who take an interest in science policy. The scientific method is all about transparency; it is about putting forward a hypothesis, testing it against the data, and either confirming or rejecting the hypothesis. That is all done publicly; publication is at the heart of the scientific method. If important conclusions are reached—and they may be accurate conclusions, with great predictive power—but we do not know how, because that is deep within the networks of the AI, that is a profound challenge to the scientific method and its applications.

Facial recognition software is a good example. The Metropolitan police is using facial recognition software combined with AI. It commissioned a study—a very rigorous study—from the National Physical Laboratory, which looked at whether any racial bias could be detected in the subjects identified through the AI algorithms. The study found no evidence of that, but that finding rests on a comparison of outputs against other settings; it is not based on knowledge of the algorithms, which in this case are proprietary. It may or may not be possible to look into the black box, but that is one question that I think Governments and regulators will need to address.

Dawn Butler

In evidence to the Committee—of which I am a member—the Met said that there was no bias in its facial recognition system, whereas its own report states that there is bias in the system, particularly with regard to identifying black and Asian women. In fact, the results are 86% incorrect. There are lots of ways of selling the benefits of facial recognition. Other countries across Europe have banned certain uses of facial recognition, while the UK has not. Does the right hon. Gentleman think that we need to look a lot more deeply into current applications of facial recognition?

Greg Clark

The hon. Lady makes an excellent point. These challenges, as I put them, often do not have an easy resolution. The question of detecting bias is a very important one. Both of us have taken evidence in the Committee, and in due course we will need to consider our views on it, but she is right to highlight it as a challenge that needs to be addressed if public confidence and justice are to be served. It cannot be taken lightly or as read. We need to look at it very closely.

There is a challenge on securing privacy. My hon. Friend the Member for Boston and Skegness made a very good point about an employer taking people’s temperatures: a raised temperature could be an indication of pregnancy, and there is a risk that such information may be used in an illegal way. That is one example. I heard an example about the predictive power of financial information. A payment to a solicitors’ firm known to have a reputation for advising on divorce can be a very powerful indicator that a customer’s financial circumstances will deteriorate in about six months’ time. Whether the bank can use that information, on detecting a payment to a firm of divorce solicitors, to downgrade a credit rating in anticipation is a matter that I think should, at the very least, give rise to debate in this House. It shows that there are questions of privacy: the use of data gathered for one purpose for another.

Since we are talking about data, there is also a challenge around access to data. There is something of a paradox about this. The Committee has taken evidence from many software developers, which quite often are small businesses founded by a brilliant and capable individual. However, to train AI software, they need data. The bigger the dataset, the more effective the training, so there are real returns to economies of scale when it comes to data. There is a stark contrast between potentially very small software developers, who cannot do anything without access to data, and the very large companies in whose hands that data may sit. Those of us who use Google know that it has a lot of information on us. I mentioned banks; they have a lot of information on us, too. That is not readily accessible to small start-ups, so access to data is something we will need to address.

Another challenge we need to address is access to compute, which is to say, the power to analyse data. Again, the bigger the computer, the greater the compute power, and the more effective and successful algorithms will be, but that can be a barrier to entry for smaller firms. If compute resources are reserved to the giants, that has profound consequences for the development of the industry. It is one of the reasons why I think the Government are right to consider plans for a dedicated compute resource in this country.

Those issues combine to make for what we might call an anti-trust challenge, to which the hon. Member for Bristol North West referred. There is a great danger that we may already be concentrating market power in the hands of a very small number of companies, from which position it is very difficult thereafter to diversify and to secure the degree of contestability and competition on which the full benefits of AI depend. Our regulators, in particular our competition regulators, will need to pay close attention to that.

Related to that is the law and regulation around intellectual property and copyright. In the creative industries, copyright gives strong protection to people who create their own original work. How much modification, or use without payment and licensing, can be tolerated without damaging the returns to creators and the vibrancy of our crucial creative sector is a very important question.

Another challenge is on liability, which mirrors some of the debates taking place about our large social media platforms. If we develop a piece of AI in an application that is used for illegal purposes, should we, as the developer or the person who licenses it, be responsible for its use by an end user, or should that be a matter for them? In financial services, we have over time imposed strong requirements on providers of financial services, such as banks, to, in the jargon, know your customer—KYC. It is not sufficient just to say, “I had no reason to suppose that my facilities were going to be used for money laundering or drug trafficking.” There is a responsibility to find out what the intended use is. Those questions need to be addressed here. The hon. Member for Bristol North West also raised questions about employment and the transition to a new model of employment, a transition that has some upsides as well.

One of the classic definitions of a sentient computer is that it passes the Turing test: if there were a screen between a person and the computer they were interacting with, would they know that it was a computer, or would they think it was a human being? The experience of a lot of my constituents when dealing with some large bureaucracies is that even if there is a human on the end of the telephone, they might as well be a computer, because they are driven by the script and the software. In fact, one might say that they fail the Turing test. The greater personalisation of AI may overcome what can be a pretty dispiriting experience for employees who have to park their humanity and read out a script to a consumer. There are big challenges but also opportunities there.

A couple of other things have been referred to, such as the challenge of international co-ordination. We have the agency to set our own rules, but there is no point in doing so without taking the opportunity to influence the world. We will be stronger if we have—at least among like-minded countries, and preferably beyond—a strong consensus about how we should proceed.