
Artificial Intelligence

Debate between Dawn Butler and Matt Warman
Thursday 29th June 2023


Commons Chamber
Matt Warman (Boston and Skegness) (Con)

I beg to move,

That this House has considered artificial intelligence.

Is it not extraordinary that we have not previously had a general debate on what is the issue of our age? Artificial intelligence is already with us today, but its future impact has yet to truly be felt, or indeed understood.

My aim in requesting this debate—I am very grateful to the Backbench Business Committee for awarding it—is twofold. First, it is to allow Members to express some views on an issue that has moved a long way since I was, in part, the Minister responsible for it, and even since the Government White Paper came out, which happened only very recently. Secondly, it is to provide people with an opportunity to express their views on a technology that has to be regulated in the public interest, but also has to be seized by Government to deliver the huge improvements in public services that we all know it is capable of. I hope that the industry will hear the views of parliamentarians, and—dare I say it?—perhaps better understand where the gaps in parliamentarians’ knowledge might be, although of course those gaps will be microscopic.

I will begin with a brief summary of where artificial intelligence is at, which will be self-avowedly superficial. At its best, AI is already allowing the NHS to analyse images better than ever before, augmenting the expertise of our brilliant and expanding workforce with technology that is analogous to adaptive cruise control—it helps; it does not replace. It is not a technology to be scared of, and patients will welcome that tool being put at the disposal of staff.

We are already seeing AI being used to inform HR decisions such as hiring and firing—an area that is much more complex and much more in need of some kind of regulation. We see pupils using it to research—and sometimes write—their essays, and we sometimes see schools using AI to detect plagiarism. Every time I drive up to my constituency of Boston and Skegness, I listen to Politico’s “Playbook”, voiced by Amazon’s Polly AI system. It is everywhere; it is in the car too, helping me to drive it. AI is creating jobs in prompt engineering that did not exist just a few years ago, and while it is used to generate horrific child sex abuse images, it is also used to detect them.

I want to take one example of AI going rogue that a senior American colonel talked about. It was claimed that a drone was awarded points for destroying a certain set of targets. It consulted its human controller on whether it should take a certain course of action, and was told that it should not. Because it got points for those targets, it decided that the logical thing to do was to kill its human controller, and when it was told that it should not do so, it tried to target the control tower that was communicating with its controller. That is the stuff of nightmares, except for the fact that that colonel was later declared to have misspoken. No such experiment ever took place, but just seconds ago, some people in this House might have believed that it did. AI is already damaging public trust in technology. It is damaging public trust in leadership and in democracy; that has already happened, and we must guard against it happening further. Elections are coming up soon, both here and in America.

Even in the most human sector, the creative industries, one radio presenter was recently reported to have uploaded her previous shows so that the artificial intelligence version of her could cover for her during the holidays. How are new staff to get their first break, if not on holiday cover? Millions of jobs in every sector are at stake. We also hear of analysts uploading the war games of Vladimir Putin to predict how he will fight in Ukraine, with remarkable accuracy. We hear of AI being used by those interested in antibiotics and by those interested in bioweapons. There are long-term challenges here, but there are very short-term ones too.

The Government’s White Paper promotes both innovation and regulation. It does so in the context of Britain being the most advanced nation outside America and China for AI research, development and, potentially, regulation. We can and should cement that success; we are helped by DeepMind, and by OpenAI’s decision only yesterday to open its first office outside the US in London. The Prime Minister’s proposed autumn summit should allow us to build a silicon bridge to the most important technology of this century, and I welcome it hugely.

I want to lay out some things that I hope could be considered at the summit and with this technology. First, the Government clearly need to understand where AI will augment existing possibilities and challenges, and most of those challenges will already be covered by legislation. Employment, for instance, is already regulated, and whether or not companies use AI to augment their HR system, it is already illegal to discriminate. We need to make sure that those existing laws continue to be reinforced, and that we do not waste time reinventing the wheel. We do not have that time, because the technology is already with us. Transparency will be key.

Dawn Butler (Brent Central) (Lab)

The hon. Member is making an important speech. Is he aware of the AI system that, in identifying potential company chief executive officers, would identify only male CEOs because of the data that had been input? Even though there is existing legislation, we have to be mindful of the data that is going into new technology and AI systems.

Matt Warman

The hon. Member is absolutely right that, when done well, AI allows us to identify discrimination and seek to eliminate it, but when done badly, it cements it into the system in the worst possible way. That is partly why I say that transparency about the use of AI will be absolutely essential, even if we largely do not need new legislation. We need principles. When done right, in time this technology could end up costing us less money and delivering greater rewards, be that in the fields of discrimination or public services and everywhere in between.

There is a second-order point, which is that we need to understand where loopholes that the technology creates are not covered by existing pieces of legislation. If we think back to the time we spent in this House debating upskirting, we did not do that because voyeurism was somehow legal; we did it because a loophole had been created by a new technology and a new set of circumstances, and it was right that we sought to close it. We urgently need to understand where those loopholes are now, thanks to artificial intelligence, and we need to understand more about where they will have the greatest effects.

In a similar vein, we need to understand, as I raised at Prime Minister’s questions a few weeks ago, which parts of the economy and regions of the country will be most affected, so that we can focus the immense Government skills programmes on those areas. This is not a predictable change, such as the one we saw when we came to the end of the coalmining industry, and we are not able to draw obvious lines on obvious maps. We need to understand the economy and how this impacts on local areas. To take just one example, we know that call centres—those things that keep us waiting for hours on hold—are going to get a lot better thanks to artificial intelligence, but there are parts of the country with a particularly high concentration of call centre employees. This will be a blow for the many people working in them, but it is also a hump that we need to get over, and we need to focus skills investment in certain areas and certain communities.

I do believe that, long term, we should be profoundly optimistic that artificial intelligence will create more jobs than it destroys, just as in every previous industrial revolution, but there will be a hump, and the Government need to help as much as they can in working with businesses to provide such opportunities. We should be optimistic that the agency that allows people to be happier in their work—personal agency—will be enhanced by the use of artificial intelligence, because it will take away some of the less exciting aspects of many jobs, particularly at the lower-paid end of the economy, but not by any means solely. There is no shame in eliminating dull parts of jobs from the economy, and there is no nobility in protecting people from inevitable technological change. History tells us that if we do seek to protect people from that technological change, we will impoverish them in the process.

I want to point to the areas where the Government surely must understand that potentially new offences are to be created beyond the tactical risk I have described. We know that it is already illegal to hack the NHS, for instance. That is a tactical problem, even if it might be somewhat different, so I want to take a novel example. We know that it is illegal to discriminate on the grounds of whether someone is pregnant or likely to get pregnant. Warehouses, many of them run by large businesses, gather a huge amount of data about their employees. They gather temperature data and movement data, and they monitor a huge amount. They gather data that goes far beyond anything we had previously seen just a few years ago, and from that data, companies can infer a huge amount, and they might easily infer from that whether someone is pregnant.

If we collect such data—and we already do—should we now say that it is illegal to collect it because it opens up a potential risk? I do not think we should, and I do not think anyone would seriously say we should, but it is open to a level of discrimination. Should we say that such discrimination is illegal, which is the situation now—companies can gather data, but it is what they do with it that matters—or should we say that collecting the data itself exposes people to risk and companies to legal risk, which may take us backwards rather than forwards? Unsurprisingly, I think there is a middle ground that is the right option.

Suddenly, however, a question as mundane as collecting data about temperature and movements, ostensibly for employee welfare and to meet existing commitments, turns into a political decision: what information is too much and what analysis is too much? It brings us as politicians to questions that suddenly and much more quickly revert to ethics. There is a risk of huge and potentially dangerous information asymmetry. Some people say that there should be a right to a human review and a right to know what cannot be done. All these are ethical issues that come about because of the advent of artificial intelligence in the way that they have not done so previously. I commend to all Members the brilliant paper by Oxford University’s Professor Adams-Prassl on a blueprint for regulating algorithmic management, and I commend it to the Government as well.

AI raises ethical considerations that we have to address in this place in order to come up with the principles-based regulation that we need, rather than trying to play an endless game of whack-a-mole with a system that is going to go far faster than the minds of legislators around the world. We cannot regulate in every instance; we have to regulate horizontally. As I say, the key theme surely must be transparency. A number of Members of Parliament have confessed—if that is the right word—to using AI to write their speeches, but I hope that no more have done so than those who have already confessed. Transparency has been key in this place, and it should be key in financial services and everywhere else. For instance, AI-generated videos could already be forced to use watermarking technology that would make it obvious that they are not the real deal. As we come up to an election, I think that such use of existing technology will be important. We need to identify the gaps—the lacunae—both in legislation and in practice.

Artificial intelligence is here with us today and it will be here for a very long time, at the very least augmenting human intelligence. Our endless creativity is what makes us human, and what makes us to some extent immune from being displaced by technology, but we also need to bear in mind that, ultimately, it is by us that decisions will be made about how far AI can be used and what AI cannot be used for. People see a threat when they read some of the most hyperbolic headlines, but these are primarily not about new crimes; they are about using AI for old crimes, but doing them a heck of a lot better.

I end by saying that the real risk here is not the risk of things being done to us by people using AI. The real risk is if we do not seize every possible opportunity, because seizing every possible opportunity will allow us to fend off the worst of AI and to make the greatest progress. If every student knows that teachers are not using it, far more fake essays will be submitted via ChatGPT. Every lawyer and every teacher should be encouraged to use this technology to the maximum safe extent, not to hope that it simply goes away. We know that judges have already seen lawyers constructing cases using AI and that many of the references in those cases were simply fictional, and the same is true of school essays.

The greatest risk to progress in our public services comes from not using AI: it comes not from malevolent people, but from our thinking that we should not embrace this technology. We should ask not what AI can do to us; we should ask what we can do with AI, and how Government and business can get the skills they need to do that best. There is a risk that we continue to lock in the 95% of AI compute that sits with just seven companies, or that we promote monopolies or the discrimination that the hon. Member for Brent Central (Dawn Butler) mentioned. This is an opportunity to avert that, not reinforce it, and to cement not prejudice but diversity. It means that we have an opportunity to use game-changing technology for the maximum benefit of society, and the maximum number of people in that society. We need to enrich the dialogue between Government, the private sector and the third sector, to get the most out of that.

This is a matter for regulation, and for global regulation, as is so much of the modern regulatory landscape. There will be regional variations, but there should also be global norms and principles. Outside the European Union and United States, Britain has that unique position I described, and the Prime Minister’s summit this autumn will be a key opportunity—I hope all our invites are in the post, or at least in an email. I hope that it will be an opportunity not just for the Prime Minister to show genuine global leadership, but also to involve academia, parliamentarians and broader society in that conversation, and to allow the Government to seize the opportunity and regain some trust on this technology.

I urge the Minister to crack on, seize the day, and take the view that artificial intelligence will be with us for as long as we are around. It will make a huge difference to our world. Done right, it will make everything better; done badly, we will be far poorer for it.