Artificial Intelligence

Darren Jones Excerpts
Thursday 29th June 2023


Commons Chamber
Darren Jones (Bristol North West) (Lab)

Thank you, Mr Deputy Speaker. I am Chair of the Business and Trade Committee, but if there is an AI Committee I am certainly interested in serving on it. I declare my interest, as set out in the Register of Members’ Financial Interests, and I thank the hon. Member for Boston and Skegness (Matt Warman) and the Backbench Business Committee for organising and agreeing to this important debate.

I will make the case for the Government to be more involved in the technology revolution, and explain what will happen if we leave it purely to the market. It is a case for a technology revolution that works in the interests of the British people, not against our interests. In my debate on artificial intelligence a few weeks ago, I painted a picture of the type of country Britain can become if we shape the technology revolution in our interests. It is a country where workers are better paid, have better work and more time off. It is a country where public servants have more time to serve the public, with better access and outcomes from our public services, at reduced cost to the taxpayer. It is a country where the technological revolution is seen as an exciting opportunity for workers and businesses alike—an opportunity to learn new things, improve the quality of our work, and create an economy that is successful, sustainable, and strong.

I also warned the House about the risks of the technology revolution if we merely allow ourselves to be shaped by it. That is a country where technology is put upon people, instead of being developed with them, and where productivity gains result in economic growth and higher profits, but leave workers behind with reduced hours or no job at all. It is where our public services remain in the analogue age and continue to fail, with increased provision from the private sector only for those who can afford it. It is a world in which the pace of innovation races ahead of society, creatively destroying the livelihoods of many millions of people, and where other countries leap ahead of our own, as we struggle to seize the economic opportunities of the technology revolution for our own economy, and through the potential for exports to support others.

The good news is that we are only really at the start of that journey, and we can shape the technology revolution in our interests if we choose to do so. But that means acting now. It means remembering, for all our discussions about artificial intelligence and computers, that we serve the people. It means being honest about the big questions that we do not yet have answers to. It is on some of those big questions that I will focus my remarks. That is not because I have fully formed answers to all of them at this stage, but because I think it important to put those big questions on the public record in this Parliament.

The big questions that I wish to address are these: how do we maintain a thriving, innovative economy for the technology sector; how can we avoid the risk of a new age of inequality; how can we guarantee the availability of work for people across the country; and how can we balance the power that workers have, and their access to training and skills? Fundamental to all those issues is the role and capacity of the state to support people in the transition.

We will all agree that creating a thriving, innovative economy is a good idea, and we all want Britain to be the go-to destination for investment, research and innovation. We all want the British people, wherever they are from and from whatever background, to know that if they have an idea, they can turn it into a successful business and benefit from it. As the hon. Member for Boston and Skegness alluded to, that means getting the balance right between regulation and economic opportunity, and creating the services that will support people in that journey. Ultimately, it means protecting the United Kingdom’s status as a great place to invest, start, and scale up technology businesses.

Although we are in a relatively strong position today, we risk falling behind quickly if we do not pay attention. In that context, the risk of a new age of inequality is perhaps obvious. If the technology revolution is an extractive process, where big tech takes over the work currently done by humans and restricts the access to markets needed by new companies, power and wealth will be taken from workers and concentrated in the already powerful, wealthy and largely American big-tech companies. I say that not because I am anti-American or indeed anti-big tech, but because it is our job to have Britain’s interest at the front of our minds.

Will big tech pick up the tab for universal credit payments to workers who have been made redundant? Will it pay for our public services in a situation where fewer people are in work paying less tax? Of course not. So we must shape this process in the interests of the British people. That means creating inclusive economic opportunities so that everybody can benefit. For example, where technology improves productivity and profits, workers should benefit from that with better pay and fewer days at work. Where workers come up with innovative ideas on how to use artificial intelligence in their workplace, they should be supported to protect their intellectual property and start their own business.

The availability of work is a more difficult question, and it underpins the risk of a new age of inequality. For many workers, artificial intelligence will replace the mundane and the routine. It can leave human workers with more interesting and meaningful work to do themselves. But if the productivity gains are so significant, there is conceivably a world in which we need fewer human workers than we have today. That could result in a four-day week, or even fewer days than that, with work still available for the majority of people. The technology revolution will clearly create new jobs—a comfort provided to us by the history of previous industrial revolutions. However, that raises two questions, which relate to my next point about the power of workers and their access to training and skills.

There are too many examples today of technology being put upon workers, not developed with them. That creates a workplace culture that is worried about surveillance, oppression, and the risk of being performance managed or even fired by an algorithm. That must change, not just because it is the right thing to do but because, I believe, it is in the interests of business managers and owners for workers to want to use these new technologies, as opposed to feeling oppressed by them. On training, if someone who is a worker today wants to get ahead of this revolution, where do they turn? Unless they work in a particularly good business, the likelihood is that they have no idea where to go to get access to such training or skill support. Most people cannot just give up their job or go part time to complete a higher education course, so how do we provide access to free, relevant training that workers are entitled to take part in at work? How does the state partner with business to co-create and deliver that in the interests of our country and the economy? The role of the Government in this debate is not about legislation and regulation; it is about the services we provide, the welfare state and the social contract.

That takes me to my next point: the role and capacity of the Government to help people with the technology transition. Do we really think that our public services today are geared towards helping people benefit from what will take place? Do we really believe our welfare system is fit for purpose in helping people who find themselves out of work? Artificial intelligence will not just change the work of low-paid workers, who might just be able to get by on universal credit; it will also affect workers on middle and even higher incomes, including journalists, lawyers, creative sector workers, retail staff, public sector managers and many more. Those workers will have mortgages or rents to pay, and universal credit payments will go nowhere near covering their bills. If a significant number of people in our country find themselves out of work, what will they do? How will the Government respond? The system as it is designed today is not fit for that future.

I raise those questions not because I have easy answers to them, but because the probability of those outcomes is likely. The severity of the problem will be dictated by what action we take now to mitigate those risks. In my view, the state and the Government must be prepared and must get themselves into a position to help people with the technology transition. There seems now to be political consensus about the opportunities of the technology revolution, and I welcome that, but the important unanswered question is: how? We cannot stop this technology revolution from happening. As I have said, we either shape it in our interests or face being shaped by it. We can sit by and watch the market develop, adapt and innovate, taking power and wealth away from workers and creating many of the problems I have explained today, leaving the Government and our public services to pick up the pieces, probably without sufficient resources to do so. Alternatively, we can decide today how this technology revolution will roll out across our country.

I was asked the other day whether I was worried that this technology-enabled future would create a world of despair for my children. My answer was that I am actually more worried about the effects of climate change. I say that because we knew about the causes and consequences of climate change in the 1970s, but we did nothing about it. We allowed companies to extract wealth and power and leave behind the damage for the public to pick up. We are now way behind where we need to be, and we are actively failing to turn it around, but with this technology revolution, we have an opportunity in front of us to show the public that a different, more hopeful future is possible for our country—a country filled with opportunity for better work, better pay and better public services. Let us not make the same mistakes as our predecessors in the 1970s, and let us not be trapped in the current debate of doom and despair for our country, even though there are many reasons to feel like that.

Let us seize this opportunity for modernisation and reform, remembering that it is about people and our country. We can put the technology revolution at the heart of our political agenda and our vision for a modern Britain with a strong, successful and sustainable economy. We can have a technology revolution that works in the interests of the British people and a Britain that is upgraded so that it works once again. However, to shape the technology revolution in our interests, that work must start now.

--- Later in debate ---
Dawn Butler (Brent Central) (Lab)

It is a pleasure to follow the Chair of the Science, Innovation and Technology Committee, of which I am a member. I congratulate the hon. Member for Boston and Skegness (Matt Warman) on securing the debate and the Backbench Business Committee on allowing it.

I used to be a computer programmer, so I like tech—I need to put that out there. Contributors to this debate on AI tend to fall into three types of people. I compare those three types to some friends I used to go out partying with. One would get completely wasted; one would just sit in the corner and not want to get involved; and I would be the driver, so I would stay sober but enjoy the party. In this debate I like to think that I am staying sober, enjoying the party and making sure that everyone, myself included, gets home safely.

Darren Jones

Could I invite my hon. Friend to inform the House which of us in this debate is the one who got wasted?

Dawn Butler

I do not think my friend would look kindly on me if I said their name. I always made sure they got home safely—that is the main thing.

My hon. Friend the Member for Streatham (Bell Ribeiro-Addy) told me the other day that she asked ChatGPT to write a letter from her to constituents on the dangers of AI. This is what ChatGPT churned out:

“Dear [Constituent],

Thank you for reaching out to me with your concerns about large language model AI. I share your worries about the potential risks that this technology poses and I appreciate the opportunity to address them.

As you may be aware, large language model AI is an advanced form of artificial intelligence that can generate human-like text by training on vast amounts of data. While this technology has shown tremendous potential in various fields, including language translation, chatbots, and content creation, it also presents significant ethical, social, and legal challenges.

One of the main concerns about large language model AI is its potential to perpetuate biases and discrimination. If the data used to train these models are biased, the AI will learn and replicate those biases in its output, leading to harmful outcomes. Moreover, these models have the ability to generate highly realistic but entirely fake text, which could be used to spread disinformation, manipulate public opinion, and even harm individuals.

Therefore, I believe that the regulation and governance of large language model AI are essential to ensure that it is used ethically and responsibly. The UK government has been actively engaging with the tech industry, academia, and civil society to understand the implications of AI and develop a regulatory framework that promotes innovation while safeguarding human rights and values.”

That, to me, is an illustration of the fact that even AI itself recognises that there are some issues with AI. It shows that we need clear regulation, which we do not quite have at the moment. There is still time for the Government’s White Paper to change that, and I hope that debates of this kind will enable change to happen.

Many Members have referred to the use of AI for medical advances, and quantum computers will certainly enable medicines and medical solutions to be found much more quickly. However, as I said when evidence was being given to the Science, Innovation and Technology Committee, even something as simple as body mass index, which is used in the medical world, is a flawed measurement. The use of BMI in the building of AI will integrate that bias into anything that the AI produces. Members may not be aware that the BMI scale was created not by a doctor but by an astronomer and mathematician in the 1800s. What he was trying to do was identify l’homme moyen—the average man—in statistical terms. The scale was never meant to be used in the medical world in the way that it is. People can be prevented from having certain medical procedures if their BMI is too high. The Committee was given no evidence that we would rule out, or mitigate, a flawed system such as BMI in the medical profession and the medical world. We should be worried about this, because in 10 or 20 years’ time it will be too late to explain that BMI was always discriminatory against women, Asian men and black people. It is important for us to get this right now.

I recognise the huge benefits that AI can have, but I want to stress the need to stay sober and recognise the huge risks as well. When we ask certain organisations where they get their data from, the response is very opaque: they do not tell us where they are getting their data from. I understand that some of them do their mass data scraping on sites such as Reddit, which is not really where people would go to become informed on many things.

If we do not take this seriously, we will be automating discrimination. It will become so easy just to accept what the system is telling us, and people who are already marginalised will become further marginalised. Many, if not most, AI-powered systems have been shown to contain bias, whether against people of colour, women, people with disabilities or those with other protected characteristics. For instance, in the case of passport applications, the system keeps on saying that a person’s eyes are closed when in fact they have a disability. We must ensure that we measure the impact on the public’s rights and freedoms alongside the advances in AI. We cannot become too carried away—or drunk—with all the benefits, without thinking about everything else.

At the beginning, I thought it reasonable for the Government to say, “We will just expand legislation that we already have,” but when the Committee was taking evidence, I realised that we need to go a great deal further—that we need something like a digital Bill of Rights so that people understand and know their rights, and so that those rights are protected. At the moment, that is not the case.

There was a really stark example when we heard some information in regard to musicians, music and our voices. Our voices are currently not protected, so with the advancements of deepfake, anybody in this House can have their voice attached to something using deepfake and we would have no legal recourse, because at the moment our voices are not protected. I believe that we need a digital Bill of Rights that would outlaw the most dangerous uses of AI, which should have no place in a real democracy.

The Government should commit to strengthening the rights of the public so that they know what is AI-generated or whether facial recognition—the digital imprint of their face—is being used in any way. We know, for instance, that the Met police have on file millions of people’s images—innocent people—that should not be there. Those images should be taken off the police database. If an innocent person’s face is on the database and, at some point, that is put on a watch list, the domino effect means that they could be accused of doing something they have not done.

The UK’s approach to AI currently diverges from that of our closest trading partners, and I find that quite strange. That is not a good thing, and it rests on an apparent trade-off between progress and safety. I think we should always err on the side of safety and ethics. Progress will always happen; we cannot stop progress. Companies will always invest in AI. It is the future, so we do not have to worry about that—people will run away with it. What we have to do is ensure that we protect people’s safety, because otherwise, instead of being industry leaders in the UK, we will be known as the country that has shoddy or poor practices. Nobody really wants that.

There are countries that are outlawing how facial recognition is used, for instance, but we are not doing that in the UK, so we are increasingly looking like the outlier in this discussion and protection around AI. Government’s first job is to protect their citizens, so we should protect citizens now from the dangers of AI.

Harms are already arising from AI. The Government’s recently published White Paper takes the view that strong, clear protections are simply not needed. I think the Government are wrong on that. Strong, clear protections are most definitely needed—and needed now. Even if the Government just catch up with what is happening in Europe and the US, that would be more than we are doing at the moment. We need new, legally binding regulations.

The White Paper currently has plans to water down data rights and data protection. The Data Protection and Digital Information (No. 2) Bill paints an alarming picture. It will redefine what counts as personal data. All these things have been put in place piecemeal to ensure that personal data is protected. If we lower the protection in the definition of what is personal data, that will mean that any company can use our personal data for anything it wants and we will have very limited recourse to stop that. At the end of the day, our personal data is ultimately what powers many AI systems, and it will be left ripe for exploitation and abuse. The proposals are woefully inadequate.

The scale of the challenge is vast, but instead of reining in this technology, the Government’s approach is to let it off the leash, and that is problematic. When we received evidence from a representative from the Met police, she said that she has nothing to hide so what is the problem, for instance, in having the fingerprint, if you like, of her face everywhere that she goes? I am sure that we all have either curtains or blinds in our houses. If we are not doing anything illegal, why have curtains or blinds? Why not just let everyone look into our house? Most abuse happens in the home so, by the same argument, surely allowing everyone to look into each other’s houses would eliminate a lot of abuse.

In our country we have the right to privacy, and people should have that right. Our digital fingerprints should not be taken without our consent, as we have policing by consent. The Met’s use of live facial recognition and retrospective facial recognition is worrying. I had a meeting with Mark Rowley the other day and, to be honest, he did not really understand the implications, which is a worry.

Like many people, I could easily get carried away and get drunk with this AI debate, but I am the driver. I need to stay sober to make sure everyone gets home safely.