Artificial Intelligence and the Labour Market Debate

Department: Department for Business and Trade


Wednesday 26th April 2023


Westminster Hall

Justin Madders (Ellesmere Port and Neston) (Lab):

It is a pleasure to see you in the Chair, Dame Maria. This has been a thoughtful and engaging debate on an important subject, and the contributions have raised very important issues.

I particularly thank my hon. Friend the Member for Birkenhead (Mick Whitley) for introducing this debate. I thought his opening remarks about me were uncharacteristically generous, so I had a suspicion that it did not all come from him—if he wants to blame the computer, that’s fine! As he did, I refer to my entry in the Register of Members’ Financial Interests. My hon. Friend has a long history in the workplace and has seen how automation has changed work—particularly the kind done at Vauxhall Motors in Ellesmere Port—dramatically over many years. What we are talking about today is an extension of that, probably at a greater pace and with greater consequences for jobs than we have seen in the past.

My hon. Friend the Member for Birkenhead said there will be winners and losers in this; that is very important. We must be cognisant of sectors affected by AI where there will probably be more losers than winners, including manufacturing, transport and public administration. My hon. Friend hit the nail on the head when he said that we must have a rights-based and people-focused approach to this incredibly complicated subject. He was right to refer to the TUC paper about the issue. We cannot go far wrong if we hold to the principles and recommendations set out there.

The hon. Member for Folkestone and Hythe (Damian Collins) made an excellent contribution, showing a great deal of knowledge in this area. He is absolutely right to say that there has to be a level of human responsibility in the decision-making process. His references to AI in defence systems were quite worrying and sounded like something from the “Terminator” films. It sounds like dramatic science fiction, but it is a real, live issue that we need to address now. He is right that we should ensure that developers are able to clearly demonstrate the data on which they are basing their decisions, and in saying that the gig economy is a big part of the issue and that the intervention of apps in the traditional employment relationship should not be used as a proxy to water down employment rights.

The hon. Member for Watford (Dean Russell) also gave a very considered speech. He summed it up when he said that this is both amazing and terrifying. We have heard of some wonderful things that can be done, but also some extremely worrying ones. He gave examples of deception, as well as of the wonderful art that can be created through AI, and encapsulated why it is so important that we have this debate today. Although the debate is about the potential impacts of AI, it is clear that change is happening now, and at a dramatic pace that we need to keep up with; the issue has been affecting workers for some time now.

When we survey the Government’s publications on the impact of AI on the labour market, it is readily apparent that they are a little behind the curve on how these technologies are changing the way work is conducted and supervised. Both the 2021 report, “The Potential Impact of Artificial Intelligence on UK Employment and the Demand for Skills”, and the White Paper published last month failed to address AI’s role in the workplace. Both publications focused on the bigger picture, and I do not think either addressed in detail the concerns we have discussed today.

That is not to downplay the wider structural economic change that AI could bring. It has the potential to have an impact on demand for labour and the skills needed, and on the geographical distribution of work. This will be a central challenge for any Government over the next few decades. As we have heard, the analysis already points in that direction, with the 2021 Government report estimating that 7% of jobs could be affected in just five years and 18% in 10 years, with up to 30% of jobs over 20 years facing the possibility of automation. That is millions of people who may be displaced in the labour market if we do not get this right.

I will focus my comments on the impact on individual workers, because behind the rhetoric of making the UK an AI superpower are statements about a pro-innovation, light-touch and coherent regulatory framework, with a desire not to legislate too early or to place undue burdens on business. That shows that the Government are, unfortunately, content to leave workers’ protections at the back of the queue. It is telling that in last month’s White Paper—a document spanning 91 pages—workplaces are mentioned just three times, and none of those references concerns the potential negative consequences we have touched on today. As we are debating this issue now, and as the Minister is engaged on the topic, we have the opportunity to get ahead of the curve, but I am afraid that the pace of change in the workplace has completely outstripped the pace of Government intervention in recent years.

It has been four years since we saw the Government’s good work plan, which contained many proposals that might help mitigate elements of AI’s use in the workplace. The Minister will not be surprised to hear me mention the employment Bill, which has been promised on many occasions and could have been an opportunity to consider some of these issues. We need an overarching, transformative legislative programme to deal with these matters, and the many other issues around low pay and chronic insecurity in the UK labour market—and we need a Labour Government to provide that.

In the absence of direction from the Government, AI is already causing a quiet revolution in the workplace. Workers across a broad range of sectors have been affected by management techniques derived from artificial intelligence. The role of the manager is being diluted. Individual discretion, whether the manager’s or the worker’s, has in some instances been replaced by unaccountable algorithms. As we have heard, such practices carry risks.

Reports both in the media and by researchers have found that workplaces across a range of sectors are becoming increasingly monitored and automated, and that decisions of that nature are becoming normalised. A report on algorithmic systems by the Institute for the Future of Work noted that this is ultimately redefining work in narrow terms—only what can be quantified by an algorithm—with less room for human judgment. Crucially, the institute found that workers were rarely involved in, or even consulted about, these data-driven technologies. The changes have completely altered those people’s experience of work, with greater surveillance, greater intensification of work, and the use of monitoring data in disciplinary procedures. Members may be aware that there is now greater use of many varieties of surveillance, including GPS, cameras, eye-tracking software, heat sensors and body-worn devices, so the activities of workers can be monitored to an extent that was hitherto unimaginable.

Of course, surveillance is not new, but the way it is now conducted reduces trust and makes workers feel more insecure, as if they cannot dispute what the technology says about them. Most at risk of that monitoring, as the Institute for Public Policy Research has said, are those in jobs with lower worker autonomy, those with lower skills, and those without trade union representation. For the latter group the risk increases substantially, which tells us everything we need to know about the importance of becoming a member of a trade union. The news today that the GMB is making progress in obtaining recognition at Amazon is to be welcomed in that respect.

Increased surveillance and monitoring is not only problematic in itself; it can lead to an intensification of work. Testimony from workers in one study stated that they are expected to conduct work that the system can measure for 95% of the working day. Time spent talking to colleagues, using the bathroom or even taking a couple of minutes to make a cup of tea will not be registered as working, and will be logged for a manager potentially to take action against the individual. That pressure cannot be conducive to a healthy workplace in the long run. It feels almost like automated bullying, with someone monitoring their every move.

Many businesses now rely on AI-powered systems for fully automated or semi-automated decision making about task allocation, work scheduling, pay, progression and disciplinary proceedings. That presents many dangers, some of which we have talked about. Because of the complexity of the technology, AI systems can be treated as a trusted black box by those who use them. The people using them assume that the outcome emerging from the AI system is free of bias and discrimination, and constitutes evidence for the basis of their decisions, but how does someone contest a decision if they cannot question an algorithm?

As we have heard, there is potential for algorithmic bias. AI technology can operate only on the basis of the information put into it. Sometimes human value judgments form the basis of what is fed into the AI, and of how the AI analyses it. As the hon. Member for Folkestone and Hythe mentioned, there are some famous examples, such as at Amazon, where an AI recruitment tool was found to be systematically discounting women’s job applications because of the way the algorithm worked. There is little transparency and a lack of checks and balances regarding how the technology can be used, so there is a palpable risk of AI-sanctioned discrimination running riot unless transparency is put at the forefront.

I would like the Minister to commit to looking at how the technology works in the workplace at the moment, and to making an assessment of what it is being used for and its potential to discriminate against people with protected characteristics. The Data Protection and Digital Information (No. 2) Bill will create new rights where wholly automated decision making is involved, but the question is: how will someone know that a fully automated decision has been taken if they are not told about it? Is there not a risk that many employers will slot into the terms and conditions of employment a general consent to automated decision making, which will remove the need for the person to be notified altogether?

A successful AI strategy for this country should not be built on the back of the poor treatment of workers, and it is the Government’s role to create a legal and regulatory environment that shields workers from the most pernicious elements of these new technologies. That cannot be fixed by introducing single policies that tinker at the edges; it requires a long overdue wholesale update to our country’s employment laws. As the Minister will know, our new deal for working people will set out a suite of policies that address that. Among other things, it will help to mitigate the worst effects of AI, and will introduce measures that include a right to switch off, which will guard against some of the egregious examples of AI being used to intensify people’s work.

As the organised representation of the workforce, trade unions should be central to the introduction of any new technologies into the workplace. Not only will that enable employers and their representatives to find agreeable solutions to the challenges raised by modern working practices, but it will encourage more transparency from employers as to how management surveillance and disciplinary procedures operate. Transparency has been picked up a few times and it is key to getting this right.

Artificial intelligence’s impact is already being felt up and down the country, but the Government have not been quick enough to act, and its worst excesses are already out there. The need for transparency and trust with technology is clear, and we need to make sure that that has some legislative backing. It is time for a Labour Government to clear that up, stand up for working people and bolster our labour market so that new technologies that are already with us can be used to make work better for everyone.

The Parliamentary Under-Secretary of State for Business and Trade (Kevin Hollinrake):

I am grateful to be called, Dame Maria, and it is a pleasure to speak in the debate. I congratulate the hon. Member for Birkenhead (Mick Whitley) on bringing this timely subject forward. I thought it would be appropriate to type his question into ChatGPT. I put in, “What is the potential impact of AI on the labour market?” It said, “AI has the potential to transform many aspects of the economy and society for the better. It also raises concerns about job displacement and the future of work.” That is it in a nutshell. It did not say that it was time for a Labour Government.

Justin Madders:

Did the AI tell the Minister that the Conservative Government have got everything right?

Kevin Hollinrake:

I have not actually posed that question, but perhaps I could later.

This is an important debate, and it is important that we look at the issue strategically. The Government and the Labour party probably have different approaches: the Labour party’s natural position on this kind of stuff is to regulate everything as much as possible, whereas we believe that free markets have had a tremendous effect on people’s lives right across the planet. Whether we look at education, tackling poverty or child mortality, many of the benefits in our society over the last 100 years have been delivered through the free market.

Our natural inclination is to support innovation while being careful about its introduction and looking to mitigate any of its damaging effects, and that is what is set out in the national AI strategy. As we have seen, AI has the potential to become one of the most significant innovations in history—a technology like the steam engine, electricity or the internet. Indeed, my hon. Friend the Member for Folkestone and Hythe (Damian Collins) said exactly that: this is like a new industrial revolution, and I think it is a very exciting opportunity for the future. However, we also have key concerns, which have been highlighted by hon. Members today. Although the Government believe in the growth potential of these technologies, we also want to be clear that growth cannot come at the expense of the rights and protections of working people.

Only now, as the technology rapidly improves, are most of us beginning to understand the transformative potential of AI. However, the technology is already delivering fantastic social and economic benefits for real people. The UK’s tech sector is home to a third of Europe’s AI companies, and the UK AI sector is worth more than £15.6 billion. The UK is third in the world for AI investment, behind the US and China, and attracts twice as much venture capital investment as France and Germany combined. As impressive as they are, those statistics should be put into the context of the sector’s growth potential. Recent research predicts that the use of AI by UK businesses will more than double in the next 20 years, with more than 1.3 million UK businesses using AI by 2040.

The Government have been supporting the ethical adoption of AI technologies, with more than £2.5 billion of investment since 2015. We recently announced £100 million for the Foundation Models Taskforce to help build and adopt the next generation of safe AI, £110 million for our AI tech missions fund and £900 million to establish new supercomputer capabilities. These exascale computers were mentioned in the Budget by my right hon. Friend the Chancellor. These developments have incredible potential to bring forward new forms of clean energy, and indeed new materials that can deliver that clean energy, and to accelerate things such as medical treatment. There are exciting opportunities ahead.

If we want to become an AI superpower, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. Our approach, laid out in the AI White Paper, is designed to be flexible. We are ensuring that we have a proportionate, pro-innovation regulatory regime for AI in the UK, which will build on the existing expertise of our world-leading sectoral regulators.

Our regulatory regime will function by articulating five key principles, which are absolutely key to this debate and tackle many of the points that have been made by hon. Members across the Chamber. Regulators should follow these five principles when regulating AI in their sectors: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. That feeds into the important points made by my hon. Friend the Member for Watford (Dean Russell), who held this ministerial position immediately prior to myself, about deception, scams and fraud. We can all see the potential for that, of course.

Clearly, right across the piece, we have regulators with responsibility in those five areas. Those regulators are there to regulate bona fide companies, which should do the right thing, although we have to make sure that they do. For instance, if somebody held a database with inappropriate data on it, the Information Commissioner’s Office could easily look at that, and it has significant financial penalties at its disposal, such as 4% of global turnover or a £17 million fine. My hon. Friend the Member for Watford made a plea for a Turing clause, which I am, of course, very happy to look at. I think he was referring to organisations that might not be bona fide, and might actually be looking to undertake nefarious activities in this area. I do not think we can regulate those people very effectively, because they are not going to comply with anybody’s regulations. The only way to deal with those people is to find them, catch them, prosecute them and lock them up.