Artificial Intelligence and the Labour Market

Debate between Damian Collins and Richard Thomson
Wednesday 26th April 2023

Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not comprise part of the official record.

Richard Thomson (Gordon) (SNP)

It is a pleasure to serve under your chairship this afternoon, Dame Maria, and to take part in this particularly timely debate. I congratulate the hon. Member for Birkenhead (Mick Whitley) on securing it.

I begin by declaring a rather tenuous interest—a constituency interest of sorts—regarding the computing pioneer Alan Turing. The Turing family held the baronetcy of Foveran, which is a parish in my constituency between the north of Aberdeen and Ellon. Although there is no evidence that Alan Turing ever actually visited, it is a connection that the area clings to as tightly as it can.

Alan Turing, of course, developed what we now know as the Turing test—a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. One of the developments to come closest to that in recent times is, of course, ChatGPT, which several speakers have mentioned already. It is a natural-language processing tool driven by AI technology, which has the ability to generate text and interact with humans.

The hon. Member for Birkenhead was a bit braver than I was; I only toyed with the idea of using ChatGPT to produce some of my speech today. However, I was put off somewhat by a very good friend of mine, with an IT background, using the ChatGPT interface to produce a biography of me. He then shared it with his friendship group on Facebook.

I think it is fair to say that the exercise showed clearly that, if ChatGPT does not know the answer to something, it will fill the gap by making up something that it thinks will sound plausible. In that sense, it is maybe no different from your average Cabinet Minister. However, that does mean that, in subject areas where the data on which it is drawing is rather scant, things can get quite interesting and inventive.

Damian Collins

The hon. Gentleman makes an incredibly important point. When AI systems such as that are asked questions to which they do not know the answer, rather than responding, “I don’t know,” they just make something up. A human is therefore required to understand whether what they are being shown is correct. The hon. Gentleman knows his own biography better than ChatGPT does, but someone else may not.

Richard Thomson

I thank the hon. Member for that intervention. He has perhaps read ahead towards the conclusion of my speech, but it is an interesting dichotomy. Obviously, I know my biography best, but there are people out there, not in the AI world—Wikipedia editors, for example—who think that they know my biography better than I do in some respects.

However, to give an example, the biography generated by AI said that I had been a director at the Scottish Environmental Protection Agency, and, prior to that, a senior manager at the National Trust for Scotland. I had also apparently served in the Royal Air Force. None of that is true, but, on one level, it does make me want to meet this other Richard Thomson who exists out there. He has clearly had a far more interesting life than I have had to date.

Although that level of misinformation is relatively benign, it does show the dangers that can be presented by the manipulation of the information space, and I think that the increasing use and application of AI raises some significant and challenging ethical questions.

Any computing system is based on the premise of input, process and output. Therefore, great confidence is needed when it comes to the quality of information that goes in—on which the outputs are based—as well as the algorithms used to extrapolate from that information to create the output, the purpose for which the output is then used, the impact it goes on to have, and, indeed, the level of human oversight at the end.

In March, Goldman Sachs published a report indicating that AI could replace up to 300 million full-time equivalent jobs and a quarter of all work tasks in the US and Europe. It found that some 46% of administrative tasks, and even 44% of tasks in the legal profession, could be automated. GPT-4 recently managed to pass the US Bar exam, which is perhaps less a sign of machine intelligence than of the fact that the US Bar exam is not a fantastic test of AI capabilities, although I am sure it is a fantastic test of lawyers in the States.

Our fear of disruptive technologies is age-old. Although it is true to say that such disruption has generally created new jobs and allowed new technologies to take on more laborious and repetitive tasks, it is still extremely disruptive. Some 60% of workers are currently in occupations that did not exist in 1940, but there is still a real danger, as there has been with other technologies, that AI depresses wages and displaces people faster than new jobs can be created. That ought to be of real concern to us.

In terms of ethical considerations, there are large questions to be asked about the provenance of datasets and the output to which they can lead. As The Guardian reported recently:

“The…datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks”

as well as all sorts of content created by others, who receive no reward for its use; the entire proceedings of 16 years of the European Parliament; or even the entirety of the proceedings that have ever taken place, and been recorded and digitised, in this place. The datasets can be drawn from a range of sources, and they do not necessarily lead to balanced outputs.

ChatGPT has been banned from operating in Italy after the data protection regulator there expressed concerns that there was no legal basis to justify the collection and mass storage of the personal data needed to train GPT AI. Earlier this month, the Canadian privacy commissioner followed, with an investigation into OpenAI in response to a complaint that alleged that the collection, use and disclosure of personal information was happening without consent.

This technology brings huge ethical issues not just in the workplace but right across society, but questions need to be asked particularly when it comes to the workplace. For example, does it entrench existing inequalities? Does it create new inequalities? Does it treat people fairly? Does it respect the individual and their privacy? Is it used in a way that makes people more productive by helping them to be better at their jobs and work smarter, rather than simply forcing them—notionally, at least—to work harder? How can we be assured that at the end of it, a sentient, qualified, empowered person has proper oversight of the use to which the AI processes are being put? Finally, how can it be regulated as it needs to be—beneficially, in the interests of all?

The hon. Member for Birkenhead spoke about and distributed the TUC document “Dignity at work and the AI revolution”, which, from the short amount of time I have had to scrutinise it, looks like an excellent publication. There is certainly nothing in its recommendations that anyone should not be able to endorse when the time comes.

I conclude on a general point: as processes get smarter, we collectively need to make sure that, as a species, we do not consequentially get dumber. Advances in artificial intelligence and information processing do not take away the need for people to be able to process, understand, analyse and critically evaluate information for themselves.

Joint Committee on the Draft Online Safety Bill

Debate between Damian Collins and Richard Thomson
Thursday 16th December 2021

Commons Chamber

Damian Collins

The hon. Gentleman makes an important point about the impact on children. Important work on this has already been done, and this Government have passed legislation on the design of services, known as the age-appropriate design code. In our report and in the Bill, we stress the importance of risk assessment by the regulator of the different services that are offered, and of the principles of safety by design, particularly in regard to services that are accessed by children and products that are designed for and used by children. I spoke earlier about the regulator’s power to seek data and information from companies about younger users, and to challenge companies whose platform policy is that those under 13 cannot access their content, asking whether they have research showing that they know people under that age are using it but allow them to keep their accounts open anyway. Keeping children off the systems that are not designed for them, and from which they are supposed to be deliberately excluded, could be an important role for the regulator to take on.

Richard Thomson (Gordon) (SNP)

I add my own party’s grateful thanks to the Committee for the diligent and thorough way in which it has gone about compiling the report, and we hope to see that feed through into the legislation that eventually comes forward. Does the hon. Gentleman agree that, with the enhanced role that is envisaged for Ofcom, it is all the more important that, whoever heads Ofcom, the regulator can act as a genuinely independent regulator?

Damian Collins

I thank the hon. Gentleman for his question. We are also grateful to the hon. Member for Ochil and South Perthshire (John Nicolson), who is a member of the Committee but is not in his place today. The question of the next chair of Ofcom was not one that the Committee was asked to consider. The Government will run a process, and the DCMS Committee will hold a hearing for the pre-appointment scrutiny of the new candidates. The hon. Gentleman is right to say that online safety will be a big job for Ofcom. The world will be watching, and we have to get the legislation right and ensure that Ofcom has the resources it needs to do the job. Ofcom believes that it has those resources, and that it has the powers to do the job, but it should be an ongoing role for this House to scrutinise that process and ensure that it is being run effectively.