Baroness Kidron debates involving the Department for Science, Innovation & Technology during the 2019 Parliament

Artificial Intelligence: Regulation

Baroness Kidron Excerpts
Tuesday 14th November 2023


Lords Chamber
Viscount Camrose (Con)

I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation—I think all countries regulating would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject. That includes data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, or indeed cross-cutting AI risks, and to take that work forward. On top of that, we have the most concentrated and advanced thinking on AI safety anywhere in the world to take us forward on the pathway towards safe, trustworthy AI that drives innovation.

Baroness Kidron (CB)

My Lords, given the noble Viscount’s emphasis on the gathering of evidence and evidence-based regulation, can we anticipate having a researchers’ access to data measure in the upcoming Data Protection and Digital Information Bill?

Viscount Camrose (Con)

I thank the noble Baroness for her question and recognise her concern. In order to be sure that I answer the question properly, I undertake to write to her with a full description of where we are and to meet her to discuss further.

King’s Speech

Baroness Kidron Excerpts
Tuesday 14th November 2023


Lords Chamber
Baroness Kidron (CB)

My Lords, I too welcome the right reverend Prelate the Bishop of Newcastle. I admire her bravery in wearing the colours of Sunderland and Newcastle simultaneously.

I declare my interests as chair of 5Rights Foundation, chair of the Digital Futures Commission at the LSE and adviser to the Institute for Ethics in AI at Oxford. Like others, I will start with Bletchley Park. That summit was kicked off by the Prime Minister, who set out his hopes for an AI-enabled world while promising to tackle head-on its potential dangers. He said:

“Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse”—

but these are not potential dangers; they exist here and now.

In the race for AI prominence and the vast riches the technology promises, the tech leaders came to town warning us that the future they are creating is untrammelled, unprincipled and insecure, and that AI will overwhelm human agency. That language of existential threat makes for fabulous headlines, but it rather disempowers the rest of us. Because if we ask whether we want to supercharge the creation of child sexual abuse material, I would hazard a guess that the answer will be no; whether it is okay for facial recognition trained on white faces to prevent a black parent or child from getting a security pass to enter a school, again no; or whether we believe that just because something is technically possible—the creation of a disease or a weapon—it should be done, again no. Indeed, we have a record of containing the distribution of inventions that have the capability of annihilating us.

AI is not separate and different, and the language that we use to describe it—either its benefits or threats—must make that clear. AI is built, used and purveyed by business, government, civil society and even criminals. It is part of the human arrangements over which, for the moment, we still have agency. Language that disempowers us is part of the deliberate strategy of tech exceptionalism, advocated by industry lobbyists over decades, which has successfully secured the privatisation of technology, creating untold wealth for a few while outsourcing the cost to society. Who owns AI, who benefits, who is responsible and who gets hurt is still in the balance and I would assert that these are questions that we must deal with here and now.

I was disappointed to hear the noble Viscount say earlier at Questions that the Government were taking a sit-back-and-wait approach, so I have three rather more modest questions for the Minister, each of which could be tackled here and now. The first is: what plans do the Government have to ensure the robust application of our existing laws? As we saw earlier, the large language models and image creation services have used copyright material at scale. Getty Images has been testing this in court on behalf of its artists and photographers, but other rights holders, including some of the world’s finest authors, are unable to mount an individual challenge while their art and livelihood are scraped into vast datasets from which they do not benefit. I ask the Minister whether it would be a good idea to have an analysis of how new models are failing to uphold existing law and rights obligations as a first and urgent task for the new AI Safety Institute.

Secondly, how do the Government plan to use their legislative programme to tackle gaps that have been identified? For example, the creation, distribution and consumption of CSAM is illegal, covered by at least three separate laws in the UK. But not one of these laws covers the models or plug-ins that create CSAM at scale—in one case, more than 20,000 images in a matter of hours—so the upcoming data protection Bill provides us with an opportunity to make it an offence to train, share or possess software that is trained on, or trained to produce, CSAM.

Also on the Prime Minister’s list is disinformation. Synthetic material that passes for real is also a here-and-now problem: the London Mayor, whose voice was fabricated; celebrities falsely endorsing products; or a child’s picture scraped from a school website to train those aforesaid CSAM models. The loss of control of one’s personhood carries with it a democratic deficit and potentially overwhelming individual suffering. I ask the Minister whether the Government are willing to put beyond doubt that AI-generated biometric and image data constitutes a form of personal data over which an individual, whether adult or child, has rights, including the right to object to its use.

Both the data Bill and the digital markets Bill could create new data models—a subject that the noble Baroness, Lady Stowell, articulated very well in a recent article in the Times. New approaches to data rights, with new owners of data, are one way of having a voice in our AI-enabled future.

Thirdly and finally, I would like to ask the Minister why the Government have left children on the margins. I attended two official fringe events of the summit, one hosted by the then Home Secretary about child sexual abuse, the other convened by St Mary’s and the Turing Institute about embedding children’s rights in AI systems. Children are early adopters of technology—canaries in the coal mine—and many of us know the cost of poorly regulated digital environments for them. I am bewildered that, so soon after Royal Assent to the Online Safety Act, and in clear sight of the challenges that AI brings, the Government risk downgrading children’s data rights rather than explicitly protecting the age-appropriate design code and the definitions on which it is founded. Children should have been front and centre of the concerns at Bletchley, not pushed to the fringe, and perhaps the Minister could repair that damage by putting them front and centre of the new AI Safety Institute. After all, it is children who will inhabit the world we are building.

Finally, AI will create enormous benefits and upheaval across all sectors, but it also promises to put untold wealth and power in the hands of even fewer people. However, there are things in the here and now that we can do to ensure that technology innovates in ways that support human agency. It is tech exceptionalism that poses an existential threat to humanity, not the technology itself.