Artificial Intelligence (Regulation) Bill [HL]

2nd reading
Friday 22nd March 2024


Lords Chamber
Lord Fairfax of Cameron (Con)

My Lords, I too congratulate my noble friend Lord Holmes on bringing forward this AI regulation Bill, in the context of the continuing failure of the Government to do so. At the same time, I declare my interest as a long-term investor in at least one fund that invests in AI and tech companies.

A year ago, one of the so-called godfathers of AI, Geoffrey Hinton, cried “fire” about where AI was going and, more importantly, when. Just last week, following the International Dialogue on AI Safety in Beijing, a joint statement was issued by leading western and Chinese figures in the field, including Chinese Turing award winner Andrew Yao, Yoshua Bengio and Stuart Russell. Among other things, that statement said:

“Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes … We should immediately implement domestic registration for AI models and training runs above certain compute or capability thresholds”.


Of course, we are talking about not only extinction risks but other very concerning risks, some of which have been mentioned by my noble friend Lord Holmes: extreme concentration of power, deepfakes and disinformation, wholesale copyright infringement and data-scraping, military abuse of AI in the nuclear area, the risk of bioterrorism, and the opacity and unreliability of some AI decision-making, to say nothing of the risk of mass unemployment. Ian Hogarth, the head of the UK AI Safety Institute, has written in the past about some of these concerns and risks.

Nevertheless, despite signing the Center for AI Safety statement and publicly admitting many of these serious concerns, the leading tech companies continue to race against each other towards the holy grail of artificial general intelligence. Why is this? Well, as they say, “It’s the money, stupid”. It is estimated that $600 billion in total was invested in AI development between 2020 and 2022, and much more has been invested since. Compare that with the pitifully small sums the AI industry puts into AI safety; this Government, for their part, have committed just £10 million. These factors have led many people around the world to ask how they have accidentally outsourced their entire futures to a few tech companies and their leaders. Ordinary people feel a pervading sense of powerlessness in the face of AI development.

These facts also raise the question of why the Government continue to delay putting in place proper and properly funded regulatory frameworks. Others, such as the EU, US, Italy, Canada and Brazil, are taking steps towards regulation, while, as noble Lords have heard, China has already regulated and India plans to regulate this summer. Here, the shadow IT Minister has indicated that, if elected, a new Labour Government would regulate AI. Given that a Government’s primary duty is to keep their country safe, as we have so often heard recently in relation to the defence budget, the Government’s continued delay is both strange and concerning.

Why is this? There is a strong suspicion in some quarters that the Prime Minister, having told the public immediately before the Bletchley conference that AI brings national security risks that could end our way of life, and that AI could pose an extinction risk to humanity, has since succumbed to regulatory capture. Some also think that the Government do not want to jeopardise relations with leading tech companies while the AI Safety Institute is gaining access to their frontier models. Indeed, the Government proudly state that they

“will not rush to legislate”,

reinforcing the concern that the Prime Minister may have gone native on this issue. In my view, this deliberate delay on the part of the Government is seriously misconceived and very dangerous.

What have the Government done to date? To their credit, they organised and hosted Bletchley, and importantly got China to attend too. Since then, they have narrowed the gap between themselves and the tech companies—but the big issues remain, particularly the critical issue of regulation versus self-regulation. Importantly, and to their credit, the Government have also set up the UK AI Safety Institute, with some impressive senior hires. However, no one should be in any doubt that this body is not a regulator. On the critical issue of the continuing absence of a dedicated unitary AI regulator, it is simply not good enough for the Government to say that the various relevant government bodies will co-operate on oversight of AI. It is obvious to almost everyone, apart from the Government themselves, that a dedicated, unitary, high-expertise and very well-funded UK AI regulator is required now.

The recent Gladstone AI report, commissioned by the US Government, has highlighted similar risks to US national security from advanced AI development. Against this concerning background, I strongly applaud my noble friend Lord Holmes for bringing forward the Bill. It can of course be improved, but its overall intention and thrust are absolutely right.