Artificial Intelligence: Legislation

Viscount Colville of Culross Excerpts
Monday 21st July 2025

Lords Chamber
Lord Vallance of Balham (Lab)

I agree that this is an urgent issue, and it is changing day by day. The urgency is reflected in the work that has already taken place through the Online Safety Act, the Data (Use and Access) Act and, of course, the Crime and Policing Bill. But the need to get the legislation right for a more widespread AI Bill is important and has to be taken with due consideration. It would be very wrong to try to rush this. A consultation that brings in all the relevant parties will be launched, and that will be the time when we can make sure that we get this absolutely right.

Viscount Colville of Culross (CB)

My Lords, at the Bletchley AI safety summit, major AI companies, such as Google, signed a voluntary agreement that they would not release AI frontier models without a safety card explaining how they had been tested and by whom. However, in March this year, Google released its Gemini 2.5 model without such a safety card. Does the Minister agree that examples such as this only add to the pressure for AI model safety testing to be put on a statutory basis?

Lord Vallance of Balham (Lab)

We do agree that the issue of safety in AI is very important. That is why we formed the AI Security Institute, which is busy working with companies around the world: testing their models, bringing their models in, working out where the vulnerabilities are, working in a way that allows those companies to build in the safety requirements that are needed and, importantly, working with other AI safety and security institutes around the world. They have between them formed a group that is looking at these very issues. This is something we will be very vigilant on. It is something the world needs to be vigilant on as these models rapidly advance.