AI Safety Debate
Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not form part of the official record
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)
It is a pleasure to serve with you in the Chair, Ms Butler, for my first Westminster Hall debate. It is a particular pleasure not only to have you bring your technological expertise to the Chair, but for the hon. Member for Strangford (Jim Shannon) to be reliably present in my first debate, as well as the UK’s—perhaps the world’s—first AI MP, my hon. Friend the Member for Leeds South West and Morley (Mark Sewards). It is a distinct pleasure to serve with everyone present and the expertise they bring. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this debate on AI safety. I am grateful to him and to all Members for their very thoughtful contributions to the debate.
It is no exaggeration to say that the future of our country and our prosperity will be led by science, technology and AI. That is exactly why, in response to the question on growth posed by the hon. Member for Runnymede and Weybridge (Dr Spencer), we recently announced a package of new reforms and investments to use AI to power national renewal. We will drive growth through developing new AI growth zones across north and south Wales, Oxfordshire and the north-east, creating opportunities for innovation by expanding access to compute for British researchers and scientists.
We are investing in AI to drive breakthroughs in developing new drugs, cures and treatments. But we cannot harness those opportunities without ensuring that AI is safe for the British public and businesses, nor without agency over its development. I was grateful for the points made by my hon. Friend the Member for Milton Keynes Central (Emily Darlington) on the importance of standards, and by the hon. Member for Harpenden and Berkhamsted (Victoria Collins) on the importance of trust.
That is why the Government are determined to make the UK one of the best places to start a business, to scale it up and to keep it on our shores, especially in the UK AI assurance and standards market. Our trusted third-party AI assurance roadmap and AI assurance innovation fund are focused on supporting the growth of UK businesses and organisations providing innovative AI products that are proven to be safe for sale and use. We must ensure that the AI transformation happens not to the UK but with and through the UK.
That is why, consistent with the points raised by my hon. Friend the Member for Milton Keynes Central, we are backing the sovereign AI unit, with almost £500 million in investment, to help build and scale AI capabilities on British shores, which will reflect our country’s needs, values and laws. Our approach to those AI laws seeks to ensure that we balance growth and safety, and that we remain adaptable in the face of inevitable AI change.
On growth, I am glad to hear the points made by my hon. Friend the Member for Leeds South West and Morley about a space for businesses to experiment. We have announced proposals for an AI growth lab that will support responsible AI innovation by making targeted regulatory modifications under robust safeguards. That will help drive trust by providing a precise, safe space for the experimentation and trialling of innovative products and services. Regulators will monitor that space very closely.
On safety, we understand that AI is a general-purpose technology, with a wide range of applications. In recognition of the contribution from the hon. Member for Newton Abbot (Martin Wrigley), I reaffirm some of the points he made about being thoughtful in regulatory approaches that distinguish between the technology itself and its specific use cases. That is why we believe that the vast majority of AI should be regulated at the point of use, where the risk arises and tractable action is most feasible.
A range of existing rules already applies to AI systems in their contexts of application. Data protection and equality legislation protect the UK public’s data rights. They prevent AI-driven discrimination where the systems decide, for example, who is offered a job or credit. Competition law helps shield markets from AI uses that could distort them, including algorithmic collusion to set unfair prices.
Sarah Russell
As a specialist equality lawyer, I am not currently aware of any cases in the UK around the kind of algorithmic bias that I am talking about. I would be delighted to see some, and delighted to see the Minister encouraging that, but I am not sure that the regulatory framework would achieve that at present.
Kanishka Narayan
My hon. Friend brings deep expertise from her past career. If she feels there are particular absences in the legislation on equalities, I would be happy to take a look, though none has been pointed out to me to date.
The Online Safety Act 2023 requires platforms to manage harmful and illegal content risks, and offers significant protection against harms online, including those driven by AI services. We are supporting regulators to ensure that those laws are respected and enforced. The AI action plan commits to boosting regulators’ AI capabilities through funding, strategic steers and increased public accountability.
There is a great deal of interest in the Government’s proposals for new cross-cutting AI regulation, not least that shown compellingly by my right hon. Friend the Member for Oxford East (Anneliese Dodds). The Government do not speculate on legislation, so I am not able to predict future parliamentary sessions, although we will keep Parliament updated on the timings of any consultation ahead of bringing forward any legislation.
Notwithstanding that, the Government are clearly not standing still on AI governance. The Technology Secretary confirmed in Parliament last week that the Government will look at what more can be done to manage the emergent risks of AI chatbots, raised by my hon. Friend the Member for York Outer (Mr Charters), my right hon. Friend the Member for Oxford East, my hon. Friend the Member for Milton Keynes Central and others.
Alongside those comments, the Technology Secretary urged Ofcom to use its existing powers to ensure that AI chatbots in scope of the Act are safe for children. Further to the clarifications I have provided previously across the House, if hon. Members have a particular view on where there are exceptions or gaps in the Online Safety Act on AI chatbots that correlate with risk, we would welcome any contribution through the usual correspondence channels.
Kanishka Narayan
I have about two minutes, so I will continue the conversation with my hon. Friend outside.
We will act to ensure that AI companies are able to make their own products safe. For example, the Government are tackling the disgusting harm of child sexual exploitation and abuse with a new offence to criminalise AI models that have been optimised for that purpose. The AI Security Institute, which I was delighted to hear praised across the House, works with AI labs to make their products safer and has tested over 30 models at the frontier of development. It is the best in the world at developing partnerships, understanding security risks and innovating safeguards. Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber-tasks and biological weapon development.
The UK Government do not act alone on security. In response to the points made by the hon. Members for Ceredigion Preseli (Ben Lake), for Harpenden and Berkhamsted, and for Runnymede and Weybridge, it is clear that we are working closely with allies to raise security standards, share scientific insights and shape responsible norms for frontier AI. We are leading discussions on AI at the G7, the OECD and the UN. We are strengthening our bilateral relationships on AI for growth and security, including AI collaboration as part of recent agreements with the US, Germany and Japan.
I will take away the points raised by the hon. Members for Dewsbury and Batley, for Winchester (Dr Chambers) and for Strangford, and by my hon. Friend the Member for York Outer, on health advice and on how we can ensure that the quality of NHS advice is privileged in wider AI chatbot engagement. I will also look further at the important points made by my hon. Friend the Member for Congleton and my right hon. Friend the Member for Oxford East on British Sign Language standards in AI.
To conclude, the UK is realising the opportunities of transformative AI while ensuring that growth does not come at the cost of security and safety. We are doing so by stimulating AI safety assurance markets, empowering our regulators, ensuring that our laws are fit for purpose, and driving change through AISI and diplomacy.