AI Safety Debate
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not comprise part of the official record
Emily Darlington (Milton Keynes Central) (Lab)
It is a pleasure to serve under your chairship, Ms Butler. I thank the hon. Member for Dewsbury and Batley (Iqbal Mohamed) for securing this important debate.
It would be remiss of me, as the MP for Milton Keynes Central, not to acknowledge the opportunities of AI. One in three jobs in Milton Keynes is in tech, often in the edge technologies or edge AIs that are driving the economic growth we want. However, we will not see take-up across businesses unless we have the safest AI, so we must listen to the British Standards Institution, which is located in Milton Keynes and is working on standards for some of these things.
Nevertheless, I have many concerns. The Molly Rose Foundation has raised many issues around AI chatbots, not all of which are covered by current legislation. It has documented how Alexa instructed a 10-year-old to touch a live electrical wire, and how Snapchat’s My AI told a 13-year-old how to lose their virginity to a 31-year-old; luckily, the account was in fact an adult posing as a 13-year-old. We have seen other examples involving suicide, and chatbots claiming that Hitler had the answers to climate change, and research has found that many children are unable to recognise that chatbots are not human. AI algorithms also shadow ban women and women’s health content, as others have mentioned.
The tech is there to make AI safe, but there is little incentive for companies to do so at the moment. The Online Safety Act goes some way, but not far enough. Our priorities must be to tackle the creativity and copyright issues; deepfakes and the damage they do, in particular, to young girls and women; and the misinformation and disinformation that is being spread and amplified by algorithms because it keeps people online longer, making companies money. We must also protect democracy, children, minorities and women.
How do we do that? I hope the Minister is listening. For me, it is about regulation and standards—standards are just as important as regulation—and transparency. The Science, Innovation and Technology Committee has called for transparency on AI algorithms and AI chatbots, but we have yet to see real transparency. We must also have more diversity in tech—I welcome the Secretary of State’s initiatives on that—and, finally, given the world we are in, we must have a clear strategy for the part that sovereignty in AI plays in our security and our economic future.
Order. I would like to try to allow two minutes at the end for the Member in charge to wind up the debate. Will the Front Benchers take that into account, please?
Kanishka Narayan
My hon. Friend brings deep expertise from her past career. If she feels there are particular absences in the legislation on equalities, I would be happy to take a look, though that has not been pointed out to me to date.
The Online Safety Act 2023 requires platforms to manage harmful and illegal content risks, and offers significant protection against harms online, including those driven by AI services. We are supporting regulators to ensure that those laws are respected and enforced. The AI action plan commits to boosting AI capabilities through funding, strategic steers and increased public accountability.
There is a great deal of interest in the Government’s proposals for new cross-cutting AI regulation, not least as compellingly shown by my right hon. Friend the Member for Oxford East (Anneliese Dodds). The Government do not speculate on legislation, so I am not able to predict future parliamentary sessions, although we will keep Parliament updated on the timings of any consultation ahead of bringing forward any legislation.
Notwithstanding that, the Government are clearly not standing still on AI governance. The Technology Secretary confirmed in Parliament last week that the Government will look at what more can be done to manage the emergent risks of AI chatbots, raised by my hon. Friend the Member for York Outer (Mr Charters), my right hon. Friend the Member for Oxford East, my hon. Friend the Member for Milton Keynes Central and others.
Alongside those comments, the Technology Secretary urged Ofcom to use its existing powers to ensure that AI chatbots in scope of the Act are safe for children. Further to the clarifications I have provided previously across the House, if hon. Members have a particular view on where there are exemptions or gaps in the Online Safety Act on AI chatbots that correlate with risk, we would welcome any contribution through the usual correspondence channels.
Kanishka Narayan
I have about two minutes, so I will continue the conversation with my hon. Friend outside.
We will act to ensure that AI companies are able to make their own products safe. For example, the Government are tackling the disgusting harm of child sexual exploitation and abuse with a new offence to criminalise AI models that have been optimised for that purpose. The AI Security Institute, which I was delighted to hear praised across the House, works with AI labs to make their products safer and has tested over 30 models at the frontier of development. It is uniquely the best in the world at developing partnerships, understanding security risks, and innovating safeguards, too. Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber-tasks and biological weapon development.
The UK Government do not act alone on security. In response to the points made by the hon. Members for Ceredigion Preseli (Ben Lake), for Harpenden and Berkhamsted, and for Runnymede and Weybridge, it is clear that we are working closely with allies to raise security standards, share scientific insights and shape responsible norms for frontier AI. We are leading discussions on AI at the G7, the OECD and the UN. We are strengthening our bilateral relationships on AI for growth and security, including AI collaboration as part of recent agreements with the US, Germany and Japan.
I will take away the points raised by the hon. Members for Dewsbury and Batley, for Winchester (Dr Chambers) and for Strangford, and by my hon. Friend the Member for York Outer (Mr Charters), on health advice and how we can ensure that the quality of NHS advice is privileged in wider AI chatbot engagement, as well as the points made by my hon. Friend the Member for Congleton and my right hon. Friend the Member for Oxford East on British Sign Language standards in AI. Those are important points that I will look at further.
To conclude, the UK is realising the opportunities of transformative AI while ensuring that growth does not come at the cost of security and safety. We do this by stimulating AI safety assurance markets, empowering our regulators, ensuring our laws are fit for purpose, and driving change through AISI and diplomacy.