Online Harms Debate
Ben Spencer (Conservative - Runnymede and Weybridge)
Commons Chamber

I congratulate the hon. Member for St Neots and Mid Cambridgeshire (Ian Sollom) on securing this debate. As with so many debates over recent months, it has shown that online harms are a matter of paramount importance to Members on both sides of the House and in the other place. As was to be expected, every Member who spoke focused on online harms. I think only one Member, the hon. Member for Cowdenbeath and Kirkcaldy (Melanie Ward), spoke about some of the positives of the internet age.
I would usually say that it has been a pleasure to listen to and take part in the debate, but it really has not been in this case, because it has been a pretty grim debate. We have had a tour de force discussion of all the horrors that young people and adults are exposed to on the internet, and we have heard about the importance for our society and country of tackling them.
I am very proud of the steps that His Majesty’s official Opposition took in government to make the online environment a safer place, from bringing the Online Safety Act into force to the commendable and tenacious work of my noble Friends in the other place, especially Baroness Owen and Baroness Bertin, who are the staunchest of advocates for protections from digital forms of abuse for women and girls. Members will know that Baroness Owen secured important amendments to the Sexual Offences Act 2003 to criminalise the solicitation of sexually explicit deepfake images. Baroness Bertin’s report and campaigning have resulted in amendments being tabled to the Crime and Policing Bill concerning nudification apps. That is by no means the extent of their important work.
The aim of the Conservatives’ Online Safety Act was to build an environment where adults could access legal content freely and where children would enjoy greater protections. I welcome in particular the entry into force last year of Ofcom’s protection of children codes. I also welcome the enforcement action that Ofcom has already taken under the Act to tackle file-sharing sites disseminating child sexual abuse material, and pornography sites that have failed to put in place highly effective age-assurance measures to prevent children from accessing such content. However, we know that concerns regarding children’s social media use go well beyond content that is explicitly harmful and subject to restrictions under the Online Safety Act.
As a result of addictive algorithms that drive excessive use and unhealthy patterns of behaviour, parents across the country are rightly concerned about their children’s social and emotional development. That is why we called for a social media ban for children under the age of 16. This month, the Government regrettably voted down amendments to the Children’s Wellbeing and Schools Bill, which were secured in the other place by my noble Friend Lord Nash, to bring such a ban into effect. In response to pressure from His Majesty’s loyal Opposition and other Members across the House, the Government have now launched their own consultation on measures to restrict access to social media for under-16s, alongside several other online safety matters. If the Government had accepted our amendment to the Data (Use and Access) Act 2025, such a review would be well under way by now, and we would be closer to a solution on this generationally important issue, but we are where we are. Consultation is no substitute for action, and I sincerely hope that the Government will deliver on the timescales set out for responding and bringing in measures after their consultation concludes in May.
As with any rapidly evolving technology, social media and other online tools give rise to new apps and sites that pose novel threats and demand a response from Government and regulators. We have seen this most recently with AI chatbots, some of which may fall outside the scope of the regulatory framework while promoting self-harm content to young people. A particular harm that I have raised with Ministers and Ofcom, in which there has been a disturbing increase, is the use of AI chatbots to obtain medical or other advice that should be sought from regulated professionals. The hon. Member for Winchester (Dr Chambers) mentioned that in his speech. Last year, the Medicines and Healthcare products Regulatory Agency established a national commission, which ran a call for evidence on the suitability of the UK’s framework for regulating AI in healthcare. The call for evidence closed last month, and I look forward to seeing the commission’s conclusions and the Government’s proposals for dealing with the risks that will no doubt be highlighted.
A fundamentally important aspect of online harm that has attracted comparably less media attention and debating time in Parliament is the threat to democracy of online disinformation campaigns perpetrated by hostile state actors and their affiliates. The Science, Innovation and Technology Committee reported last year that online foreign interference and disinformation campaigns are putting UK citizens at risk. We also had credible reports last year of Iranian state-backed digital interference in the Scottish independence referendum. The risk posed by that type of activity is intensifying, as artificial intelligence tools provide the capability to generate deepfake content purporting to represent politicians or campaigns, amplified by foreign, hostile, state-controlled bot accounts.
As people—particularly young people—increasingly obtain their news online, it is more important than ever that we consider the potential of digital watermarking tools to demonstrate the provenance and authenticity of content published on the internet. The danger posed by online disinformation is likely to increase as a result of geopolitical tensions. In their report on artificial intelligence and copyright, which was published yesterday, the Government recognised the risks posed by digital replicas and deepfakes in spreading convincing disinformation online, and committed to exploring options to address this growing problem. The Government also discussed the need to label AI-generated content so that disinformation is easier for users to spot.
Will the Minister provide timescales for that further work, and an update on the Government’s broader strategy for countering AI-generated democratic interference material? What role does the Minister think digital watermarking tools will play in countering the proliferation and impact of deepfake videos and content?