Social Media: Misinformation and Algorithms

Thursday 17th July 2025


Commons Chamber
Dame Chi Onwurah (Newcastle upon Tyne Central and West) (Lab)

I am grateful to the Backbench Business Committee for allocating time for this statement. Today I speak on behalf of the Science, Innovation and Technology Committee, but also on behalf of the hundreds of thousands of people whose lives were profoundly affected by last year's riots, as well as everyone impacted by the long shadow of social media misinformation.

I want to put on the record my thanks to the Committee Clerks and specialists who have supported this inquiry, and the many witnesses who gave evidence. They have helped to shape our report, and we are very grateful, particularly to those who shared their real-life experience of the riots and the online hate that accompanied them. The Committee would also like to extend our deepest sympathies to the families of the three little girls murdered in Southport, and everyone affected.

Like many nations, the UK is grappling with the immense challenge of regulating global tech giants: companies whose platforms shape our societies, economies and democracies, often with resources that dwarf those of Governments. For example, the UK's entire public sector budget is about equal to Meta's market capitalisation. As the representative of the British people, Parliament must be able to understand the impact of these companies, scrutinise their actions and regulate them in the public interest where necessary. However, the Committee experienced significant challenges in seeking to do that during the inquiry. We were reassured by statements from Google, Meta, TikTok and X in our evidence session that they accepted their responsibility to be accountable to the British people through Parliament, and we hope to see that in practice as our work in this area continues.

The horrific attacks in Southport on 29 July 2024 and the violent unrest that followed are a stark reminder of the real-world consequences of the viral spread of online misinformation. Hateful and misleading content spread rapidly, amplified by opaque recommendation algorithms. Protests turned violent, often targeting Muslim and migrant communities, driven in part by the spread of these messages. These events provided a snapshot of how online activity can contribute to real-world violence and hate.

Many parts of the long-awaited Online Safety Act 2023 were not fully in force at the time of the riots, but the Committee found little evidence that they would have made a difference if they had been. Moreover, the Act is out of date. It regulates at a technology and content level, rather than on principles or outcomes, and therefore fails to adequately address generative artificial intelligence (ChatGPT, deepfakes and synthetic misinformation) even as it becomes cheaper and easier to access. Generative AI will make the next misinformation crisis much more dangerous.

Having spent six years working for Ofcom before entering Parliament, I believe strongly that regulating specific technologies does not work. Our online safety regime should instead be based on principles that remain sound in the face of technological development. Social media has made many important and positive contributions, helping to democratise access to a public voice and to connect people far and wide. It also carries significant risks.

The advertisement-based business models of most social media companies mean that they promote engaging content, often regardless of its authenticity. That spills out across the entire internet via the opaque, under-regulated digital advertising market, incentivising the creation of content that will perform well on social media, as we saw during the 2024 unrest.

This is not just a social media problem; it is a systemic issue that promotes harmful content and undermines public trust. Our concerns were exacerbated when we questioned representatives of regulators and the Government. We were met with confusion and contradiction at high levels, and it became evident that the UK’s online safety regime has some major holes.

After four public sessions, more than 80 written submissions and extensive deliberations, our findings are clear: the British people are not adequately protected from online harms. We have identified five key principles that we believe are crucial to the regulation of social media and related technologies, and that drive our recommendations to the Government.

Our first principle is public safety. Algorithmically amplified misinformation is a danger that companies, Government, law enforcement and the security services need to work together to address. That amounts to saying that misinformation is harmful, which may sound obvious, but it has not been recognised as such. As a consequence, platforms should be compelled to demote fact-checked misinformation and to establish processes for taking more stringent measures during crises. We propose that the Government carry out research into how platforms should tackle misinformation and how far recommendation algorithms spread harm. Furthermore, all AI-generated content should be visibly labelled.

Our second principle is free and safe expression. Measures to tackle amplified misinformation must be in line with the fundamental right to free expression.

Our third principle is responsibility. Users should be held liable for what they post online, but the platforms on which they post are also responsible, and should be held accountable for the impact of amplifying harmful content. That may sound obvious; indeed, it is not the first time that a Select Committee has said it. Yet widespread uncertainty remains as to whether platforms bear responsibility for the legal content that they host and distribute. The report recommends that platforms be obliged to undertake risk assessments and to report on content that is legal but harmful. New regulatory oversight, clear and enforceable standards and proportionate penalties are needed to cover the digital advertising process.

Our fourth principle is control. Critically, users should have control over both their personal data and what they see online. We recommend that users have a right to reset the data used by platform recommendation algorithms.

Our fifth and final principle is transparency. The technology used by social media companies, including recommendation algorithms and generative AI, should be transparent, accessible and explainable to public authorities. Transparency is needed for participants in the digital advertising market. Basically, if we cannot explain it, we cannot understand the harm it may do.

I am a tech evangelist: I believe that technology, like politics, is an engine of progress, but that does not mean we have to accept it as it is. Our report sets out detailed recommendations to ensure that we do not have a repeat of the violent and harmful riots last year. We urge the Government to acknowledge that the Online Safety Act is not fit for purpose; to adopt these five principles to build a stronger, future-proof online safety regime, with clear, enforceable standards for social media platforms; and to implement our recommendations. Without action, it is only a matter of time before we face another crisis like the Southport riots, or even worse.

Dr Ben Spencer (Runnymede and Weybridge) (Con)

I thank the hon. Lady and the Select Committee that she chairs for delivering this important review. I also thank her for her statement to the House, which has highlighted the scale of the challenge we face in relation to the proliferation of misleading and harmful content online. I join her in extending my prayers and sympathies to all those affected by the horrors in Southport last year.

Given the report’s findings that young people are particularly susceptible to misleading and harmful content and online radicalisation, due to the stage of their cognitive development, does the hon. Lady consider that the Government should commit to conducting a review of the evidential case for raising the digital age of consent for social media platforms from 13 to 16?

Dame Chi Onwurah

I thank the hon. Member for his comments. I also thank him for highlighting the particular issue of young people, their cognitive development and the lack of protection from misinformation that they consequently enjoy. The Committee did not recommend that the Government commit to such a review, but we are considering a further inquiry into the impact of social media on the cognitive development of young people, and I am sure that we will make recommendations arising from that work.