Thursday 19th March 2026

Commons Chamber
Ian Sollom (St Neots and Mid Cambridgeshire) (LD)

I beg to move,

That this House believes that current legislation is falling short in preventing online harms; and calls on the Government to review whether it is necessary to introduce new legislation that is centred around harm reduction in this Parliament.

I thank the Backbench Business Committee for granting this debate. Not long after my election in 2024, I visited the Internet Watch Foundation in Cambridgeshire. That organisation is on the frontline of the fight against child sexual abuse material, and is one of only a handful of non-law enforcement bodies worldwide with the legal power to proactively seek out and remove online images and videos of such abuse. During my visit, the IWF told me that, in the preceding five years alone, it had taken down more than 1 million webpages that showed at least one child sexual abuse image—often, they showed hundreds or thousands. The IWF’s annual report last year revealed that 2025 was the worst year on record for child sexual abuse material. Its analysts confirmed 312,000 reports—a 7% rise on the year before. Most starkly, in 2024 they discovered 13 AI-generated videos of child sexual abuse, but in 2025 the figure was 3,440—a rise of over 26,000%, for those who are interested in numbers. Nearly two thirds of those videos were category A material, which is the most extreme classification.

A little while after my visit, I began to work with the Molly Rose Foundation on the proposal in this motion. At the time, the Online Safety Act 2023 had been in law for nearly two years, and the protection of children codes of practice that came from it, which promised to improve user safety dramatically, had just been published and implemented. The text of those codes was heavily criticised by civil society, and even by the Children’s Commissioner, who said they would simply not be strong enough to protect children from the

“multitude of harms they are exposed to online every day.”

It seemed timely for a motion to be brought before the House so that we could scrutinise the Online Safety Act and its resultant codes, as they are now being used in practice, and highlight to the Government the need to take action in this Parliament to protect young people. After the codes were implemented in mid-2025, the Mental Health Foundation published research stating that 68% of young people had experienced harmful content online. It described the harm as one of

“the biggest looming threats to young people’s mental health”.

In October 2025, the Molly Rose Foundation found that over a third of children reported that they had been exposed to at least one type of high-risk content in the past week. In a classroom of 30 children, that is 11 who are, every day, being shown content that promotes suicide and self-harm or that romanticises depression and eating disorders. That is the exact “primary priority content” that the UK’s flagship piece of online safety legislation explicitly promised it would protect them from. Just this week, the BBC aired “Inside the Rage Machine”, which used whistleblower testimony and evidence to lay bare how social media giants such as Meta and TikTok are consistently and deliberately pushing harmful content to users, having found that users’ outrage fuelled engagement.

All of that is to say that if the motion for this debate seemed appropriate at the beginning of this Parliament, when I first visited the IWF, it is now urgent. Every week, I hear from parents, young people and organisations who are fighting a losing battle against the proliferation of online harms because, despite its noble aims, the current legislation is falling short of what Parliament envisaged it would do.

Leigh Ingham (Stafford) (Lab)

Last week, I ran a supermarket surgery in my constituency. I had a flip chart that asked whether people felt that social media should be banned for under-16s. It is rare to get this level of agreement, but 78% of my constituents of all ages—older people, young people and even children—said yes. What was consistent was the fear they felt about this space and the belief that it is doing damage to young people as they grow up. I am not 100% sure of my position yet, but does the hon. Member agree that the Government are right to consult to work out the best option to protect young people from social media?

Ian Sollom

The text of the motion asks for a review, and that is certainly what I want to see.

I have not come here today to stir up panic or to imply that the wellbeing of our children, or indeed our adults, is doomed. There is hope and we should not have to accept harm as a reality of life on the internet. As the Molly Rose Foundation chief executive officer, Andy Burrows, noted this week after campaigning pushed both TikTok and Meta to row back on plans for end-to-end encryption in direct messaging,

“tech firms are not immune to pressure”.

However, pressure on its own is not enough. The Government must urgently look at strengthening the Online Safety Act to ensure that pressure has robust legislative backing behind it, and that Ofcom actually has the power to enforce the regulations that will protect us all from harm.

Online harm comes in three forms. First, there is harmful content: the outright illegal and the extreme, posted and peddled by bad actors across social media platforms. Then we have harmful interactions with bad actors, including grooming, cyber-bullying and extortion. I am sure that Members across the House will share many stories of the impact of both types of harm today; it is a tragedy just how many there are. I want to focus on the third form of online harm, which is the harm that arises from not just the type of content encountered online, but the intensity with which it is repeatedly pushed on to young people by the platforms themselves.

This week, I was pleased to participate in the Royal Society pairing scheme. I was paired with Doctor Lizzy Winstone, a researcher from the University of Bristol whose work focuses on how young people use social media and its impact on their mental health. Her most recent research investigates the algorithmic recommendation of content as one of the primary mechanisms that shapes young people’s digital mental health. She and others have found that a large part of online harm is structural, arising from not just individual bad actors, but business models designed at their very core to maximise attention and to profit from provocation.

Social media is built to be addictive. Hooking users in and keeping them engaged is at the very heart of almost every platform’s business model. Algorithmic models cause harm through both overtly harmful content and content that is harmless on the face of it. There are attention deficit harms caused by passive screen watching and health harms associated with an increasingly sedentary lifestyle. Higher social media use has been directly linked to shorter sleep duration and difficulties with sleep onset. Gambling harm is often overlooked, but a recent Guardian investigation found that Meta AI was pointing vulnerable social media users to illegal online casinos and even suggesting ways to bypass UK gambling safeguards. Regulation is clearly not keeping pace with the evolving digital landscape.

Often, it is the directly harmful, even illegal, content that is caught up in these algorithms. The shock, disgust and strong emotion inevitably caused by this content create engagement: we watch for longer, we engage more, and the algorithm takes this as permission to show us even more of it to keep us hooked. Endless scrolling functionalities allow already vulnerable users to fall into a world where there is no escape from this cycle. Members will be aware that we Liberal Democrats have long called for platforms to implement built-in caps on social media doomscrolling.

Molly Russell tragically took her own life in 2017, and at her inquest it was concluded for the first time ever that content on social media had contributed to the death of a young person. Before she died, she had viewed thousands of suicide and self-harm videos and images on Pinterest and Instagram, some of which were pushed to her without her asking to see them. “Binge” was the word used by the coroner: Molly was able, and even encouraged by the platforms, to binge on this content.

The normalisation of these recommendation mechanisms has created an awful, self-perpetuating cycle. One case study from the University of Bristol described a 17-year-old girl who was forcing herself to repeatedly watch graphic content of a gory accident on TikTok to try to desensitise herself to violence. She knew that she would be regularly exposed to this kind of content online and wanted to train herself to be able to watch it and not feel sick. We can only assume that due to her increased attention, she was shown even more of this horrific content.

Recommendation systems in and of themselves are no bad thing. They create a personalised space to explore interests and sometimes do filter out content that a user has no interest in. The problem is that a user’s engagement with content does not always indicate their actual interest in it. Another young person from the University of Bristol study—a trans man—described feeling compelled to intervene in homophobic and transphobic comments sections, to try to support his community and challenge prejudice. He was understood by the platform to have engaged, and subsequently he was bombarded with more and more of the same hateful content. The tension between knowing that his algorithm would register his intervention as interest and wanting to actively challenge hateful views was a constant source of stress online.

Problems also arise from a lack of transparency. Not only are social media platforms under no obligation to publish their algorithms, but with AI increasingly being used to build and continually iterate these algorithms, the platforms themselves are often unaware of the exact mechanisms that shape experience. Harm is occurring as a result of an unaccountable black box. Young people are not entirely passive in this system—they know it is happening—but platform tools provide very limited control over what the algorithm continues to recommend.

Looking at Ofcom’s summary of the protection of children codes of practice, we can see how a weak interpretation of the Online Safety Act is allowing such harm to be perpetuated. Volume 4, section 17 says that platforms must

“Ensure content recommender systems are designed and operated so that content indicated potentially to be PPC”—

primary priority content, which covers suicide, self-harm, eating disorder and mental health content—

“is excluded from the recommender feeds of children”.

Research shows that children were most likely to report having seen harmful content through feeds with recommender systems—very few actively seek it out—so the intention behind this measure seems good. But then we see that it applies only to “child-accessible” parts of a service that are

“medium or high risk for one or more specific kinds of PPC”.

In Ofcom’s December review, not a single social media platform rated itself high risk for suicide or self-harm content. There is a clear gap between the intention of the legislation and how it is being implemented. That is because the Online Safety Act and its codes are ultimately built around compliance and not harm reduction. Rules-based legislation means that platforms can happily meet their legal duties if measures in the codes are followed, and they are under no obligation to effectively and proactively address the harms identified in their risk assessments. Putting only a moral duty on platforms to protect young people from harm is not going to work—we have seen for years that it does not work.

How can we expect the very same platforms that have been shown to deliberately and knowingly peddle harmful content to young people to essentially police themselves? Why would they bother when it is so much more profitable to tick already loosely defined boxes? A full review of the current legislation must investigate the barriers that Ofcom says are preventing it from delivering on the intentions of Parliament. That includes the safe harbour principle, which allows platforms to claim compliance and skirt enforcement action on harms about which they are already aware, and the complete lack of any obligation in the Act that platforms take active steps to reduce the risk of harm to users. In practice, that means that a platform can follow Ofcom’s codes to the letter, even while its own risk assessment shows that it is aware of serious ongoing harm, and face no enforcement consequences.

Amendments could be passed within months to introduce the robust, risk-based minimum age limits that we Liberal Democrats have been calling for. Minimum joining ages should be determined by a platform-specific assessment of age appropriateness and risk. That will incentivise the market to adopt lower-risk functionalities if platforms wish to open themselves to a wider pool of users.

We could argue that a review of sorts has already taken place: every coroner’s report, every tragic story told in the Chamber and every investigation by charities and organisations together make up that review. The evidence is plainly there, but the harm is being allowed to continue. We are here as Members of Parliament to scrutinise, and we have done that. There have been 12 debates with the words “online safety” in the title this Parliament, and there have been hundreds of references to “online harm”, yet there has been little indication that the Government are addressing the core issues raised in this debate.

I hope that Members will use this debate to raise the full range of harms we hear about in our work. I ask the Minister to respond specifically to these questions: will the Government examine whether the safe harbour principle is serving Parliament’s original intentions or has become a mechanism that platforms use to avoid accountability for harms about which they are already aware? Will the Government commit to ensuring that any new legislation this Parliament brings forward is built around harm reduction and not compliance?