Online Safety Act 2023: Repeal

Jim McMahon Excerpts
Monday 15th December 2025

Westminster Hall

Jim McMahon (Oldham West, Chadderton and Royton) (Lab/Co-op)

It is a pleasure to serve under your chairmanship, Mr Pritchard. The new media has quickly become the worst of the old media: owned, controlled and directed by the wealthy and powerful. My particular focus will be on social media, because it is no longer a movement of the people, nor has it been built or designed for the public good. It had the potential to be, but it has been deliberately designed not to be, and we are paying the price, with real harm, hate, division, exploitation and extremism normalised.

I hear the petitioners' concerns about the impact on community forums, but the truth is that online regulation does not go anywhere near far enough. That is because the previous Government failed to take the action that was needed. For instance, there is no fit and proper persons test—there should be. There is no editorial responsibility for content on the platforms—there should be. There is no adequate protection from malign foreign influence—there should be. There is no protection from disinformation—there should be. There are no meaningful safeguards against racism, misogyny or hate—there should be. There are no restrictions on Members of Parliament monetising content they produce—there should be; Members should post solely in the public interest, not to generate income for their bank accounts. There is no transparency on algorithms, and no requirement to declare in-kind benefit in politics in the way that there is in almost every other aspect of political gain—again, there should be.

As it stands, truth, democracy and the safeguarding of the public interest are under threat. The previous Government ducked it, offering a watered-down version that was backed up by a toothless regulator. We have seen what is possible when red lines are drawn; Australia has decided that the welfare of its children is more important than the interests of the powerful and the wealthy. That is leadership.

The UK’s failure to stand up to powerful vested interests has played right into the hands of foreign forces who wish harm on our country, our way of life and our democracy. Technology is moving fast, as we are seeing with AI, and frankly, lawmakers need to be much sharper and quicker to keep up. The first duty of any Government is to protect the national security of their citizens, so for the Government, the question is simply this: when will they start to fight on this new front with vigour and finally do what the previous Government failed to do?

--- Later in debate ---
Tom Collins

Thank you, Mr Pritchard. I agree with my hon. Friend the Member for Bournemouth East (Tom Hayes). I was fortunate enough to meet the Worcestershire youth cabinet, which is based in my constituency. I was struck that one of its members’ main concerns was their online safety. I was ready for them to ask for more support in navigating the online world, but that is not what they asked for. They said, “Please do not try to support us any more; support our adults to support us. We have trusted adults, parents and teachers, and we want to work with them to navigate this journey. Please help them so that they can help us.” I thank my hon. Friend for his excellent point.

Jim McMahon

My hon. Friend is making an excellent speech that gets to the heart of some of the tensions. However, he seems to be leaning quite strongly into how the algorithms are self-learning, catching on to what people share organically and then doubling down on it to commercialise the content. Does he accept that some widely used platforms are not just using an algorithm but are deliberately suppressing mainstream opinion and fact in order to amplify false information and disinformation, and that the people benefiting are those who have malign interests in our country?

Tom Collins

Absolutely. My hon. Friend is right. All those algorithms now have hidden interests, which are sometimes just to increase use, but I think we all strongly suspect that they may stray into political agendas. It is remarkable how powerful that part of the online world is. My personal view is that it is not dissimilar to the R number during covid. If a person sees diverse enough content, their worldview will have enough overlap with other people that it will tend to converge. In the old days, “The Six O’Clock News”, or the news on the radio, provided us with shared content that we all heard, whether we agreed with it or not. That anchored us to a shared narrative.

We are now increasingly in echo chambers of reality where we are getting information that purports to be news and reactions that purport to be from human beings in our communities, both of which reinforce certain views. It is increasingly possible that the R number will become greater than one, and our worldviews will slowly diverge further and further. Such an experiment has never been carried out on a society, but it strikes me that it could be extremely harmful.
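
A minimal sketch of that R-number analogy, assuming a toy one-dimensional model of worldview; the update rule, the mixing parameter and the "echo" factor are all illustrative inventions, not anything from the debate or the Act:

```python
# Toy illustration of the "R number for worldviews" analogy: agents hold a
# one-dimensional worldview and nudge it towards whatever content they see.
import random

def simulate(shared_fraction, steps=200, n_agents=200, rate=0.05):
    """Return the spread of worldviews after `steps` rounds of content.

    shared_fraction -- probability an agent is shown shared content (a
    common narrative, like the Six O'Clock News) rather than echo-chamber
    content sitting slightly beyond their current view.
    """
    views = [random.uniform(-1.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        mean_view = sum(views) / n_agents
        for i, v in enumerate(views):
            if random.random() < shared_fraction:
                content = mean_view   # shared narrative anchors everyone
            else:
                content = 1.5 * v     # echo chamber: slightly more extreme
            views[i] = v + rate * (content - v)
    return max(views) - min(views)

for f in (0.9, 0.5, 0.1):
    print(f"shared_fraction={f:.1f}  spread={simulate(f):.2f}")
```

In this toy model, below some critical share of shared content the spread of views stops shrinking and starts growing without bound—the analogue of R rising above one.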

While we are exploring this theme, I would like to point to the opposite possibility. In Taiwan, trust in the Government was at 9% when the digital Minister took office. They created a digital platform that reversed the algorithm so that, instead of prioritising content based on engagement—a good proxy for how polarising or divisive something is—it prioritised how strongly content resonated with both sides of the political divide. The stronger a sentiment was in bridging between those two extremes, the more it was prioritised.

Instead of people competing to become more and more extreme, to play to their own audiences, they competed to express sentiments and make statements that bridged the divide more and more. In the end, as the system matured, the Government were able to start to say things like, “Once a sentiment receives 85% agreement and approval, the Government will take it on as a goal. We will work out how to get there, but we will take it as a goal that the public say we should be shooting for.” By the end of the project, public trust in the Government was at 70%. Algorithms are powerful—they can be powerful for good or for ill. What we need to make sure is that they are safe for us as a society. That should be the minimum standard.
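
A minimal sketch of that bridging idea—the approach behind tools such as Polis, used in Taiwan's vTaiwan process—assuming two known camps and hypothetical vote data; the scoring rule shown is one plausible choice, not the exact algorithm deployed:

```python
# Bridging-based ranking sketch: rank statements by their weakest support
# across two camps rather than by engagement. Data here is hypothetical.

def approval(votes):
    """Fraction of a camp's votes (1 = agree, 0 = disagree) that approve."""
    return sum(votes) / len(votes) if votes else 0.0

def bridging_score(votes_a, votes_b):
    """Taking the minimum means a statement only ranks highly when it
    resonates on both sides, so playing to one audience does not pay."""
    return min(approval(votes_a), approval(votes_b))

statements = {
    "Ride-share drivers should carry insurance": ([1, 1, 1, 0], [1, 1, 0, 1]),
    "Ban ride-sharing outright":                 ([1, 1, 1, 1], [0, 0, 0, 0]),
    "Deregulate taxis completely":               ([0, 0, 0, 1], [1, 1, 1, 1]),
}

# Consensus statements rise; divisive ones sink. A threshold such as the
# 85% agreement the hon. Member mentions could then be applied on top.
for text, (a, b) in sorted(statements.items(),
                           key=lambda kv: bridging_score(*kv[1]),
                           reverse=True):
    print(f"{bridging_score(a, b):.2f}  {text}")
```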

Finally, we can imagine harms that apply at a societal level but come through interaction. That comes, I would say, when we start to treat machines as if they are members of our society—as people. When I first started exploring this issue, I thought that we had not seen that yet. Then I realised that we have: bots on social media and fake accounts that we do not know are not human beings. They are not verified as human beings, yet we cannot help but start to believe and trust what we see. I would say that it is only a matter of time before these bots become more and more sophisticated and with more and more of an agenda—more able to build relationships with us and to influence us even more deeply. That is a dangerous threshold, which points to the need for us to deal with the issue in a sophisticated way.

What next? It is critical that we first start to develop tools—technically speaking, these are models—that classify and quantify these hazards to individual people and to us as a society, so that we can understand what is hazardous and what is not. Then, based on that, we can start to build tools and models that allow us to either validate products as safe—they should, I agree, be safe by design—or provide protective features.

Already, some companies are developing protection algorithms that can detect content that is illegal or hazardous in different ways and provide a trigger to an operating system to, for example, mask that by making it blurred or opaque, either at the screen or the camera level. Such tools are rapidly becoming more and more capable, but they are not being deployed. At the moment, there is very little incentive for them to be deployed.

If, for example, we were to standardise interfaces or sockets of some kind in the software environment, so that these protective tools could be plugged into operating systems or back ends, we could create a market for developing ever more accurate and capable software.
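
A sketch of what such a standardised socket might look like; every name here (the plug-in interface, the verdicts, the host hook) is an assumption for illustration, not an existing standard:

```python
# A common interface any vendor's protection tool could implement and any
# operating system or app back end could load.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    BLUR = auto()    # mask at the screen or camera level
    BLOCK = auto()   # illegal content: do not render at all

@dataclass
class Assessment:
    verdict: Verdict
    confidence: float  # 0.0-1.0, so hosts can apply their own thresholds
    reason: str

class ProtectionPlugin(ABC):
    """The socket: the host calls classify_frame on anything it renders."""

    @abstractmethod
    def classify_frame(self, pixels: bytes, mime_type: str) -> Assessment:
        ...

class TrivialPlugin(ProtectionPlugin):
    """A stand-in vendor implementation; a real one would run a model."""

    def classify_frame(self, pixels: bytes, mime_type: str) -> Assessment:
        return Assessment(Verdict.ALLOW, 0.99, "no hazard classes detected")

def render(pixels: bytes, mime_type: str, plugin: ProtectionPlugin) -> None:
    """What an OS compositor might do with a plugin's verdict."""
    result = plugin.classify_frame(pixels, mime_type)
    if result.verdict is Verdict.BLOCK:
        print("frame dropped:", result.reason)
    elif result.verdict is Verdict.BLUR:
        print("frame blurred:", result.reason)
    else:
        print("frame shown")

render(b"\x00" * 64, "image/png", TrivialPlugin())
```

Standardising the socket rather than the tool is what would create the market: vendors compete on accuracy behind a fixed interface, as they do for safety components in other industries.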

In the world of physical safety, we use a principle called “state of the art”. In contrast to how we all might understand that term, it does not mean the cutting edge of technology; rather, it means safety features that are common enough that they should be adopted as standard and we should expect to have them. The automotive industry is a great example. Perhaps the easiest feature for me to point to is anti-lock brakes, which started out as a luxury feature in high-end vehicles, but rolled out into more and more cars as they became more affordable and accessible. Now they come as standard on all cars. A car without anti-lock brakes could not be sold because it would not meet the state of the art.

If we apply a similar principle to online protection software, tech companies with capable protections would have a guaranteed market. Digital product manufacturers and service providers would have to keep up; that would drive both innovation and uptake. Such principles are already practised in industry. They cost the public purse nothing and generate growth, high-value jobs and national capabilities. Making the internet safe in the right way does not close it down; it creates freedoms and opens it up—freedom to trust what we are seeing, freedom to use it without being hurt, and freedom to rely on it without endangering our national security.

There is another parallel. We would not dream of building a balcony without a railing, but if we had built one we would not decide that the only way to make it safe was to declare that the balcony was for use only by adults. It still would not be safe. Adults and children alike would inevitably come to harm and many of our regulations would not allow it: in fact, there must be a railing that reaches a certain height and is able to withstand certain forces, and it must be designed with safety in mind and be maintained. We would have an inspection to make sure it was safe. Someone designing or opening a building with an unprotected, unbarriered balcony could easily expect to go to prison. We have come to expect our built environment to be safe in that way; having been made robustly safe for adults, it is also largely safe for children. If we build good standards and regulation, we can all navigate the digital world safely and freely.

Likewise, we need to build the institutions to ensure fast and dynamic enforcement. For services, there are precedents for good enforcement. We have seen great examples of that when sites have not complied, such as TCP ports for payment systems being turned off instantly. That is a really strong motivation for a website to comply. It is fast, dynamic and robust, and is very quickly reversible, as the TCP port can be turned back on and the website can once again accept payments. We need that kind of fast, dynamic enforcement if we are to keep up with the fast and adaptive world working around us.
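
A minimal sketch of that style of enforcement, assuming a hypothetical payment port and local iptables rules purely for illustration; a real regulator would act at ISP or payment-processor level, not on a single machine:

```python
# Fast, reversible enforcement: block outbound traffic to a non-compliant
# service's payment endpoint, then restore it on compliance.
import subprocess

PAYMENT_PORT = "8443"  # hypothetical port used by the payment back end
RULE = ["OUTPUT", "-p", "tcp", "--dport", PAYMENT_PORT, "-j", "REJECT"]

def block_payments() -> None:
    # -I inserts the rule at the top of the chain; it takes effect instantly
    subprocess.run(["iptables", "-I", *RULE], check=True)

def restore_payments() -> None:
    # -D deletes the same rule; equally instant and fully reversible
    subprocess.run(["iptables", "-D", *RULE], check=True)
```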

On the topic of institutions, I would like to point out—I would not be surprised if my hon. Friend the Member for Rugby (John Slinger) expands on this—that when television and radio came into existence, we built the BBC so that we would have a trusted source among those services. It kept us safe, and it also ended up projecting our influence around the world. We need once again to build the institutions or expand them and the infrastructure to provide digital services in our collective interest.

Jim McMahon

My hon. Friend is making a very good speech; maybe he should consider a career in TED Talks after this. A number of competitor platforms have been tried, such as Bluesky as an alternative to X, but the take-up is not sustained. I wonder whether the monopoly that some of these online platforms have is now so well embedded that people have become attached to them out of habit. As Members, we must all feel the tension at times about whether we should or should not be on some of these platforms.

There is a need for mainstream voices to occupy these spaces to ensure that we do not concede to extremes of any political spectrum, but we are always going to be disadvantaged if the algorithm drives towards those extremes and not to the mainstream. I just test the principle of an online BBC versus whether or not there should be a more level playing field for mainstream content on existing platforms.

Tom Collins

My hon. Friend is, of course, right. If we regulate for safety, we do not need to worry about the ecosystem needing good actors to displace it. At the same time, however, those good actors would have a competitive and valuable role to play, and I do not want to undervalue the currency of trust. Institutions such as the BBC are so robustly trustworthy that they have a unique value to offer, even if we do manage to create a safe ecosystem or market of online services.

I am convening a group of academics to start trying to build the models I discussed as the foundation for technical standards for safe digital products. I invite the Minister to engage the Department in this work. That is vital for the safety of each of us and our children as individuals, and for the security and resilience of our society. I also invite anybody in the technical space of academia or industry exploring some of these models and tools to get in touch with me if they see this debate and are interested.

Only by taking assertive action across all levels of technical, regulatory and legal governance can we ensure the safety of citizens. Only by expanding our institutions can we provide meaningful enforcement and design and build online products, tools and infrastructure. If we do those things, the internet will be more open, secure, private, valuable and accessible to all of us. Good regulation is the key to a safe and open internet.

--- Later in debate ---
Emily Darlington

I completely agree. As parents, we all want to be able to have those conversations, but because of the way the algorithms work, we do not see what they see. We say, “Yes, you can download this game, because it has a 4+ rating.” Who knows what a 4+ rating actually means? It has nothing to do with the BBFC ratings that we all grew up with and understand really well. Somebody else has decided what is all right and made up the 4+ rating.

For example, Roblox looks as if it is child-ready, but many people might not understand that it is a platform on which anyone can develop a game. Those games can involve grooming children and sexual violence; they are not all about the silly dances that children do in the schoolyard. That platform is inhabited by children and adults alike.

Jim McMahon

My hon. Friend does well to draw attention to the gaming world. When most of us think about online threats, we think about social media and messaging, but there are interactive ways of communicating in almost every game in existence, and that can happen across the world.

In Oldham, we have had a number of section 60 stop-and-search orders in place, because of the number of schoolchildren who have been carrying knives and dangerous weapons. Largely, that has been whipped up not in the classroom, but online, overnight, when children are winding each other up and making threats to each other. That has real-life consequences: children have been injured and, unfortunately, killed as a result of carrying weapons in our community. Does my hon. Friend share my concern that this threat is multifaceted, and that the legislation probably should not be so prescriptive for particular platforms at a point in time, but should have founding principles that can be far more agile as new technology comes on stream?

Emily Darlington

My hon. Friend raises two really important points. First, if we try to create legislation to address what companies do today, it will be out of date by the time that it passes through the two Houses. What we do must be done on the basis of principles, and I think a very good starting principle is that what is illegal offline should be illegal online. That is a pretty clear principle. Offline legislation has been robustly challenged over hundreds of years and got us to where we are with our freedom of speech, freedom of expression and freedom to congregate. All those things have been robustly tested by both Houses.

--- Later in debate ---
Victoria Collins (Harpenden and Berkhamsted) (LD)

It is a pleasure to serve under your chairmanship, Sir John. I congratulate the hon. Member for Sunderland Central (Lewis Atkinson), who made a very eloquent opening speech, and Members from across the Chamber, who have touched on really important matters.

As the hon. Member mentioned, the online space gives us great opportunities for connection and knowledge gathering, but also opportunities for greater harms. What has come across today is that we have addictive algorithms that are pushed in furtherance of commercial and malevolent interests—security interests, for example, although not the security of Great Britain—with no regard for the harm or impact they have on individuals or society.

When it comes to the Online Safety Act, we must get the balance right. Its protections for children and the vulnerable are vital. Of course, it is important to maintain freedom of speech and access to information. The Act is a step in the right direction in protecting children from extreme content, and we have seen changes in pornographic content. However, there are areas where it has not gone far enough, and it is not ready for the changes that are coming at a fast pace. There are websites that serve a public good that are age-gated, and forums for hobbies and communities that are being blocked. As the Liberal Democrats have said, we have to get the balance right. We also have to look at introducing something like a digital Bill of Rights with agile standards in the face of fast-paced changes, to embed safety by design at the base.

The harms that we need to protect children and vulnerable people from online are real. The contributions to this debate from hon. Members from across the House have been, as always, eye-opening and a reminder of how important this issue is. On pornographic content, we heard from the hon. Members for Morecambe and Lunesdale (Lizzi Collinge) and for Milton Keynes Central (Emily Darlington) sickening reminders of the horrific content online that young people see—and not by choice. We must never forget that, as has also been said, people are often not seeking this content, but it comes through, whether on X, which was Twitter, or other platforms. The Molly Rose Foundation highlighted that

“children using TikTok and X were more than twice as likely to have encountered…high risk content compared to users of other platforms.”

The online world coming to life has been mentioned in this debate. One of my constituents in Harpenden wrote to me, horrified that her daughter had been strangled on a dancefloor—an incident that shows how violent, graphic content is becoming normalised. That struck me to my core. Other content has also been mentioned: suicidal content, violent content and eating disorder misinformation, which the hon. Member for Worcester (Tom Collins) talked about so eloquently. The Molly Rose Foundation also highlighted that one in 10 harmful videos on TikTok have been viewed more than 1 million times, so we have young people seeing that extreme content.

Even beyond extreme content, we are starting to see the addictive nature of social media, and the insidious way that this short-form content is becoming such a normalised part of many of our lives. Recent polling by the Liberal Democrats revealed that 80% of parents reported negative behaviours in their child due to excess phone usage, including skipping meals, having difficulty sleeping, or reporting physical discomforts such as eye strain or headaches. Parents and teachers know the real harms that are coming through, but young people themselves do too. I carried out a safer screens tour in my constituency in which I spoke to young people. Many of them said that they are seeing extreme content that they do not want to see, and that, although they have blocked the content, it comes back. The Online Safety Act is helping to change that, but it has not gone far enough. The addictive element of social media is important. In our surveys, two quotes from young people stood out. One sixth-former said that social media is

“as addictive as a drug”,

and that they felt its negative effects every day. Another young person simply wrote, “Help, I can’t stop.” Young people are asking for help and protection; we need to hold social media giants and online spaces to account.

It is welcome that some of those harms have been tackled by the Online Safety Act. On pornography, Pornhub has seen a 77% reduction in visitors to its website; Ofcom has launched 76 investigations into pornography providers and issued one fine of £50,000 for failing to introduce age checks, but we need to ask whether that goes far enough. It has come across loud and clear in this debate that the Online Safety Act has not gone far enough. Analysis has shown that Instagram and TikTok have started to introduce new design features that comply with the Online Safety Act, but game the system to still put forward content that is in those companies’ commercial interests, and not in the interests of young people.

Other extremely important harms include the new harms from AI. Many more people are turning to AI for mental health support. Generative AI is creating graphic content, and the Internet Watch Foundation found that

“reports of AI-generated child sexual abuse material have more than doubled in the past year”

and the IWF says it is at the point where it cannot tell the difference any more—it is horrific.

Jim McMahon

The hon. Lady is making a very important point. It really concerns me to see just how desensitised young people or adults can become when they see that type of content, and that inhumane content is directly linked to misogyny and racism. While I know no Member of this House would say such a thing, outside this place I could imagine an argument being made that harm depicted in AI-generated content is not real harm, because the content in itself is not real and no real abuse has been carried out. However, does the hon. Lady share my concern that such content is incredibly harmful, and that there is a real danger that it entraps even more people down the very dark route to what is essentially child abuse and to further types of harm, which will then present in the real world in a way that I do not think even Parliament has yet registered? In a sense, this problem is becoming more and more of a public health crisis.

Victoria Collins

Absolutely. The insidious part of this issue is the normalisation of such harmful content. In a debate on Lords amendments to the then Data (Use and Access) Bill, on creatives and AI, I mentioned the fact that, in the two weeks since the previous vote, we had seen the release of Google Veo 3—the all-singing, all-dancing video creation software. We are moving so quickly that we do not see how good AI-generated content is becoming. Some content that we see online is probably AI-generated, but we do not realise it. On top of that, as the hon. Gentleman said, AI normalises extreme content and produces content that people think is real, but is not. That is very dangerous for society.

My next point concerns deepfakes, which are undermining trust. Some deepfakes are obvious; some Members of Parliament and news presenters have been targeted through deepfakes. Just as important, however, is the fact that much deepfake content seems normal, but is undermining trust in what we see—we do not know what is real and what is not any more. That is going to be very dangerous not only in terms of extreme content, but for our democracy, and that argument has been made by other Members in this debate.

It is also worrying that social media platforms do not seem to see that problem. To produce its risk assessment report, Ofcom analysed 104 platforms and asked them to put in submissions: not a single social media platform classified itself as high risk for suicide, eating disorder or depression content—yet much of what we have heard during this debate, including statistics and anecdotal stories, shows that that is just not true.

On the other hand, while there are areas where the Online Safety Act has not gone far enough, in other areas it has overstepped the mark. When the children's code came into force, Lord Clement-Jones and I wrote to the Secretary of State to outline some of our concerns, including political content being age-gated, educational sites such as Wikipedia being designated as category 1, and important forums about LGBTQ+ rights, sexual health or potentially sensitive topics being age-gated, despite being important for many who are learning about the world.

Jamie from Harpenden, a young person who relies heavily on the internet for education, found that a lot of the resources he was looking for were flagged as threatening to children and blocked, and he felt that that hindered his education. Age assurance systems also pose a problem for data protection and privacy. The intention behind this legislation was never to limit access to political or educational content, and it is important that we support access to the content that many rely on—but we must protect our children and vulnerable people online, and we must get that balance right.

I have a few questions for the Minister. Does he agree with the Liberal Democrats that we should have a cross-party Committee of both Houses of Parliament to review the Online Safety Act? Will he confirm what resources Ofcom has been given? Has analysis been conducted to ensure that Ofcom has enough resources to tackle these issues? What are the Government doing about AI labelling and watermarking? What are they doing to tackle deepfakes? Does the Minister agree that it is time to support the wellbeing of our children, rather than the pockets of big tech? Will the Minister support Liberal Democrat calls to increase the age of data consent and ban social media giants from collecting children’s data to power the addictive algorithms against them? We are calling for public health warnings on addictive social media for under-18s and for a doomscroll cap. Most important is a digital bill of rights and standards that, in light of the fast pace of change, need to be agile.

Our young people deserve better. We need to put children, young people and vulnerable people before the profits of big tech. We will not stop fighting until that change is made.

--- Later in debate ---
The Minister for Digital Government and Data (Ian Murray)

It is great to see you in the Chair, Sir John. I did not realise you were such a technophobe until we heard from the shadow Minister, the hon. Member for Hornchurch and Upminster (Julia Lopez). I am disappointed that you were not able to contribute to this debate. I thank my hon. Friend the Member for Sunderland Central (Lewis Atkinson) for moving the motion on behalf of the Petitions Committee, and I thank him and other speakers for their contributions.

I have not been on the RTG fans message board that my hon. Friend mentioned, but I am sure it has been very busy this weekend. I wondered if some of the trolls mentioned by the hon. Member for Bromley and Biggin Hill (Peter Fortune) were perhaps wearing black and white over the weekend. My hon. Friend the Member for Sunderland Central raised an important point, however: it is the site managers and volunteers who are hosting those forums, keeping them legitimate and working very hard to abide by the law.

Jambos Kickback is an important site for my football team, and many people use it to find out what is going on. It is run by volunteers with no money at all—just for the sheer love of being on the forum together—so I fully understand what the petitioner wants to bring forward. I thank my hon. Friend for the measured way in which he put forward the e-petition. He called for robust, effective and proportionate regulation, which is what the Government are trying to do through the Online Safety Act.

The shadow Minister highlighted that by going through the ledger of the positive and negative issues that the Government face, and indeed that were faced when her party was in government. The one thing on that ledger that is non-negotiable is the safety of children online—I think all hon. Members made that point; in fact, I am disappointed that those who do not make that point are not in this debate to try to win that argument, because I would be very interested to hear what they have to say.

The petition received over 550,000 signatures. Although I appreciate the concerns that it raised, I must reiterate the Government’s very strong response that we have no plans to repeal the Online Safety Act. Parents should know and be confident that their children—I am a father of two young girls, aged five years and ten months—are safe when they access popular online services and that they can benefit from the opportunities that the online world offers. That is why the Government are working closely with Ofcom to implement the Act as quickly and as effectively as possible to enable UK users to benefit from the Act’s protections.

This year, 2025, has been one of significant action on online safety. On 17 March the illegal harms codes of practice came into effect. Those codes will drive significant improvements in online safety in several areas. Services are now required to put in place measures to reduce the risk of their services facilitating illegal content and activity, including terrorism, child sexual abuse and exploitation, and other kinds of illegal activity.

I asked the officials for a list of the priority offences in the Act; there were 17, but that number has increased to 20, with the new Secretary of State at the Department adding some others. It is worth reading through them because it shows the problem and the scale of it. I was really struck by Members who talked about the real world and the online world: if any of these offences were happening in the real world, someone would be carted off to jail immediately rather than being allowed to continue to operate, as they do online.

The priority offences are assisted suicide; threats to kill; public order offences such as harassment, stalking and fear of provocation of violence; drugs and psychoactive substances; firearms and other weapons; assisted illegal immigration; human trafficking; sexual exploitation; sexual images; intimate images of children; proceeds of crime; fraud; financial services fraud; foreign interference; animal welfare; terrorism; and controlling or coercive behaviour. The new ones that have been added by the Secretary of State include self-harm, cyber-flashing and strangulation porn. Do we honestly have to write that into a schedule of an Online Safety Act to say that those things are unacceptable and should not be happening on our computers?

On 25 July, the child safety regime came into force. Services now use highly effective age assurance to prevent children in the UK from encountering pornography and content that encourages, promotes and provides instructions for self-harm, suicide or eating disorders. Platforms are also now legally required to put in place measures to protect children from other types of harmful content, including abusive or hateful content, or bullying and violent content.

When we visited schools, we spoke to headteachers, teachers and parents about the real problem that schools have in trying to deal with the bullying effects of social media. According to Ofcom’s 4 December report that some hon. Members have referenced already, many services now deploy age checks, including the top 10 most popular pornographic sites, the UK’s most popular dating apps and a wide range of other services, including X, Telegram, Reddit, TikTok, Bluesky, Discord, Xbox and Steam. This represents a safer online experience for millions of children across the UK; we have heard that it is already having an impact.

The Government recognise, however, the importance of implementing the duties proportionately. That is why proportionality is a core principle of the Act and is built into many of the duties contained within it. Ofcom’s illegal content and child safety codes of practice set out recommended measures that are tailored to both size and risk to help providers to comply with their obligations—it is really important to emphasise that. When recommending steps that providers can take to comply with their duties, Ofcom must consider the size and risk level of different types and kinds of services.

Let me just concentrate on that for a minute. For instance, Ofcom recommends user blocking and muting measures to help to protect children from harmful content, including bullying, violent content and other harmful materials, and those recommendations are tailored to services’ size and risk profile. Specifically, Ofcom recommends that all services that are high risk for this content need to implement those measures in full. However, for services that are medium risk for this content, Ofcom suggests that they need to implement the measures only if they have more than 700,000 users.
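
As a sketch, the proportionality rule the Minister describes reduces to something like the following; the risk tiers and the 700,000 figure are from the speech, while the function shape and the user-count basis are assumptions:

```python
# Proportionality logic for user blocking and muting measures.
def must_implement_blocking(risk_level: str, uk_users: int) -> bool:
    if risk_level == "high":
        return True                 # high risk: implement in full, any size
    if risk_level == "medium":
        return uk_users > 700_000   # medium risk: only above the threshold
    return False                    # low risk: not expected in full

assert must_implement_blocking("high", 10_000)
assert not must_implement_blocking("medium", 500_000)
assert must_implement_blocking("medium", 1_000_000)
```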

However, while many services carry low risks of harm, risk assessment duties are key to ensuring that risky services of all sizes do not slip through the net of regulation. For example, the Government are very concerned about small platforms that host the most harmful content, such as forums dedicated to encouraging suicide or self-harm. Exempting all small services from duties requiring them to tackle that type of content would mean that those forums would not be subject to the Act’s enforcement powers, which is why we reject the petitioner’s views. Even forums that might seem harmless carry potential risks, such as where adults can engage directly with child users.

The Government recognise the importance of ensuring that low-risk services do not have unnecessary regulatory burdens placed upon them, which I hope reassures the shadow Minister. That is why, in the statement of strategic priorities issued on 2 July, the Government set out our expectation that Ofcom should continue focusing its efforts on safety improvements among services that pose the highest risk of harm to users, including small but risky services. The Government also made it explicitly clear that Ofcom should ensure that expectations on low-risk services are proportionate.

Alongside proportionate implementation of the Act, the Government also understand the need to communicate the new regulations effectively, and to work with companies within its scope to ensure that compliance is as easy as possible. To deliver that, Ofcom is providing support to online service providers of all sizes to make it easier for them to understand and comply with their responsibilities under the UK’s new online safety laws. For example, Ofcom has already launched a regulation checker to help firms to check whether they are covered by the new rules, as well as a number of quick guides for them.

I will address some of the issues raised by Members. My right hon. Friend the Member for Oxford East (Anneliese Dodds) started by raising the issue of pornography and other harmful content. User-to-user services that allow pornographic content, and content that promotes, provides instructions for or encourages suicide, self-harm or eating disorders, must use highly effective age assurance to prevent all children under 18 from accessing that type of content.

Services must take proportionate steps to minimise the risk of children encountering that type of content when using them, and they must also put in place age assurance measures to protect children from harmful content, such as bullying and violent content. Ofcom’s “Protection of Children Codes of Practice” set out what steps services can take to comply, and Ofcom has robust enforcement powers available to use against companies that fail to fulfil those important duties. We are already seeing that enforcement happening, with 6,000 sites having taken action to stop children from seeing harmful content, primarily via age checks. That shows the scale of the issue.

Virtual private networks have also been mentioned by a number of Members, including the shadow Minister. Following the introduction of the child safety duties in July, Ofcom reported that UK daily active users of VPN apps temporarily doubled to around 1.5 million—the average is normally about 750,000. Since then, usage has dropped, falling back down to around 1 million daily users by the end of September. That was expected, and it has also happened in other jurisdictions that have introduced age checks. According to an Ofcom rule, services should

“take appropriate steps to mitigate against methods of circumvention that are easily accessible to children”.

If a provider is not complying with the age assurance duties—for example, by promoting VPN usage to bypass age assurance methods—Ofcom can and should take enforcement action. The use of VPNs does not exempt platforms from complying with the Act itself.

Jim McMahon

The Minister has done a huge amount of work on this issue, which I am sure is appreciated by everyone in this House. It cannot be beyond the wit of man to deal with these VPN companies, which bridge between the service user and the ultimate website or platform that they are viewing, so why are VPNs not in scope of the legislation, to ensure that they too are compliant with the age verification measures? Presumably, it is more difficult for the end website to know the origins of a user who has come via a VPN. Surely the onus should be on the VPN company to comply with the law as well.

Ian Murray

My hon. Friend makes a good point; let me come back to him in detail on the VPN issue, as his question relates to what we are planning to do in our review of the Online Safety Act, including both what was written into the legislation and what was not.

My hon. Friend the Member for Darlington (Lola McEvoy), who is no longer in her place, highlighted the really important issue of chatbots, which has also been mentioned by a number of other Members. Generative AI services, including chatbots that allow users to share content with one another or that search live websites to provide results, are already regulated under the Online Safety Act. Those services must protect users from illegal content and children from harmful and age-inappropriate content.

--- Later in debate ---
Ian Murray

I thank my hon. Friend for the work that she does on that Committee. Of course, the Government have to respond in detail to such reports and we look forward to the recommendations it brings forward. Often we see conspiracy theories in the online world, but there is no conspiracy theory here: the Government are not trying to defend a position against what evidence might come forward.

We have just signed a memorandum of understanding with Australia to look at its experience of protecting children online and whether there are things that we can do in this country. It has to be evidence-based, and if the evidence base is there, we will certainly make sure that we act, because it is non-negotiable that we protect young people and children online.

Jim McMahon

I think there is no disagreement on the protection of children, and no disagreement on what we have legislated to be illegal content. More debate is needed on content that is harmful but not illegal—where that line sits and what we enforce—and on the protections for those who are not children, particularly vulnerable users and those who are being exploited and drawn into some quite extreme behaviours.

I will be honest about where some of these tensions are. How confident will the UK Government be in entering into negotiations on this when we are in the position we are in on trade with the US? The US has also made it clear that it sees any further regulation of social media platforms as an infringement on trade and freedom of speech. When it comes to making that call, where will the UK Government be?

Ian Murray

My hon. Friend makes an important point, because freedom of expression is guaranteed in the Act. Although we are regulating to make sure that children and young people are protected online, he is right to suggest that that does not mean we are censoring content for adults. The internet is a place where people can access content if they are age-verified to do so, but it cannot be illegal content. The list of issues in schedule 7 to the Act that I read out at the start of my speech is pretty clear on what someone is not allowed to do online, so any illegal content online still remains illegal. We need to work clearly with the online platforms to make sure that it is not being purveyed through them.

We have seen strong examples of this issue in recent months. If we reflect back to Southport, the public turned to local newspapers—we have discussed this many times before—because they wanted fast and regular but trustworthy news. They turned away from social media channels to get the proper story, and they knew they could trust the local newspaper that they were able to pick up and read. I think the public have a very strong understanding of where we are, but I take the point about people who are not as tech-savvy or are impaired in some way, and so may need further protections. My hon. Friend makes the argument very strongly.

I want to turn to AI chatbots, because they were mentioned in terms of mental health. We are clear that AI must not replace trained professionals. The Government’s 10-year health plan lays foundations for a digital front door for mental health care. Last month, the Secretary of State for Science, Innovation and Technology urged Ofcom to use existing powers to protect children from the potential harms of AI chatbots. She is clear that she is considering what more needs to be done. The Department of Health and Social Care is looking at mental health through the 10-year plan, but the Secretary of State for Science, Innovation and Technology has also been clear that she will not allow AI chatbots to affect young people’s mental health, and will address their development, as mentioned by the Liberal Democrat spokesperson, the hon. Member for Harpenden and Berkhamsted (Victoria Collins).

Let me touch on freedom of expression, because it is important to balance that out. It is on the other side of the shadow Minister’s ledger, and rightly so, because safeguards to protect freedom of expression and privacy are built in throughout the Online Safety Act. Services must consider how to protect users’ rights when applying safety measures, including users’ rights to express themselves freely. Providers do not need to take action on content that is beneficial to children—only against content that poses a risk of harm to children on their services. The Act does not prevent adults from seeking out legal content, and does not restrict people posting legal content that others of opposing views may find offensive. There is no removing of freedom of speech. It is a cornerstone of this Government, and under the Act, platforms have duties to protect freedom of speech. It is written into legislation.

Let me reiterate: the Online Safety Act does not limit freedom of speech. In fact, it protects it. My hon. Friend the Member for Worcester (Tom Collins) was clear when he said in his wonderful speech that making the internet a safe space promotes freedom of speech. Indeed it does, because it allows us to have the confidence that we can use online social media platforms, trust what we are reading and seeing, and know that our children are exposed to age-appropriate content.

I will address age assurance, which was mentioned by the hon. Member for Dewsbury and Batley (Iqbal Mohamed). Ofcom is required to produce a report on the use of age assurance technologies, including their effectiveness, due in July 2026—so in seven months’ time. That allows sufficient time for these measures to bed in before further action is considered, but the Government continue to monitor the impact of circumvention techniques such as VPNs and the effectiveness of the Act in protecting children. We will not hesitate to go further if necessary, but we are due that report in July 2026, which will be 12 months from the implementation of the measures.

The Liberal Democrat spokesperson asked about reviewing the Act. My previous comments covered some of that, but it is critical that we understand how effective the online safety regime is, and monitoring and evaluating that is key. My Department, Ofcom and the Home Office have developed a framework to monitor the implementation of the Act and evaluate the core outcomes from it.

--- Later in debate ---
Ian Murray

I take that point about the amendment that the Liberal Democrats tabled.

The hon. Lady also asked for a cross-party Committee to take action. I have already talked about the review of the implementation of the regulations that will happen in July and the other stages after that, as well as the post-implementation review. Of course, setting up a new Committee is a matter for the House. I have no objections to the House setting up Committees to look at these big and important issues that we all care about, if that is what it decides to do.

My hon. Friend the Member for Worcester talked about the issue of Parliament and engagement. He asked whether the Department would engage with the group of academics he mentioned, who are looking at technical safety standards for social media, including considering what role those academics could play in relation to these provisions. I welcome his invitation and I am sure that the Minister responsible for this area—the Under-Secretary of State for Science, Innovation and Technology, my hon. Friend the Member for Vale of Glamorgan (Kanishka Narayan)—would be delighted to participate in those talks. I am sure that he will be in touch with my hon. Friend the Member for Worcester to take him up on that offer.

We have heard about algorithms, so it is worth concentrating on them. Hon. Friends have talked about the algorithms that serve harmful content. The Government have been clear that algorithms can affect the risk of harm for children, which is why the legislation comprehensively covers them. The legislation requires providers to consider, via risk assessment, how algorithms could affect children’s exposure to illegal or harmful content, and providers must then take steps to mitigate those risks. If they do not do so, Ofcom has powers that it can use.

Jim McMahon

There needs to be a tie-in here with the Cabinet Office and the review of electoral law. If a kind donor in my constituency owned a big billboard and gave me absolute free use of it during an election period, but made an offer to any other party that they could put a Post-it note on the back of it that nobody would see, I would have been expected to declare that as a gift in kind, or a donation in kind. That is not the case with algorithms that are posting and promoting generally right-wing and far-right content during the regulated period. Surely there has to be a better join-up here of election law and online law.

Ian Murray

This is a huge issue, and all of us in this House are very concerned about misinformation and disinformation and their impact on our democracy. Indeed, I am sure that in the time I have been speaking here in Westminster Hall, my own social media will have been full of bots and all sorts of other things that try to encourage people to get involved in this debate, in order to influence the algorithm. That can fundamentally disturb our democracy, and it is something we are looking at very closely. The Cabinet Office and my Department are looking at the misinformation and disinformation issue, as is the Department for Culture, Media and Sport in terms of the media landscape and how elections are run in this country. We should all be very clear about not having our democratic processes undermined by algorithmic platforms that serve up the kind of content that provides misinformation and disinformation to the public.