Online Safety Act 2023: Repeal Debate
Westminster Hall
Victoria Collins (Harpenden and Berkhamsted) (LD)
It is a pleasure to serve under your chairmanship, Sir John. I congratulate the hon. Member for Sunderland Central (Lewis Atkinson), who made a very eloquent opening speech, and Members from across the Chamber, who have touched on really important matters.
As the hon. Member mentioned, the online space gives us great opportunities for connection and knowledge gathering, but also opportunities for greater harms. What has come across today is that we have addictive algorithms that are pushed in furtherance of commercial and malevolent interests—security interests, for example, although not the security of Great Britain—with no regard for the harm or impact they have on individuals or society.
When it comes to the Online Safety Act, we must get the balance right. Its protections for children and the vulnerable are vital. Of course, it is important to maintain freedom of speech and access to information. The Act is a step in the right direction in protecting children from extreme content, and we have seen changes in pornographic content. However, there are areas where it has not gone far enough, and it is not ready for the changes that are coming at a fast pace. There are websites that serve a public good that are age-gated, and forums for hobbies and communities that are being blocked. As the Liberal Democrats have said, we have to get the balance right. We also have to look at introducing something like a digital Bill of Rights with agile standards in the face of fast-paced changes, to embed safety by design at the base.
The harms that we need to protect children and vulnerable people from online are real. The contributions to this debate from hon. Members from across the House have been, as always, eye-opening and a reminder of how important this issue is. On pornographic content, the hon. Members for Morecambe and Lunesdale (Lizzi Collinge) and for Milton Keynes Central (Emily Darlington) gave sickening reminders of the horrific content online that young people see—and not by choice. We must never forget that, as has also been said, people are often not seeking this content, but it comes through, whether on X, formerly Twitter, or other platforms. The Molly Rose Foundation highlighted that
“children using TikTok and X were more than twice as likely to have encountered…high risk content compared to users of other platforms.”
The online world coming to life has been mentioned in this debate. One of my constituents in Harpenden wrote to me, horrified that her daughter had been strangled on a dancefloor, because it showed how violent, graphic content is becoming normalised. That struck me to my core. Other content has also been mentioned: suicidal content, violent content and eating disorder misinformation, which the hon. Member for Worcester (Tom Collins) talked about so eloquently. The Molly Rose Foundation also highlighted that one in 10 harmful videos on TikTok have been viewed more than 1 million times, so we have young people seeing that extreme content.
Even beyond extreme content, we are starting to see the addictive nature of social media, and the insidious way that this short-form content is becoming such a normalised part of many of our lives. Recent polling by the Liberal Democrats revealed that 80% of parents reported negative behaviours in their child due to excess phone usage, including skipping meals, having difficulty sleeping, or reporting physical discomforts such as eye strain or headaches. Parents and teachers know the real harms that are coming through, but young people themselves do too. I carried out a safer screens tour in my constituency in which I spoke to young people. Many of them said that they are seeing extreme content that they do not want to see, and that, although they have blocked the content, it comes back. The Online Safety Act is helping to change that, but it has not gone far enough. The addictive element of social media is important. In our surveys, two quotes from young people stood out. One sixth-former said that social media is
“as addictive as a drug”,
and that they felt its negative effects every day. Another young person simply wrote, “Help, I can’t stop.” Young people are asking for help and protection; we need to hold social media giants and online spaces to account.
It is welcome that some of those harms have been tackled by the Online Safety Act. On pornography, Pornhub has seen a 77% reduction in visitors to its website; Ofcom has launched 76 investigations into pornography providers and issued one fine of £50,000 for failing to introduce age checks, but we need to ask whether that goes far enough. It has come across loud and clear in this debate that the Online Safety Act has not gone far enough. Analysis has shown that Instagram and TikTok have started to introduce new design features that comply with the Online Safety Act, but game the system to still put forward content that is in those companies’ commercial interests, and not in the interests of young people.
Other extremely important harms include the new harms from AI. Many more people are turning to AI for mental health support. Generative AI is creating graphic content, and the Internet Watch Foundation found that
“reports of AI-generated child sexual abuse material have more than doubled in the past year”
and the IWF says it has reached the point where it cannot tell the difference between AI-generated and real imagery any more—it is horrific.
The hon. Lady is making a very important point. It really concerns me to see just how desensitised young people or adults can become when they see that type of content, and that inhumane content is directly linked to misogyny and racism. While I know no Member of this House would say such a thing, outside this place I could imagine an argument being made that harm depicted in AI-generated content is not real harm, because the content in itself is not real and no real abuse has been carried out. However, does the hon. Lady share my concern that such content is incredibly harmful, and that there is a real danger that it entraps even more people down the very dark route to what is essentially child abuse and to further types of harm, which will then present in the real world in a way that I do not think even Parliament has yet registered? In a sense, this problem is becoming more and more of a public health crisis.
Victoria Collins
Absolutely. The insidious part of this issue is the normalisation of such harmful content. In a debate on Lords amendments to the then Data (Use and Access) Bill, on creatives and AI, I mentioned the fact that, in the two weeks since the previous vote, we had seen the release of Google Veo 3—the all-singing, all-dancing video creation software. We are moving so quickly that we do not see how good AI-generated content is becoming. Some content that we see online is probably AI-generated, but we do not realise it. On top of that, as the hon. Gentleman said, AI normalises extreme content and produces content that people think is real, but is not. That is very dangerous for society.
My next point concerns deepfakes, which are undermining trust. Some deepfakes are obvious; some Members of Parliament and news presenters have been targeted through deepfakes. Just as important, however, is the fact that much deepfake content seems normal, but is undermining trust in what we see—we do not know what is real and what is not any more. That is going to be very dangerous not only in terms of extreme content, but for our democracy, and that argument has been made by other Members in this debate.
It is also worrying that social media platforms do not seem to see that problem. To produce its risk assessment report, Ofcom analysed 104 platforms and asked them to put in submissions: not a single social media platform classified itself as high risk for suicide, eating disorder or depression content—yet much of what we have heard during this debate, including statistics and anecdotal stories, shows that that is just not true.
On the other hand, while there are areas where the Online Safety Act has not gone far enough, in other areas it has overstepped the mark. When the children’s code came into force, Lord Clement-Jones and I wrote to the Secretary of State to outline some of our concerns, including political content being age-gated, educational sites such as Wikipedia being designated as category 1, and important forums about LGBTQ+ rights, sexual health or potentially sensitive topics being age-gated, despite being important for many who are learning about the world.
Jamie from Harpenden, a young person who relies heavily on the internet for education, found when he was looking for resources that a lot of them were flagged as threatening to children and blocked, and felt that this hindered his education. Age assurance systems also pose a problem for data protection and privacy. The intention behind this legislation was never to limit access to political or educational content, and it is important that we support access to the content that many rely on—but we must protect our children and vulnerable people online, and we must get that balance right.
I have a few questions for the Minister. Does he agree with the Liberal Democrats that we should have a cross-party Committee of both Houses of Parliament to review the Online Safety Act? Will he confirm what resources Ofcom has been given? Has analysis been conducted to ensure that Ofcom has enough resources to tackle these issues? What are the Government doing about AI labelling and watermarking? What are they doing to tackle deepfakes? Does the Minister agree that it is time to support the wellbeing of our children, rather than the pockets of big tech? Will the Minister support Liberal Democrat calls to increase the age of data consent and ban social media giants from collecting children’s data to power the addictive algorithms against them? We are calling for public health warnings on addictive social media for under-18s and for a doomscroll cap. Most important is a digital Bill of Rights, with standards that are agile in the face of fast-paced change.
Our young people deserve better. We need to put children, young people and vulnerable people before the profits of big tech. We will not stop fighting until that change is made.
My hon. Friend makes a good point; let me come back to him in detail on the VPN issue, as his question relates to what we are planning to do in our review of the Online Safety Act, including both what was written into the legislation and what was not.
My hon. Friend the Member for Darlington (Lola McEvoy), who is no longer in her place, highlighted the really important issue of chatbots, which has also been mentioned by a number of other Members. Generative AI services, including chatbots that allow users to share content with one another or that search live websites in the way a search engine does, are already regulated under the Online Safety Act. Those services must protect users from illegal content and children from harmful and age-inappropriate content.
Victoria Collins
Ofcom has said, and my understanding is, that in certain circumstances AI chatbots are covered, but certain new harms—such as emotional dependence—are not. That is an area where the House and many people are asking for clarity.
I do not disagree with the hon. Lady. There are a whole host of issues around porn bots and AI-generated bots that have now also sprung up. We are committed to the Online Safety Act and to reviewing it as it is implemented. As technology moves on quickly, we have to keep pace with what the harms are and how we are able to deal with them. I thank the hon. Lady for raising those particular issues.
We will act on the evidence that comes forward. It is clear that if the evidence shows us that we have to act in various areas, including chatbots, we will do so. The Secretary of State announced plans to support a child safety summit in 2026, which will bring together tech companies, civil society and young people to shape how AI can benefit children and to examine online harms and developments in tackling them.
Sir John, you are indeed very kind. My hon. Friend gave two examples during his speech. First, he mentioned brakes that were available only for high-end and expensive cars, and are now on all cars. Secondly, he mentioned building regulations, and how we would not build a balcony without a barrier. Those examples seem fairly obvious and almost flippant, but it seems strange that we would regulate heavily to make sure that people are safe physically—nobody would ever argue that it would be a complete disregard of people’s freedom to have a barrier on an 18th-floor balcony—but not online. We do that to keep people safe, and particularly to keep children safe. As my hon. Friend said, if we are keeping adults safe, we are ultimately keeping children safe too.
We have to continue to monitor and evaluate. I was just about to come on to the post-implementation review of the Act, which I am sure my hon. Friend will be very keen to have an input into. The Secretary of State must complete a review of the online safety regime two to five years after part 3 of the Act, which is about duties of care, fully comes into force. The review will therefore be completed no sooner than 2029. These are long timescales, of course, and technology is moving, so I understand the point that he is making. I recall that in the Parliament from 2010 to 2015, we regulated for the telephone, so we move slowly, although we understand that we also have to be nimble in how we legislate.
The Lib Dem spokesperson, the hon. Member for Harpenden and Berkhamsted, asked whether the Act has gone far enough. Ofcom, the regulator, is taking an iterative approach and will strengthen codes of practice as online harms, technology and the evidence evolve. We are already making improvements, for example strengthening the law to tackle self-harm, cyber-flashing and strangulation. The hon. Lady also asked whether Ofcom has received an increase in resources. It has—Ofcom spending has increased by nearly 30% in the past year, in recognition of its increased responsibilities. She also asked about a digital age of consent. As I mentioned, we have signed a memorandum of understanding with Australia and will engage with Australia to understand its approach. Any action will be based, of course, on robust evidence.
Victoria Collins
I would just like to clarify that I made a call for an age of data consent. We put that forward earlier this year as an amendment to the Act. A very first step is to stop social media companies harvesting data and using it to power these addictive algorithms against young people. It is about raising the age of data consent to 16. Then, of course, there is the wider discussion about what is happening with social media in general, but that age of data consent is our first call to action.