Draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025

Tuesday 18th November 2025

General Committees
The Committee consisted of the following Members:
Chair: Martin Vickers
† Baines, David (St Helens North) (Lab)
† Bloore, Chris (Redditch) (Lab)
† Buckley, Julia (Shrewsbury) (Lab)
† Collins, Victoria (Harpenden and Berkhamsted) (LD)
Craft, Jen (Thurrock) (Lab)
† Curtis, Chris (Milton Keynes North) (Lab)
† Edwards, Lauren (Rochester and Strood) (Lab)
† Fortune, Peter (Bromley and Biggin Hill) (Con)
† Gill, Preet Kaur (Birmingham Edgbaston) (Lab/Co-op)
† Glindon, Mary (Newcastle upon Tyne East and Wallsend) (Lab)
† Jopp, Lincoln (Spelthorne) (Con)
† Narayan, Kanishka (Parliamentary Under-Secretary of State for Science, Innovation and Technology)
Sabine, Anna (Frome and East Somerset) (LD)
† Spencer, Dr Ben (Runnymede and Weybridge) (Con)
† Stuart, Graham (Beverley and Holderness) (Con)
† Vaughan, Tony (Folkestone and Hythe) (Lab)
† Wakeford, Christian (Lord Commissioner of His Majesty’s Treasury)
Kevin Maddison, Committee Clerk
† attended the Committee
Second Delegated Legislation Committee
Tuesday 18 November 2025
[Martin Vickers in the Chair]
Draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025
12:45
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)

I beg to move,

That the Committee has considered the draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025.

It is a pleasure to serve under your chairmanship, Mr Vickers. The draft regulations were laid before the House on 21 October. Before I proceed, I draw the Committee’s attention to the correction slip that was issued for the regulations in October. It relates to minor drafting changes in respect of the date of the Sexual Offences Act 2003 in the explanatory memorandum and the order of the words in the title of the offence inserted by paragraph (2) of regulation 2.

The Government have committed to taking decisive action against the most severe and damaging online harms. Through this statutory instrument, we are strengthening the Online Safety Act 2023 by creating new priority offences to tackle cyber-flashing and self-harm. This will ensure that platforms take stronger, more proactive steps to protect users from these harms.

There is compelling evidence that cyber-flashing and content encouraging self-harm are widespread and cause serious harm to individuals. The frequency of these harms is significantly higher among young age groups: of those aged 18 to 24, 9% had experienced cyber-flashing and 7% had experienced content encouraging self-harm. That means that across the country around 530,000 people in that age group have seen cyber-flashing and around 450,000 have seen self-harm content. That is clearly unacceptable.

Some 27% of UK users who were exposed to cyber-flashing reported significant emotional discomfort, and exposure to self-harm content has been shown to worsen mental health. A 2019 study found that 64% of Instagram users in the US who were exposed to self-harm content were deeply emotionally disturbed by it, and a 2018 study found that 8% of adults and 26% of children aged eight to 18 who were hospitalised after self-harming had encountered self-harm or suicide-related content online. Those figures demonstrate that the content is not isolated but widespread. It affects a significant portion of the online population.

As Members will know, the Online Safety Act, which received Royal Assent on 26 October 2023, places strong duties on platforms and services to protect users. Providers must assess how likely their services are to expose users to illegal content or to be used to commit or facilitate priority offences. Providers then need to take steps to mitigate the identified risks, including by implementing safety-by-design measures to reduce risks and content moderation systems to remove illegal content when it appears. The Act sets out a list of priority offences for the purposes of providers’ illegal content duties. Those relate primarily to the most serious and prevalent online illegal content and activity. Platforms need to take additional steps to tackle such illegal activity under their illegal content duties.

The draft regulations will add cyber-flashing and content encouraging self-harm to the list of priority offences under the Act. The offences are currently covered under the Act’s general illegal content duties, but without priority status. Without that status, platforms are not obliged to carry out specific risk assessments for harm to users that derives from this kind of harmful content or to put in place measures to prevent users from seeing such content in the first place. Stakeholders have welcomed the additions. Charities such as the Molly Rose Foundation and Samaritans have long campaigned for strengthened protections for vulnerable users.

The changes to the Act will take effect 21 days after the regulations are made, which can happen once the regulations have been approved by both Houses. Ofcom, as the online safety regulator, sets out in codes of practice the measures that providers can take to fulfil their statutory illegal-content duties. The safety duties on providers to prioritise tackling self-harm and cyber-flashing will take full effect once Ofcom updates its codes with the measures that can be taken to fulfil those duties.

We anticipate that Ofcom will recommend that providers take action in a number of areas, which could include content moderation, reporting and complaints procedures, and safety-by-design steps such as testing algorithmic systems to see whether illegal content is being recommended to users. Where providers fail to meet the duties, such as by not having proportionate measures to remove and proactively prevent this vile material from appearing on their platforms, Ofcom has robust powers to take enforcement action against them, including the power to impose fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is higher.

The statutory instrument upgrades cyber-flashing and self-harm content to priority status, thereby strengthening the impact of the Online Safety Act and protecting users from such content. Service providers will be required to take more proactive and robust action to protect users from this kind of illegal content, to remove it and to limit exposure to it. That will ensure that platforms take stronger steps to protect users, reduce the prevalence of these behaviours online and help to make the internet a safer place for everyone.

14:34
Dr Ben Spencer (Runnymede and Weybridge) (Con)

It is a pleasure to serve under your chairmanship, Mr Vickers.

This statutory instrument represents an important development in the obligations on platforms regulated under the Online Safety Act to protect people from encountering illegal content online. The OSA was enacted by the last Government with the primary aim of safeguarding children and removing serious illegal material from the internet. Tackling the most harmful content, such as that which is the subject of today’s discussion, goes to the heart of the Online Safety Act’s aims. His Majesty’s Opposition therefore welcome and support the draft regulations.

The experiences and opportunities offered by the online world change rapidly. It is right that legislators are responsive when new risks emerge or when certain types of unlawful content proliferate on the internet. Under the last Government, the OSA amended the Sexual Offences Act 2003 to criminalise several forms of sexual misconduct and abusive behaviour online. The new offences included cyber-flashing and the sharing of or threatening to share intimate images without consent. The amendments were made to keep pace with novel threats and forms of abuse, the victims of which are too often women and girls.

Baroness Bertin’s independent review of pornography, which was published in February this year, highlighted the damaging impact on victims of intimate image abuse, ranging from physical illness to mental health effects such as anxiety, depression, post-traumatic stress disorder and suicidal thoughts. The effects of cyber-flashing and intimate image abuse on victims are severe. It is therefore right that this statutory instrument brings cyber-flashing within the scope of the priority offences in schedule 7 to the Online Safety Act, while retaining as a priority offence the sharing of or threatening to share intimate images.

We also strongly support the addition as a priority offence of encouraging or assisting serious self-harm, which is the other important component of this statutory instrument. Desperate people who contemplate self-harm need early intervention and support, not encouragement to self-harm. Under this SI, regulated services will be obliged to proactively remove the material when they become aware of it on their platforms and to take measures to prevent it from appearing in the first place. One can only wonder why it has taken so long to get to this position. I am sure there will be a unanimous view, not only in the House but in society, on the importance of removing such material.

The regulations will work only if they are adopted by the industry and subject to rigorous oversight, coupled with enforcement when platforms fail in their obligations. That is a necessity, and why we had to introduce the Online Safety Act in the first place. It is right that Government regulators should look to identify obstacles to the implementation of the OSA and take action where necessary. Since the introduction of Ofcom’s protection of children codes in the summer, important questions have arisen around the use of virtual private networks to circumvent age verification, as well as data security and privacy in the age-verification process.

Peter Fortune (Bromley and Biggin Hill) (Con)

On that point, does my hon. Friend the shadow Minister agree that we need to give some thought to the rise of chatbots and their nefarious activity, especially where they encourage self-harm or encourage children to do worse?

Dr Spencer

I thank my hon. Friend for his question on a very important point, which was raised just last week in Department for Science, Innovation and Technology questions by my hon. Friend the Member for Harrow East (Bob Blackman) and others. The Lib Dem spokesperson, the hon. Member for Harpenden and Berkhamsted, also raised questions about the scope of the regulation of chatbots.

The Government seem all over the place on whether large language models, as we understand them, and the content they generate come within the scope of regulation. Given the response we received last week, it would be helpful to have some clarity from the Minister. Does he believe that LLMs are covered by the OSA when it comes to material encouraging self-harm? If there is a gap, what is he going to do about it? I recognise that he is commissioning Ofcom to look at the issue, but in his view, right now, is there a gap that someone will need to fix? What are his reflections on that? This is increasingly becoming a priority area that we need to resolve. If there is a gap in the legislation, we need to get on and sort it.

14:39
Victoria Collins (Harpenden and Berkhamsted) (LD)

It is a pleasure to serve under your chairmanship, Mr Vickers. The Liberal Democrats support this statutory instrument, which updates the Online Safety Act’s priority offences to reflect changes in intimate image abuse law. It is absolutely right to tackle the non-consensual sharing of intimate photographs and films, and to tackle self-harm.

However, this is also an important opportunity to say that the Act must go further still. The Internet Watch Foundation reminds us that it is not currently illegal to retain, re-upload or trade abusive intimate image material long after its initial distribution. The Molly Rose Foundation and Samaritans have raised the issue of self-harm, and I am pleased to hear that being addressed today, but the point about AI chatbots is really important. As I mentioned in DSIT questions, the legislation on user-to-user services and search seems pretty clear, but what about one-to-one chatbots, where there is a single user? It is not clear who is accountable when self-harm content comes through chatbots that are not user-to-user. I appreciate that the Minister said the Department is looking into that issue with Ofcom.

The Act must also go further to address emerging online threats. The Internet Watch Foundation also reports that intimate images online are increasingly generated by deepfake AI, and that expert analysis now struggles to distinguish AI-generated content from real images or videos. At the beginning of this year alone, the IWF found 1,200 photorealistic videos of child sexual abuse material online. The Online Safety Act must do more to hold big tech companies to account, and to protect users from intimate image abuse at source, both real and AI-generated. Importantly, it must also tackle self-harm that is linked to AI chatbots, which are increasingly used by people of all ages.

Although this statutory instrument is a step forward, we need regulation that keeps pace with the rapidly evolving technology, not just changes in statute. We must ensure that Ofcom is sufficiently equipped and resourced to deal with emerging technologies. Will the Minister confirm what assessment has been done of the adequacy of Ofcom’s resourcing to ensure that this statutory instrument and the Online Safety Act can be applied and enforced in this fast-moving environment? When can we expect updates on AI chatbots and the scope of regulation? Will the Minister also confirm what the Government are doing to effectively regulate deepfake intimate content? What steps are being taken to hold tech companies to account for the continued harm facing children, vulnerable people and, given that experts can no longer differentiate between deepfake and real images, all internet users?

14:42
Kanishka Narayan

I thank Committee members for their valuable contributions to the debate. The update in the regulations will bring us closer to achieving the Government’s commitments to improve online safety and strengthen protection for women and girls online. We believe that updating the priority offences list with the new cyber-flashing and self-harm content offences is the correct, proportionate and evidence-led approach to tackling this type of content, and it will provide stronger protections for online users.

I will now respond to the questions asked in the debate; I thank Members for the tone and substance of their contributions. The shadow Minister, the hon. Member for Runnymede and Weybridge, raised the use of VPNs. As I mentioned previously in the House, apart from an initial spike, we have seen a significant levelling-off in the usage of VPNs, which points to the likely effectiveness of the age-assurance measures. We have commissioned further evidence on that front, and I hope to bring it to the House’s attention at the earliest opportunity.

The question of chatbots was raised by the shadow Minister, by the hon. Member for Bromley and Biggin Hill, and by the Liberal Democrat spokesperson, the hon. Member for Harpenden and Berkhamsted. Let me first clarify what I previously mentioned in the House: the legislation covers not only chatbots that allow user-to-user engagement but those that involve one-to-AI engagement and live search. That is extensive coverage of chatbots—both those types are within scope of the Online Safety Act.

There may be further gaps in the Act that pertain to aspects of the risks that Members have raised, and the Secretary of State has commissioned further work to ensure that we keep up with fast-changing technology. A number of the LLMs in question are covered by the Act, given the parameters that I have just defined. Of course, we will continue to review the situation, as both scope and risk need to evolve together.

Dr Spencer

I hope the Minister takes this in a constructive spirit. Concerns have been raised across the House as to the scope of the OSA when it comes to LLMs and the different types and variations of chatbots, which many people are using right now. Is he not concerned that he, as the Minister, and his Department are unable to say at the Dispatch Box whether they believe LLMs are completely covered by the scope of the OSA? Has he received legal advice or other advice? How quickly will he be able to give a definitive response? Clearly, if there is a gap, we need to know about it and we need to take action. It surely puts the regulator and the people who are generating this technology in an invidious position if even His Majesty’s Government think there is a lack of clarity, as he put it, on the applicability of the OSA to new technologies.

Kanishka Narayan

Let me be clear: there is no lack of clarity in the scope of the Act. It is extremely clear to a provider whether they are in scope or not. If they have user-to-user engagement on the platform, they are in scope. If they have live search, which is the primary basis in respect of many LLMs at the moment, they are in scope. There is no lack of clarity from a provider’s point of view. The question at stake is whether the further aspects of LLMs, which do not involve any of those areas of scope, pose a particular risk.

A number of incidents have been reported publicly, and I will obviously not comment on individual instances. The Online Safety Act does not focus on individual content-takedown instances; instead, it looks at systems. Ofcom has already engaged firms that are very much in scope of the Act. If there are further instances of new risks posed by platforms that are not currently within the scope of the Online Safety Act, we will of course review its scope and make sure we move fast in the light of that information.

The hon. Member for Harpenden and Berkhamsted asked about child sexual abuse material. I was very proud that we introduced amendments last week to the Crime and Policing Bill to make sure that organisations such as the Internet Watch Foundation are engaged, alongside targeted experts, particularly the police, in spotting CSAM content and risk well before AI models are released. In that context, we are ensuring that the particular risks posed by AI to children’s safety are countered before they escalate.

On the question about Ofcom’s spending and its capacity more generally to counter these risks, the spending cap at Ofcom allows it to enforce against the offences that we deem to be priority offences. In part, when we make the judgment about designating offences as a priority, we make a proportionate assessment of whether we believe there is both sufficient severity and the capacity for robust enforcement. I will continue to review that situation as the nature of the offences changes.

Finally, I am glad that the Government have committed throughout to ensuring that the law on sexually explicit non-consensual images, particularly deepfakes, is robustly enforced. That remains the position. I hope the Committee agrees with me on the importance of updating the priority offences in the Online Safety Act as swiftly as possible. I commend the regulations to the Committee.

Question put and agreed to.

14:47
Committee rose.