Draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025 Debate

Department: Department for Science, Innovation & Technology

Tuesday 18th November 2025

General Committees

The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)

I beg to move,

That the Committee has considered the draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025.

It is a pleasure to serve under your chairmanship, Mr Vickers. The draft regulations were laid before the House on 21 October. Before I proceed, I draw the Committee’s attention to the correction slip that was issued for the regulations in October. It relates to minor drafting changes in respect of the date of the Sexual Offences Act 2003 in the explanatory memorandum and the order of the words in the title of the offence inserted by paragraph (2) of regulation 2.

The Government have committed to taking decisive action against the most severe and damaging online harms. Through this statutory instrument, we are strengthening the Online Safety Act 2023 by creating new priority offences to tackle cyber-flashing and self-harm. This will ensure that platforms take stronger, more proactive steps to protect users from these harms.

There is compelling evidence that cyber-flashing and content encouraging self-harm are widespread and cause serious harm to individuals. The frequency of these harms is significantly higher among younger age groups: of those aged 18 to 24, 9% have experienced cyber-flashing and 7% have encountered content encouraging self-harm. That means that across the country around 530,000 people in that age group have experienced cyber-flashing and around 450,000 have encountered self-harm content. That is clearly unacceptable.

Some 27% of UK users who were exposed to cyber-flashing reported significant emotional discomfort, and exposure to self-harm content has been shown to worsen mental health. A 2019 study found that 64% of Instagram users in the US who were exposed to self-harm content were deeply emotionally disturbed by it, and a 2018 study found that 8% of adults and 26% of children aged eight to 18 who were hospitalised after self-harming had encountered self-harm or suicide-related content online. Those figures demonstrate that the content is not isolated but widespread. It affects a significant portion of the online population.

As Members will know, the Online Safety Act, which received Royal Assent on 26 October 2023, places strong duties on platforms and services to protect users. Providers must assess how likely their services are to expose users to illegal content or to be used to commit or facilitate priority offences. Providers then need to take steps to mitigate the identified risks, including by implementing safety-by-design measures to reduce risks and content moderation systems to remove illegal content when it appears. The Act sets out a list of priority offences for the purposes of providers’ illegal content duties. Those relate primarily to the most serious and prevalent online illegal content and activity. Platforms need to take additional steps to tackle such illegal activity under their illegal content duties.

The draft regulations will add cyber-flashing and content encouraging self-harm to the list of priority offences under the Act. The offences are currently covered under the Act’s general illegal content duties, but without priority status. Without that status, platforms are not obliged to carry out specific risk assessments for the harm that this kind of content causes to users, or to put in place measures to prevent users from seeing such content in the first place. Stakeholders have welcomed the additions: charities such as the Molly Rose Foundation and Samaritans have long campaigned for strengthened protections for vulnerable users.

The changes to the Act will take effect 21 days after the regulations are made, which can happen only once both Houses have approved the regulations. Ofcom, as the online safety regulator, sets out in codes of practice the measures that providers can take to fulfil their statutory illegal-content duties. The safety duties on providers to prioritise tackling self-harm and cyber-flashing will take full effect once Ofcom has made the relevant updates to those codes.

We anticipate that Ofcom will recommend that providers take action in a number of areas. These could include content moderation, reporting and complaints procedures, and safety-by-design steps, such as providers testing their algorithmic systems to see whether illegal content is being recommended to users. Where providers fail to meet the duties, such as by not having proportionate measures to remove and proactively prevent this vile material from appearing on their platforms, Ofcom has robust powers to take enforcement action against them, including a power to impose fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is the higher.

The statutory instrument upgrades cyber-flashing and self-harm content to priority status, thereby strengthening the impact of the Online Safety Act and protecting users from such content. Service providers will be required to take more proactive and robust action to protect users from this kind of illegal content, to remove it and to limit exposure to it. That will ensure that platforms take stronger steps to protect users, reduce the prevalence of these behaviours online and help to make the internet a safer place for everyone.

--- Later in debate ---
Kanishka Narayan

I thank Committee members for their valuable contributions to the debate. The update in the regulations will bring us closer to achieving the Government’s commitments to improve online safety and strengthen protection for women and girls online. We believe that updating the priority offences list with the new cyber-flashing and self-harm content offences is the correct, proportionate and evidence-led approach to tackling this type of content, and it will provide stronger protections for online users.

I will now respond to the questions asked in the debate; I thank Members for the tone and substance of their contributions. The shadow Minister, the hon. Member for Runnymede and Weybridge, raised the use of VPNs. As I have mentioned previously in the House, apart from an initial spike, we have seen a significant levelling-off in the use of VPNs, which points to the likely effectiveness of the age-assurance measures. We have commissioned further evidence on that front, and I hope to bring it to the House’s attention at the earliest opportunity.

The question of chatbots was raised by the shadow Minister, by the hon. Member for Bromley and Biggin Hill, and by the Liberal Democrat spokesperson, the hon. Member for Harpenden and Berkhamsted. Let me first clarify what I previously said in the House: the legislation covers not only chatbots that allow user-to-user engagement but also those that involve one-to-AI engagement and live search. That is extensive coverage of chatbots; both types are within scope of the Online Safety Act.

There may be further gaps in the Act relating to some of the risks that Members have raised, and the Secretary of State has commissioned further work to ensure that we keep up with fast-changing technology. A number of the LLMs in question are covered by the Act, given the parameters that I have just set out. Of course, we will continue to review the situation, as scope and risk need to evolve together.

Dr Ben Spencer

I hope the Minister takes this in a constructive spirit. Concerns have been raised across the House about the scope of the OSA when it comes to LLMs and the different types and variations of chatbots, which many people are using right now. Is he not concerned that he, as the Minister, and his Department are not able to say at the Dispatch Box whether they believe LLMs are completely covered by the scope of the OSA? Has he received legal advice or other advice? How quickly will he be able to give a definitive response? Clearly, if there is a gap, we need to know about it and we need to take action. It surely puts the regulator and the people who are generating this technology in an invidious position if even His Majesty’s Government think there is a lack of clarity, as he put it, on the applicability of the OSA to new technologies.

Kanishka Narayan

Let me be clear: there is no lack of clarity in the scope of the Act. It is extremely clear to a provider whether they are in scope or not. If they have user-to-user engagement on the platform, they are in scope. If they have live search, which is the primary basis on which many LLMs currently fall within scope, they are in scope. There is no lack of clarity from a provider’s point of view. The question at stake is whether further aspects of LLMs, which do not involve any of those areas of scope, pose a particular risk.

A number of incidents have been reported publicly, and I will obviously not comment on individual instances. The Online Safety Act does not focus on individual content-takedown instances; instead, it looks at providers’ systems and processes. Ofcom has already engaged firms that are very much in scope of the Act. If further instances arise of new risks posed by platforms that are not currently within the scope of the Online Safety Act, we will of course review its scope and make sure we move fast in the light of that information.

The hon. Member for Harpenden and Berkhamsted asked about child sexual abuse material. I was very proud that we introduced amendments last week to the Crime and Policing Bill to make sure that organisations such as the Internet Watch Foundation, alongside targeted experts, particularly the police, are engaged in spotting CSAM and the associated risks well before AI models are released. In that context, we are ensuring that the particular risks that AI poses to children’s safety are countered before they escalate.

On the question about Ofcom’s spending and, more generally, its capacity to counter these risks: Ofcom’s spending cap allows it to enforce against the offences that we designate as priority offences. When we make the judgment about designating offences as a priority, part of that judgment is a proportionate assessment of whether there is both sufficient severity and the capacity for robust enforcement. I will continue to review the situation as the nature of the offences changes.

Finally, I am glad that the Government have committed throughout to ensuring that offences involving sexually explicit non-consensual images, particularly deepfakes, are robustly enforced against. That remains the position. I hope the Committee agrees with me on the importance of updating the priority offences in the Online Safety Act as swiftly as possible. I commend the regulations to the Committee.

Question put and agreed to.