Online Safety Bill: Scope Debate
Nadine Dorries (Conservative - Mid Bedfordshire), Department for Digital, Culture, Media & Sport
Written Statements

The Online Safety Bill will deliver vital protections for children, ensure there are no safe spaces for criminals online, and protect and promote free speech.
All services in scope of the Bill must tackle criminal activity online, and all services likely to be accessed by children will have duties to protect them from harmful content. The major platforms will have additional responsibilities to set out clearly what content harmful to adults they allow on their services, and to enforce their own policies consistently. Nothing in the Bill requires services to remove legal content from their platforms, and users will continue to be able to hold robust online discussions of controversial issues, including those which might cause offence.
The Bill sets a threshold for harmful content, bringing into scope content of a kind that presents a material risk of significant harm to an appreciable number of children or adults in the UK. Online disagreement, including on issues of scientific debate, will not meet the Bill's threshold of harm.
A key feature of the online safety regulatory framework will be the designation of priority harmful content for children and adults. Services in scope of the Bill which are likely to be accessed by children will be required to prevent them from encountering “primary priority content that is harmful to children”, and to protect children in age groups at risk of harm from “priority content that is harmful to children”.
The largest and highest-risk services (category 1) will also need to make clear in their terms of service how “priority content that is harmful to adults” is addressed by the service. Services will be able to set their own tolerance for legal content for adult users. Category 1 services will need to assess the risk of priority harmful content to adults, set out clearly in their terms of service how such content is treated, and enforce those terms consistently. This could include specifying that the content will be removed or deprioritised in news feeds, but could also include the platform stating that such content is allowed freely, or that it will be recommended or promoted to other users. In addition, all services will need to have regard to freedom of expression when implementing their safety duties.
Final details of the types of content covered by the three categories (primary priority content for children, priority harmful content for children, and priority harmful content for adults) will be designated in secondary legislation following consultation with Ofcom. This will ensure that the types of designated content are based on the most recent evidence and that emerging harms can be added quickly, future-proofing the legislation. However, the Government recognise the interest among parliamentarians and stakeholders in knowing which content will be designated as priority harmful content. To provide more detail, the Government are publishing an indicative list of the types of content that they expect to designate as primary priority and priority harmful content for children and priority harmful content for adults.
The Government consider that the types of content on the indicative list meet the threshold for priority harmful content set out in the Bill. This threshold is important to ensure that the online safety framework focuses on the content and activity that poses the most significant risk of harm to UK users online. It is important for the framework to distinguish in this way between strongly felt debate on the one hand and unacceptable acts of abuse, intimidation and violence on the other. British democracy has always been robust and oppositional. Free speech within the law can involve the expression of views that some may find offensive, but a line is crossed when disagreement mutates into abuse or harassment that refuses to tolerate other opinions and seeks to prevent others from exercising their rights to free speech and freedom of association.
This may not be an exhaustive list of the content which will be designated as priority harmful content under the Bill. We will continue to engage extensively with stakeholders, parliamentarians and Ofcom, including on some of the most harmful content online, ahead of designating the details of the three categories of priority harmful content in secondary legislation.
Indicative list of priority harmful content
Adults:
Priority content (category 1 services need to address this in their terms of service):
Online abuse and harassment. Mere disagreement with another’s point of view would not reach the threshold of harmful content, and so would not be covered by this category.
Circulation of real or manufactured intimate images without the subject’s consent
Content promoting self-harm
Content promoting eating disorders
Legal suicide content
Harmful health content that is demonstrably false, such as urging people to drink bleach to cure cancer. This category also includes some health and vaccine misinformation and disinformation, but is not intended to capture genuine debate.
Children:
Primary priority content (content that children must be prevented from encountering altogether):
Pornography
Content promoting self-harm (with some content which may be designated as priority content, e.g. content focused on recovery from self-harm)
Content promoting eating disorders (with some content which may be designated as priority content, e.g. content focused on recovery from an eating disorder)
Legal suicide content (with some content which may be designated as priority content, e.g. content focused on recovery)
Priority content (companies need to ensure content is age-appropriate for their child users):
Online abuse, cyberbullying and harassment
Harmful health content (including health and vaccine misinformation and disinformation)
Content depicting or encouraging violence
[HCWS194]