Baroness Morgan of Cotes (Con)

My Lords, I will speak very briefly to Amendments 55 and 182. We are now at the stage of taking the lead entirely from the Minister and the noble Lords opposite—the noble Lords, Lord Stevenson and Lord Clement-Jones—and accepting these amendments, because we now need to see how this will work in practice. That is why we all think that we will be back here talking about these issues in the not too distant future.

My noble friend the Minister rightly said that, as we debated in Committee, the Government made a choice in taking out “legal but harmful”. Many of us disagree with that, but that is the choice that has been made. So I welcome the changes that have been made by the Government in these amendments to at least allow there to be more empowerment of users, particularly in relation to the most harmful content and, as we debated, in relation to adult users who are more vulnerable.

It is worth reminding the House that we heard very powerful testimony during the previous stage from noble Lords with personal experience of family members who struggle with eating disorders, and how difficult these people would find it to self-regulate the content they were looking at.

In Committee, I proposed an amendment about “toggle on”. Anyone listening to this debate outside who does not know what we are talking about will think we have gone mad, talking about toggle on and toggle off, but I proposed an amendment for toggle on by default. Again, I take the Government’s point, and I know my noble friend has put a lot of work into this, with Ministers and others, in trying to come up with a sensible compromise.

I draw attention to Amendment 55. I wonder whether my noble friend the Minister is able to say anything about whether users will be able to have specific empowerment in relation to specific types of content, where they are perhaps more vulnerable if they see it. For example, a user's needs might be quite different in relation to self-harm content compared with eating disorder content or other types of content that we would deem harmful.

On Amendment 182, my noble friend leapt immediately to abusive content coming from unverified users but, as we have heard, and as I know, having led the House's inquiry into fraud and digital fraud last year, there is already, and will continue to be, a prevalence of scams. The Bill is cracking down on fraudulent advertisements but, as an anti-fraud measure, being able to see whether an account has been verified would be extremely useful. The view now is that, if this Bill is successful—and we hope it is—in cracking down on fraudulent advertising, there will be even more reliance on what is called organic reach: the use of fake accounts, where verification therefore becomes more important. We have heard from opinion polling that the public want to see which accounts are or are not verified. We have also heard that Amendment 182 is about giving users the choice to make clear whether their accounts are verified; it is not about compelling them to do so.

As we have heard, this is a direction of travel. I understand that the Government will not want to accept these amendments at this stage, but it is useful to have this debate to see where we are going and what Ofcom will be looking at in relation to these matters. I look forward to hearing what my noble friend the Minister has to say about these amendments.

Lord Allan of Hallam (LD)

My Lords, I speak to Amendment 53, on the assessment duties, and Amendment 60, on requiring services to provide a choice screen. This is the first time we have seen these provisions. We are in something of a see-saw process over legal but harmful. I agree with my noble friend Lord Clement-Jones when he says he regrets that it is no longer in the Bill, although that may not be a consistent view everywhere. We have been see-sawing backwards and forwards, and now, like Schrödinger's cat, legal but harmful is both dead and alive at the same time. The amendments that we are dealing with today make it a little more alive than it was previously.

In this latest incarnation, we will insist that category 1 services carry out an assessment of how they will comply with their user-empowerment responsibility. Certainly, this part seems reasonable to me, given that it is limited to category 1 providers, which we assume will have significant resources. Crucially, that will depend on the categorisations—so we are back to our previous debate. If we imagine category 1 being the Meta services and Twitter, et cetera, that is one thing, but if we are going to move others into category 1 who would really struggle to do a user empowerment tool assessment—I have to use the right words; it is not a risk assessment—then it is a different debate. Assuming that we are sticking to those major services, asking them to do an assessment seems reasonable. From working on the inside, I know that even if it were not formalised in the Bill, they would end up having to do it as part of their compliance responsibilities. As part of the Clause 8 illegal content risk assessment, they would inevitably end up doing that.

That is because the categories of content that we are talking about in Clauses 12(10) to (12) are all types of content that might sometimes be illegal and sometimes not illegal. Therefore, if you were doing an illegal content risk assessment, you would have to look at it, and you would end up looking at types of content and putting them into three buckets. The first bucket is that it is likely illegal in the UK, and we know what we have to do there under the terms of the Bill. The second is that it is likely to be against your terms of service, in which case you would deal with it there. The third is that it is neither against your terms of service nor against UK law, and you would make a choice about that.

I want to focus on what happens once you have done the risk assessment and you have to have the choice screen. I particularly want to focus on services where all the content in Clause 12 is already against their terms of service, so there is no gap. The whole point of this discussion about legal but harmful is imagining that there is going to be a mixed economy of services and, in that mixed economy, there will be different standards. Some will wish to allow the content listed in Clause 12—self-harm-type content, eating disorder content and various forms of sub-criminal hate speech. Some will choose to do that—that is going to be their choice—and they will have to provide the user empowerment tools and options. I believe that many category 1 providers will not want to; they will just want to prohibit all that stuff under their terms of service and, in that case, offering a choice is meaningless. That will not make the noble Lord, Lord Moylan, or the noble Baroness, Lady Fox, very happy, but that is the reality.

Most services will just say that they do not want that stuff on their platform. In those cases, I hope that what we are going to say is that, in their terms of service, when a user joins a service, they can say that they have banned all that stuff anyway, so they are not going to give the user a user empowerment tool and, if the user sees that stuff, they should just report it and it will be taken down under the terms of service. Throughout this debate I have said, “No more cookie banners, please”. I hope that we are not going to require people, in order for them to comply with this law, to offer a screen that people then click through. It is completely meaningless and ineffective. For those services that have chosen under their terms of service to restrict all the content in Clause 12, I hope that we will be saying that their version of the user empowerment tool is not to make people click anything but to provide education and information and tell them where they can report the content and have it taken down.

Then there are those who will choose to protect that content and allow it on their service. I agree with the noble Lord, Lord Moylan, that this is, in some sense, Twitter-focused or Twitter-driven legislation, because Twitter tends to be more in the freedom of speech camp and to allow hate speech and some of that stuff. It will be more permissive than Facebook or Instagram in its terms, and it may choose to maintain that content and it will have to offer that screen. That is fine, but we should not be making services do so when they have already prohibited such content.

The noble Lord, Lord Moylan, mentioned services that use community moderators to moderate parts of the service, and how this would apply there. Reddit is the obvious example, but there are others. Reddit is more at the freedom of expression end of things, so if there are some subreddits, or spaces within Reddit, that allow hate speech or the kind of speech that is in Clause 12, it would be rational to say that user empowerment in the context of Reddit means being told that you can join these subreddits and be fine, or you can join those subreddits and allow yourself to be exposed to this kind of content. What would not make sense would be for Reddit to do it individual content item by content item. When we are thinking about this, I hope that the implementation would say that, for a service with community-moderated spaces and subspaces within the larger community, user empowerment means choosing which subspaces you enter, having been given information about them. Reddit would say to the moderators of the subreddits, “You need to tell us whether you have any Clause 12-type content”—I shall keep using that language—“and, if you are allowing it, you need to make sure that it is restricted”. But we should not expect Reddit to restrict every individual content item.

Finally, as a general note of caution, noble Lords may have detected that I am not entirely convinced that these will be hugely beneficial tools, perhaps other than for a small subset of Twitter users, for whom they are useful. There is an issue around particular kinds of content on Twitter, and particular Twitter users, including people in prominent positions in public life, for whom these tools make sense. For a lot of other people, they will not be particularly meaningful. I hope that we are going to keep focused on outcomes and not waste effort on things that are not effective.

As I say, many companies, when they are faced with this, will look at it and say, “I have limited engineering time. I could build all these user empowerment tools, or I could just ban the Clause 12 stuff in my terms of service”. That would not be a great outcome for freedom of expression; it might be a good outcome for the people who wanted to prohibit legal but harmful in the first place. They will make that choice as a hard business decision: it is much more expensive to try to maintain these different regimes and flag all this content and so on. It is simpler to have one set of standards.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

My Lords, I am grateful for the broad, if not universal, support for the amendments that we have brought forward following the points raised in Committee. I apologise for anticipating noble Lords’ arguments, but I am happy to expand on my remarks in light of what they have said.

My noble friend Lord Moylan raised the question of non-verified user duties and crowdsourced platforms. The Government recognise concerns about how the non-verified user duties will work with different functionalities and platforms, and we have engaged extensively on this issue. These duties are applicable only to category 1 platforms: those with the largest reach and influence over public discourse. It is therefore right that such platforms have additional duties to empower their adult users. We anticipate that these features will be used in circumstances where vulnerable adults wish to shield themselves from anonymous abuse. If users decide that these features are restricting their experience on a particular platform, they can simply choose not to use them. In addition, before these duties come into force, Ofcom will be required to consult affected providers regarding the codes of practice, at which point it will consider how these duties might interact with various functionalities.

My noble friend and the noble Lord, Lord Allan of Hallam, raised the potential for being bombarded with pop-ups because of the forced-choice approach that we have taken. These amendments have been carefully drafted to minimise unnecessary prompts or pop-ups. That is why we have specified that the requirement to proactively ask users how they want these tools to be applied is applicable only to registered users. This approach ensures that users will be prompted to make a decision only once, unless they choose to ignore it. After a decision has been made, the provider should save this preference and the user should not be prompted to make the choice again.

The noble Lord, Lord Clement-Jones, talked further about his amendments on the cost of user empowerment tools and their status as a core safety duty in the Bill. Category 1 providers will not be able to put the user empowerment tools in Clause 12 behind a paywall and still be compliant with their duties. That is because they will need to offer them to users at the first possible opportunity, which they will be unable to do if the tools are behind a paywall. The wording of Clause 12(2) makes it clear that providers have a duty to include user empowerment features that an adult user may use or apply.

Lord Allan of Hallam (LD)

The Minister may not have the information today, in which case I would be happy to receive it in writing. Can he clarify exactly what will be expected of a service that already prohibits all the Clause 12 bad stuff in its terms of service?

Lord Parkinson of Whitley Bay (Con)

I will happily write to the noble Lord on that.

Clause 12(4) further sets out that all such user empowerment tools must be made available to all adult users and be easy to access.

The noble Lord, Lord Clement-Jones, on behalf of the noble Baroness, Lady Finlay, talked about people who will seek out suicide, self-harm or eating-disorder content. While the Bill will not prevent adults from seeking out legal content, it will introduce significant protections for adults from some of the most harmful content. The duties relating to category 1 services’ terms of service are expected hugely to improve companies’ own policing of their sites. Where this content is legal but in breach of the company’s terms of service, the Bill will force the company to take it down.

We are going even further by introducing a new user empowerment content-assessment duty. This will mean that, where content relates to eating disorders, for instance, but is not illegal, category 1 providers will need fully to assess the incidence of this content on their service. They will need clearly to publish this information in accessible terms of service, so users will be able to find out what they can expect on a particular service. Alternatively, if they choose to allow suicide, self-harm or eating disorder content which falls into the definition set out in Clause 12, they will need proactively to ask users how they would like the user empowerment content features to be applied.

My noble friend Lady Morgan was right to raise the impact on vulnerable people and people with disabilities. While we anticipate that the changes we have made will benefit all adult users, we expect them particularly to benefit those who may otherwise have found it difficult to find and use the user empowerment content features independently—for instance, some users with certain types of disability. That is because the onus will now be on category 1 providers proactively to ask their registered adult users, at the first possible opportunity, whether they would like these tools to be applied. The requirement also remains to ensure that the tools are easy to access and to set out clearly what tools are on offer and how users can take advantage of them.