The Chair

May I just ask you, for the benefit of Hansard, to try to speak up a little? The sound system is not all that it might be in this room, and the acoustics certainly are not.

Alex Davies-Jones (Pontypridd) (Lab)

Q Thank you to our witnesses for joining us this afternoon. Quite bluntly, I will get into it, because what is frustrating for us, as parliamentarians, and for our constituents, is the fact that we need this legislation in the first place. Why are you, as platforms, allowing harmful and illegal content to perpetuate on your platforms? Why do we need this legislation for you to take action? It is within your gift to give, and despite all the things I am sure you are about to tell me that you are doing to prevent this issue from happening, it is happening and we need to legislate, so why?

The Chair

Mr Earley, I will go left to right to start with, if that is all right with you, so you have drawn the short straw.

Richard Earley: No worries, and thank you very much for giving us the opportunity to speak to you all today; I know that we do not have very much time. In short, we think this legislation is necessary because we believe that it is really important that democratically elected Members of Parliament and Government can provide input into the sorts of decisions that companies such as ours are making, every day, about how people use the internet. We do not believe that it is right for companies such as ours to be taking so many important decisions every single day.

Now, unfortunately, it is the case that social media reflects the society that we live in, so all of the problems that we see in our society also have a reflection on our services. Our priority, speaking for Meta and the services we provide—Facebook, Instagram and WhatsApp—is to do everything we can to make sure our users have as positive an experience as possible on our platform. That is why we have invested more than $13 billion over the past five years in safety and security, and have more than 40,000 people working at our company on safety and security every day.

That said, I fully recognise that we have a lot more areas to work on, and we are not waiting for this Bill to come into effect to do that. We recently launched a whole range of updated tools and technologies on Instagram, for example, to protect young people, including preventing anyone under the age of 18 from being messaged by a person they are not directly connected to. We are also using new technology to identify potentially suspicious accounts to prevent young people from appearing in any search results that those people carry out. We are trying to take steps to address these problems, but I accept there is a lot more to do.

Alex Davies-Jones

Q Before I bring in Becky and Katie to answer that, I just want to bring you back to something you said about social media and your platforms reflecting society like a mirror. That analogy is used time and again, but actually they are not a mirror. The platforms and the algorithms they use amplify, encourage and magnify certain types of content, so they are not a mirror of what we see in society. You do not see a balanced view of two sides of an issue, for example.

You say that work is already being done to remove this content, but on Instagram, for example, which is a platform predominantly used by women, the Centre for Countering Digital Hate has exposed what they term an “epidemic of misogynistic abuse”, with 90% of misogynistic abuse being sent via direct messaging. It is being ignored by the platform even when it is being reported to the moderators. Why is that happening?

Richard Earley: First, your point about algorithms is really important, but I do not agree that they are being used to promote harmful content. In fact, in our company, we use algorithms to do the reverse of that. We try to identify content that might break our policies—the ones we write with our global network of safety experts—and then remove those posts, or if we find images or posts that we think might be close to breaking those rules, we show them lower in people’s feeds so that they have a lower likelihood of being seen. That is why, over the past two years, we have reduced the prevalence of harmful posts such as hate speech on Facebook so that now only 0.03% of views of posts on Facebook contain that kind of hate speech—we have almost halved the number. That is one type of action that we take in the public parts of social media.

When it comes to direct messages, including on Instagram, there are a range of steps that we take, including giving users additional tools to turn off any words they do not want to see in direct messages from anyone. We have recently rolled out a new feature called “restrict” which enables you to turn off any messages or comments from people who have just recently started to follow you, for example, and have just created their accounts. Those are some of the tools that we are trying to use to address that.

Alex Davies-Jones

Q So the responsibility is on the user, rather than the platform, to take action against abuse?

Richard Earley: No, the responsibility is absolutely shared by those of us who offer platforms, by those who are engaged in abuse in society, and by civil society and users more widely. We want to ensure we are doing everything we can to use the latest technology to stop abuse happening where we can and give people who use our services the power to control their experience and prevent themselves from encountering it.

The Chair

We must allow the other witnesses to participate.

Becky Foreman: Thank you for inviting me to give evidence to you today. Online safety is extremely important to Microsoft and sits right at the heart of everything we do. We have a “safety by design” policy, and responsibility for safety within our organisation sits right across the board, from engineers to operations and policy people. However, it is a complicated, difficult issue. We welcome and support the regulation that is being brought forward.

We have made a lot of investments in this area. For example, we introduced PhotoDNA more than 10 years ago, which is a tool that is used right across the sector and by non-governmental organisations to scan for child sexual abuse material and remove it from their platforms. More recently, we have introduced a grooming tool that automates the process of trying to establish whether a grooming conversation is taking place between an adult and a child. That can then be flagged for human review. We have made that available at no charge to the industry, and it has been licensed by a US NGO called Thorn. We take this really seriously, but it is a complicated issue and we really welcome the regulation and the opportunity to work with the Government and Ofcom on this.

Katie O’Donovan: Thank you so much for having me here today and asking us to give evidence. Thank you for your question. I have worked at Google and YouTube for about seven years and I am really proud of our progress on safety in those years. We think about it in three different ways. First, what products can we design and build to keep our users safer? Similar to Microsoft, we have developed technology that identifies new child sexual abuse material and we have made that available across the industry. We have developed new policies and new ways of detecting content on YouTube, which means we have really strict community guidelines, we identify that content and we take it down. Those policies that underlie our products are really important. Finally, we work across education, both in secondary and primary schools, to help inform and educate children through our “Be Internet Legends” programme, which has reached about 4 million people.

There is definitely much more that we can do and I think the context of a regulatory environment is really important. We also welcome the Bill, and I think it is really going to be meaningful when Ofcom audits how we are meeting the requirements in the legislation—not just how platforms like ours are meeting the requirements in the Bill, but how the wide spectrum of platforms that young people and adults use are meeting them. That could have a really positive additive effect on the impact.

It is worth pausing and reflecting on legislation that has passed recently, as well. The age-appropriate design code, or the children’s code, which the Information Commissioner’s Office now manages, has also helped us determine new ways to keep our users safe. For example, we have long had a product called SafeSearch, which you can use on search and which parents can lock on; we now also put that on by default where we use signals to identify people who we think are under 18.

We think that is getting the right balance between providing a safer environment but also enabling people to access information. We have not waited for this regulation. This regulation can help us do more, and it can also level the playing field and really make sure that everyone in the industry steps up and meets the best practice that can exist.

Alex Davies-Jones

Q Thank you both for adding context to that. If I can bring you back to what is not being done and why we need to legislate, Richard, I come back to you. You mentioned some of the tools and systems that you have put in place so that users can stop abuse from happening. Why is that 90% of abuse in direct messages on Instagram being ignored by your moderators?

Richard Earley: I do not accept that figure. I believe that if you look at our quarterly transparency report, which we just released last week, you can see that we find more than 90% of all the content that we remove for breaking our policies ourselves. Whenever somebody reports something on any of our platforms, they get a response from us. I think it is really important, as we are focusing on the Bill, to understand or make the point that, for private messaging, yes, there are different harms and different risks of harm that can apply, which is why the steps that we take differ from the steps that we take in public social media.

One of the things that we have noticed in the final draft of the Bill is that the original distinction between public social media and private messaging, which was contained in the online harms White Paper and in earlier drafts of the Bill, has been lost here. Acknowledging that distinction, and helping companies recognise that there is a different risk in private messaging, and that different steps can be taken there from those taken on public social media, would be a really important thing for the Committee to consider.

Alex Davies-Jones

Q Quite briefly, because I know we are short on time, exactly how many human moderators do you have working to take down disinformation and harmful illegal content on your platforms?

Richard Earley: We have around 40,000 people in total working on safety and security globally and, of those, around half directly review posts and content.

Alex Davies-Jones

Q How many of those are directly employed by you and how many are third party?

Richard Earley: I do not have that figure myself but I know it is predominantly the case that, in terms of the safety functions that we perform, it is not just looking at the pieces of content; it is also designing the technology that finds and surfaces content itself. As I said, more than 90% of the time—more than 95% in most cases—it is our technology that finds and removes content before anyone has to look at it or report it.

Alex Davies-Jones

Q On that technology, we have been told that you are not doing enough to remove harmful and illegal content in minority languages. This is a massive gap. In London alone, more than 250 languages are spoken on a regular basis. How do you explain your inaction on this? Can you really claim that your platform is safe if you are not building and investing in AI systems in a range of languages? What proactive steps are you taking to address this extreme content that is not in English?

Richard Earley: The group of 40,000 people that I mentioned operates 24 hours a day, seven days a week. They cover more than 70 languages between them, which includes the vast majority of the world’s major spoken languages. I should say that the people working at Meta on these classifiers and reviewing content include people with native proficiency in these languages and people who can build the technology to find and remove things too. It is not just what happens within Meta that makes a difference here, but the work we do with our external partners. We have over 850 safety partners that we work with globally, who help us understand how different terms can be used and how different issues can affect the spread of harm on our platforms. All of that goes into informing both the policies we use to protect people on our platform and the technology we build to ensure those policies are followed.

Alex Davies-Jones

Q Finally, which UK organisations that you use have quality assured any of their moderator training materials?

Richard Earley: I am sorry, could you repeat the question?

Alex Davies-Jones

The vast majority of people are third party. They are not employed directly by Meta to moderate content, so how many of the UK organisations you use have been quality assured to ensure that the training they provide in order to spot this illegal and harmful content is taken on board?

Richard Earley: I do not believe it is correct that for our company, the majority of moderators are employed by—

Alex Davies-Jones

You do not have the figures, so you cannot tell me.

Richard Earley: I haven’t, no, but I will be happy to let you know afterwards in our written submission. Everyone who is involved in reviewing content at Meta goes through an extremely lengthy training process that lasts multiple weeks, covering not just our community standards in total but also the specific area they are focusing on, such as violence and incitement. If it is hate speech, of course, there is a very important language component to that training, but in other areas—nudity or graphic violence—the language component is less important. We have published quite a lot about the work we do to make sure our moderators are as effective as possible and to continue auditing and training them. I would be really happy to share some of that information, if you want.

Alex Davies-Jones

Q But that is only for those employed directly by Meta.

Richard Earley: I will have to get back to you to confirm that, but I think it applies to everyone who reviews content for Meta, whether they are directly employed by Meta or work through one of our outsourced partners.

The Chair

Thank you very much. Don’t worry, ladies; I am sure other colleagues will have questions that they wish to pursue. Dean Russell, please.

--- Later in debate ---
The Chair

Good afternoon. We now hear oral evidence from Professor Clare McGlynn, professor of law at Durham University, Jessica Eagelton, policy and public affairs manager at Refuge, and Janaya Walker, public affairs manager at End Violence Against Women. Ladies, thank you very much for taking the trouble to join us this afternoon. We look forward to hearing from you.

Alex Davies-Jones

Q Thank you, Sir Roger, and thank you to the witnesses for joining us. We hear a lot about the negative experiences online of women, particularly women of colour. If violence against women and girls is not mentioned directly in the Bill, if misogyny is not made a priority harm, and if the violence against women and girls code of practice is not adopted in the Bill, what will that mean for the experience of women and girls?

Janaya Walker: Thank you for the opportunity to speak today. As you have addressed there, the real consensus among violence against women and girls organisations is for VAWG to be named in the Bill. The concern is that without that, the requirements that are placed on providers of regulated services will be very narrowly tied to the priority illegal content in schedule 7, as well as other illegal content.

We are very clear that violence against women and girls is part of a continuum in which there is a really broad manifestation of behaviour; some reaches a criminal threshold, but there is other behaviour that is important to be understood as part of the wider context. Much of the abuse that women and girls face cannot be understood by only looking through a criminal lens. We have to think about the relationship between the sender and the recipient—if it is an ex-partner, for example—the severity of the abuse they have experienced, the previous history and also the reach of the content. The worry is that the outcome of the Bill will be a missed opportunity in terms of addressing something that the Government have repeatedly committed to as a priority.

As you mentioned, we have worked with Refuge, Clare McGlynn, the NSPCC and 5Rights, bringing together our expertise to produce this full code of practice, which we think the Bill should be amended to include. The code of practice would introduce a cross-cutting duty that tries to mitigate this kind of pocketing of violence against women and girls into those three categories, to ensure that it is addressed really comprehensively.

Alex Davies-Jones

Q To what extent do you think that the provisions on anonymity will assist in reducing online violence against women and girls? Will the provisions currently in the Bill make a difference?

Janaya Walker: I think it will be limited. For the End Violence Against Women Coalition, our priority above all else is having a systems-based approach. Prevention really needs to be at the heart of the Bill. We need to think about the choices that platforms make in the design and operation of their services in order to prevent violence against women and girls in the first instance.

Anonymity has a place in the sense of providing users with agency, particularly in a context where a person is in danger and they could take that step in order to mitigate harm. There is a worry, though, when we look at things through an intersectional lens—thinking about how violence against women and girls intersects with other forms of harm, such as racism and homophobia. Lots of marginalised and minoritised people rely very heavily on being able to participate online anonymously, so we do not want to create a two-tier system whereby some people’s safety is contingent on them being a verified user, which is one option available. We would like the focus to be much more on prevention in the first instance.

The Chair

Professor McGlynn and Ms Eagelton, you must feel free to come in if you wish to.

Alex Davies-Jones

Q My final question is probably directed at you, Professor McGlynn. Although we welcome the new communications offence of cyber-flashing, one of the criticisms is that it will not actually make a difference because of the onus on proving the sender’s intent to cause harm, rather than basing the offence on whether the recipient consented to receive the material. How do you respond to that?

Professor Clare McGlynn: I think it is great that the Government have recognised the harms of cyber-flashing and put that into the Bill. In the last couple of weeks we have had the case of Gaia Pope, a teenager who went missing and died—an inquest is currently taking place in Dorset. The case has raised the issue of the harms of cyber-flashing, because in the days before she went missing she was sent indecent images that triggered post-traumatic stress disorder from a previous rape. On the day she went missing, her aunt was trying to report that to the police, and one of the police officers was reported as saying that she was “taking the piss”.

What I think that case highlights, interestingly, is that this girl was triggered by receiving these images, and it triggered a lot of adverse consequences. We do not know why that man sent her those images, and I guess my question would be: does it actually matter why he sent them? Unfortunately, the Bill says that why he sent them does matter, despite the harm it caused, because it would only be a criminal offence if it could be proved that he sent them with the intention of causing distress or for sexual gratification and being reckless about causing distress.

That has two main consequences. First, it is not comprehensive, so it does not cover all cases of cyber-flashing. The real risk is that a woman, having seen the headlines and heard the rhetoric about cyber-flashing being criminalised, might go to report it to the police but will then be told, “Actually, your case of cyber-flashing isn’t criminal. Sorry.” That might just undermine women’s confidence in the criminal justice system even further.

Secondly, this threshold of having to prove the intention to cause distress is an evidential threshold, so even if you think, as might well be the case, that he sent the image to cause distress, you need the evidence to prove it. We know from the offence of non-consensual sending of sexual images that it is that threshold that limits prosecutions, but we are repeating that mistake here with this offence. So I think a consent-based, comprehensive, straightforward offence would send a stronger message and provide a better basis from which education could then take place.

The Chair

You are nodding, Ms Eagelton.

Jessica Eagelton: I agree with Professor McGlynn. Thinking about the broader landscape and intimate image abuse as well, I think there are some significant gaps. There is quite a piecemeal approach at the moment, and there are issues that we are seeing in terms of enforcing measures on domestic abuse as well.

--- Later in debate ---
The Chair

We will now hear oral evidence from Lulu Freemont, head of digital regulation at techUK; Ian Stevenson, the chairman of the Online Safety Tech Industry Association (OSTIA); and Adam Hildreth, chief executive officer of Crisp, who is appearing by Zoom—and it works. Thank you all for joining us. I will not waste further time by asking you to identify yourselves, because I have effectively done that for you. Without further ado, I call Alex Davies-Jones.

Alex Davies-Jones

Q Thank you, Sir Roger; thank you, witnesses. We want the UK to become a world leader in tech start-ups. We want those employment opportunities for the future. Does this legislation, as it currently stands, threaten that ability?

Lulu Freemont: Hi everybody. Thank you so much for inviting techUK to give evidence today. Just to give a small intro to techUK, so that you know the perspective I am coming from, we are the trade body for the tech sector. We have roughly 850 tech companies in our membership, the majority of which are small and medium-sized enterprises. We are really focused on how this regime will work for the 25,000 tech companies that are set to be in scope, and our approach is really about implementation and how the Bill can deliver on its objectives.

Thank you so much for the question. There are some definite risks when we think about smaller businesses and the Online Safety Bill. Today, we have heard a lot of the names that come up with regard to tech companies; they are the larger companies. However, this will be a regime that impacts thousands of different tech companies, with different functionalities and different roles within the ecosystem, all of which contribute to the economy in their own way.

There are specific areas to be addressed in the Bill, where there are some threats to innovation and investment by smaller businesses. First, greater clarity is needed. In order for this regime to be workable for smaller businesses, they need clarity on guidelines and on definitions, and they also need to be confident that the systems and processes that they put in place will be sustainable—in other words, the right ones.

Certain parts of the regime risk not having enough clarity. The first thing that I will point to is around the definitions of harm. We would very much welcome having some definitions of harmful content, or even categories of harmful content, in primary legislation. It might then be for Ofcom to determine how those definitions are interpreted within the codes, but having things to work off and types of harmful content for smaller businesses to start thinking about would be useful; obviously, for smaller businesses that will be content that is harmful to children, given that they are likely to be category 2 services.

The second risk for smaller businesses is really around the powers of the Secretary of State. I think there is a real concern. The Secretary of State will have some technical powers, which are pretty much normal; they are what you would expect in any form of regulation. However, the Online Safety Bill goes a bit further than that, introducing some amendment powers. So, the Secretary of State can modify codes of practice to align with public policy. In addition to that, there are provisions to allow the Secretary of State to set thresholds between the categories of companies.

Smaller businesses want to start forming a strong relationship with Ofcom and putting systems and processes in place that they can feel confident in. If they do not have that level of confidence, and if the regime could be changed at any point, they might not be able to progress with those systems and processes, and that risks pushing them out of the market, because they might not be able to keep up with some of the larger companies that have been very much referenced in every conversation.

So, we need to think about proportionality, and we need to think about Ofcom’s independence and the kind of relationship that it can form with smaller businesses. We also need to think about balance. This regime is looking to strike a balance between safety, free speech and innovation in the UK’s digital economy. Let us just ensure that we provide enough clarity for businesses so that they can get going and have confidence in what they are doing.

Alex Davies-Jones

Q Thank you, Lulu. Adam and Ian, if either of you want to come in at any point, please just indicate that and I will bring you in.

The Chair

May I just apologise before we go any further, because I got you both the wrong way round? I am sorry. It is Mr Stevenson who is online and it is Adam Hildreth who is here in body and person.

Adam Hildreth: I think we have evolved as a world, actually, when it comes to online safety. I think that if you went back five or 10 years, safety would have come after people had developed their app, their platform or whatever they were creating from a tech perspective. I think we are now in a world where safety, in various forms, has to be there by default. And moving on to your point, we have to understand what that means for different sizes of businesses. The phrase “risk assessment” is, for me, the critical part there, because it is about whether we are putting blocks in front of people who are innovating and creating entrepreneurial businesses that make the online world a better place. Putting those blocks in without them understanding whether or not they can compete in an open and fair market is where we do not want to be.

So, getting to the point where it is very easy to understand is important—a bit like where we got to in other areas, such as data protection and where we went with the GDPR. In the end, it became simplified; I will not use the word “simplified” ever again in relation to GDPR, but it did become simplified from where it started. It is really important for anyone developing any type of tech platform that the Online Safety Bill will affect that they understand exactly what they do and do not have to put in place; otherwise, they will be taken out just by not having a legal understanding of what is required.

The other point to add, though, is that there is a whole other side to online safety, which is the online safety tech industry. There are tons of companies in the UK and worldwide that are developing innovative technologies that solve these problems. So, there is a positive as well as an understanding of how the Bill needs to be created and publicised, so that people understand what the boundaries are, if you are a UK business.

The Chair

Mr Stevenson, you are nodding. Do you want to come in?

Ian Stevenson: I agree with the contributions from both Adam and Lulu. For me, one of the strengths of the Bill in terms of the opportunity for innovators is that so much is left to Ofcom to provide codes of practice and so on in the future, but simultaneously that is its weakness in the short term. In the absence of those codes of practice and definitions of exactly where the boundaries between merely undesirable and actually harmful and actionable might lie, the situation is very difficult. It is very difficult for companies like my own and the other members of the Online Safety Tech Industry Association, who are trying to produce technology to support safer experiences online, to know exactly what that technology should do until we know which harms are in scope and exactly what the thresholds are and what the definitions of those harms are. Similarly, it is very hard for anybody building a service to know what technologies, processes and procedures they will need until they have considerably more detailed information than they have at the moment.

I agree that there are certain benefits to having more of that in the Bill, especially when it comes to the harms, but in terms of the aspiration and of what I hear is the objective of the Bill—creating safer online experiences—we really need to understand when we are going to have much more clarity and detail from Ofcom and any other relevant party about exactly what is going to be seen as best practice and acceptable practice, so that people can put in place those measures on their sites and companies in the Online Safety Tech Industry Association can build the tools to help support putting those measures in place.

Alex Davies-Jones

Q Thank you all. Lulu, you mentioned concerns about the Secretary of State’s powers and Ofcom’s independence. Other concerns expressed about Ofcom include whether it is able to carry out this regulation. It is being hailed as the saviour of the internet by some people. Twenty-five thousand tech companies in the UK will be under these Ofcom regulations, but questions have been asked about its technical and administrative capacity to do this. Just today, there is an online safety regulator funding policy adviser role being advertised by the Department for Digital, Culture, Media and Sport. Part of the key roles and responsibilities reads:

“The successful post holder will play a key role in online safety as the policy advisor on Funding for the Online Safety Regulator.”

Basically, their job is to raise money for Ofcom. Does that suggest concerns about the role of Ofcom going forward, its funding, and its resource and capacity to support those 25,000 platforms?

Lulu Freemont: It is a very interesting question. We really support Ofcom in this role. We think that it has a very good track record with other industries that are also in techUK’s membership, such as broadcasters. It has done a very good job at implementing proportionate regulation. We know that it has been increasing its capacity for some time now, and we feel confident that it is working with us as the trade and with a range of other experts to try to understand some of the detail that it will have to understand to regulate.

One of the biggest challenges—we have had this conversation with Ofcom as well—is to understand the functionalities of tech services. The same functionality might be used in a different context, and that functionality could be branded as very high risk in one context but very low risk in another. We are having those conversations now. It is very important that they are being had now, and we would very much welcome Ofcom publishing drafts. We know that is its intention, but it should bring everything forward in terms of all the gaps in this regulation that are left to Ofcom’s codes, guidance and various other documentation.

Adam Hildreth: One of the challenges that I hear a lot, and that we hear a lot at Crisp in our work, is that people think that the Bill will almost eradicate all harmful content everywhere. The challenge that we have with content is that every time we create a new technology or mechanism that defeats harmful or illegal content, the people who are creating it—they are referred to in lots of ways, but bad actors, ultimately—create another mechanism to do it. It is very unlikely that we will ever get to a situation in which it is eradicated from every platform forever—though I hope we do.

What is even harder for a regulator is to be investigating why a piece of content is on a platform. If we get to a position where people are saying, “I saw this bit of content; it was on a platform,” that will be a really dangerous place to be, because the funding requirement for any regulator will go off the charts—think about how much content we consume. I would much prefer to be in a situation where we think about the processes and procedures that a platform puts in place and making them appropriate, ensuring that if features are aimed at children, they do a risk assessment so that they understand how those features are being used and how they could affect children in particular—or they might have a much more diverse user group, whereby harm is much less likely.

So, risk assessments and, as Ian mentioned, technologies, processes and procedures—that is the bit that a regulator can do well. If your risk assessment is good and your technology, process and procedures are as good as they can be based on a risk assessment, that almost should mean that you are doing the best job you possibly can to stop that content appearing, but you are not eradicating it. It really worries me that we are in a position whereby people are going to expect that they will never see content on a platform again, even though billions of pieces of potentially harmful content could have been removed from those platforms.

Alex Davies-Jones

Q On that point, you mentioned that it is hard to predict the future and to regulate on the basis of what is already there. We have waited a long time for the Bill, and in that time we have had new platforms and new emerging technology appear. How confident are you that the Bill allows for future-proofing, in order that we can react to anything new that might crop up on the internet?

Adam Hildreth: I helped personally in 2000 and 2001, when online grooming was not even an offence in law, so I have been involved in this for an awfully long time, waiting for laws to exist. I do not think we will ever be in a situation in which they are future-proofed if we keep putting every possibility into law. There need to be some principles there. There are new features launched every day, and assessments need to be made about who they pose a risk to and the level of risk. In the same way as you would do in all kinds of industries, someone should do an assessment from a health and safety perspective. From that, you then say, “Can we even launch it at all? Is it feasible? Actually, we can, because we can take this amount of risk.” Once they understand those risk assessments, technology providers can go further and develop technology that can combat this.

If we can get to the point where it is more about process and the expectations around people who are creating any types of online environments, apps or technologies, it will be future-proofed. If we start trying to determine exact pieces of content, what will happen is that someone will work out a way around it tomorrow, and that content will not be included in the Bill, or it will take too long to get through and suddenly, the whole principle of why we are here and why we are having this discussion will go out the window. That is what we have faced every day since 1998: every time the technology works out how to combat a new risk—whether that is to children, adults, the economy or society—someone comes along and works out a way around the technology or around the rules and regulations. It needs to move quickly; that will future-proof it.

The Chair

I have four Members plus the Minister to get in, so please be brief. I call Dean Russell.

--- Later in debate ---
The Chair

We will now hear from Rhiannon-Faye McDonald, victim and survivor advocate at the Marie Collins Foundation, and Susie Hargreaves, chief executive at the Internet Watch Foundation. Thank you for joining us this afternoon; first question, please.

Alex Davies-Jones

Q Thank you both for joining us this afternoon. One of the key objectives of the legislation is to ensure that a high level of protection for children and adults is in place. In your view, does the Bill in its current form achieve that?

Susie Hargreaves: Thank you very much for inviting me today. I think the Bill is working in the right direction. Obviously, the area that we at the IWF are concerned with is child sexual abuse online, and from our point of view, the Bill does need to make a few changes in order to put those full protections in place for children.

In particular, we have drafted an amendment to put co-designation on the face of the Bill. When it comes to child sexual abuse, we do not think that contracting out is an acceptable approach, because we are talking about the most egregious form of illegal material—we are talking about children—and we need to ensure that Ofcom is not just working in a collaborative way, but is working with experts in the field. What is really important for us at the moment is that there is nothing in the Bill to ensure that the good work that has been happening over 25 years in this country, where the IWF is held up as a world leader, is recognised, and that that expertise is assured on the face of the Bill. We would like to see that amendment in particular adopted, because the Bill needs to ensure that there are systems and processes in place for dealing with illegal material. The IWF already works with internet companies to ensure that they take up technical services.

There needs to be strong integration with law enforcement—again, that is already in place with the memorandum of understanding between the CPS, the National Police Chiefs’ Council and the IWF. We also need clarity about the relationship with Ofcom so that child sexual abuse, which is such a terrible situation and such a terrible crime, is not just pushed into the big pot with other harms. We would like to see those specific changes.

Rhiannon-Faye McDonald: Generally, we think the Bill is providing a higher standard of care for children, but there is one thing in particular that I would like to raise. Like the IWF, the Marie Collins Foundation specialises in child sexual abuse online, specifically the recovery of people who have been affected by child sexual abuse.

The concern I would like to raise is around the contextual CSA issue. I know this has been raised before, and I am aware that the Obscene Publications Act 1959 has been brought into the list of priority offences. I am concerned that that might not cover all contextual elements of child sexual abuse: for example, where images are carefully edited and uploaded to evade content moderation, or where there are networks of offenders who are able to gain new members, share information with each other, and lead other people to third-party sites where illegal content is held. Those things might not necessarily be caught by the illegal content provisions; I understand that they will be dealt with through the “legal but harmful” measures.

My concern is that the “legal but harmful” measures do not need to be implemented by every company, only those that are likely to be accessed by children. There are companies that can legitimately say that the majority of their user base is not children, and therefore would not have to deal with that, but that provides a space for this contextual CSA to happen. While those platforms may not be accessed by children as much as other platforms, it still provides a place for this to happen—the harm can still occur, even if children do not come across it as much as they would elsewhere.

Alex Davies-Jones

Q On that point, one of the concerns that has been raised by other stakeholders is about the categorisation of platforms—for example, category 1 and category 2B platforms have different duties placed on them under Ofcom as the regulator. Would you prefer to see a risk-based approach to platforms, rather than categorisation? What are your thoughts on that?

Susie Hargreaves: We certainly support the concept of a risk-based approach. Very little child sexual abuse content is hosted in the UK; the majority of the content we see is hosted on smaller platforms in the Netherlands and other countries. It is really important that we take a risk-based approach, which might be in relation to where the content is—obviously, we are dealing with illegal content—or in relation to where children are. Having a balance there is really important.

Alex Davies-Jones

Q A final question from me. We heard concerns from children’s charities and the Children’s Commissioner that the Bill does not account for breadcrumbing—the cross-platform grooming that happens on platforms. What more could the Bill do to address that, and do you see it as an omission and a risk?

Susie Hargreaves: I think we probably have a slightly different line from that of some of the other charities you heard from this morning, because we think it is very tricky and nuanced. What we are trying to do at the moment is define what it actually means and how we would have to deal with it, and we are working very closely with the Home Office to go through some of those quite intense discussions. At the moment, “harmful” versus “illegal” is not clearly defined in law, and it could potentially overwhelm certain organisations if we focus on the higher-level harms and the illegal material. We think anything that protects children is essential and needs to be in the Bill, but we need to have those conversations and to do some more work on what that means in reality. We are more interested in the discussions at the moment about the nuance of the issue, which needs to be mapped out properly.

One of the things that we are very keen on in the Bill as a whole is that there should be a principles-based approach, because we are dealing with new harms all the time. For example, until 2012 we had not seen self-generated content, which now accounts for 75% of the content we remove. So we need constantly to change and adapt to new threats as they come online, and we should not make the Bill too prescriptive.

The Chair

Ms McDonald?

Rhiannon-Faye McDonald: I was just thinking of what I could add to what Susie has said. My understanding is that it is difficult to deal with cross-platform abuse because of the difficulty of sharing information between different platforms—for example, where a platform has identified an issue or an offender and has not shared that information with other platforms on which someone may continue the abuse. I am not an expert in tech and cannot present you with a solution to that, but I feel that sharing intelligence would be an important part of the solution.

--- Later in debate ---
The Chair

Finally this afternoon, we will hear from Ellen Judson, who is the lead researcher at the Centre for the Analysis of Social Media at Demos, and Kyle Taylor, who is the founder and director of Fair Vote. Thank you for joining us this afternoon.

Alex Davies-Jones

Q Thank you both for joining us, and for waiting until the end of a very long day. We appreciate it.

There is a wide exemption in the Bill for the media and for journalistic content. Are you concerned that that is open to abuse?

Kyle Taylor: Oh, absolutely. There are aspects of the Bill that are extremely worrying from an online safety perspective: the media exemption, the speech of democratic importance exemption, and the fact that a majority of paid ads are out of scope. We know that a majority of harmful content originates from or is amplified by entities that meet one of those exceptions. What that means is that the objective of the Bill, which is to make the online world safer, might not actually be possible, because platforms, at least at present, are able to take some actions around these through their current terms and conditions, but this will say explicitly that they cannot act.

One real-world example is the white supremacist terror attack just last week in Buffalo, in the United States. The “great replacement” theory that inspired the terrorist was pushed by Tucker Carlson of Fox News, who would meet the media exemption; by right-wing blogs, which were set up by people who claim to be journalists and so would meet the journalistic standards exemption; by the third-ranking House Republican, who would meet the democratic importance exemption; and it was even run as paid ads by those candidates. In that one example, you would not be able to capture a majority of the way that harm spreads online.

Alex Davies-Jones

Q Is there a way in which the exemptions could be limited to ensure that the extremists you have mentioned cannot take advantage of them?

Ellen Judson: I think there are several options. The primary option, as we would see it, is that the exemptions are removed altogether, on the basis that if the Bill is really promoting a systems-based approach rather than focusing on individual small categories of content, then platforms should be required to address their systems and processes whenever those lead to an increased risk of harm. If that leads to demotion of media content that meets those harmful thresholds, that would seem appropriate within that response.

If the exemptions are not to be removed, they could be improved. Certainly, with regard to the media exemption specifically, I think the thresholds for who qualifies as a recognised news publisher could be raised to make it more difficult for bad actors and extremists, as Kyle mentioned, simply to set up a website, add a complaints policy, have an editorial code of conduct and then say that they are a news publisher. That could involve linking to existing publishers that are already registered with existing regulators, but I think there are various ways that could be strengthened.

On the democratic importance and journalism exemptions, I think the issue is that the definitions are very broad and vague; they could easily be interpreted in any way. Either they could be interpreted very narrowly, in which case they might not have much of an impact on how platforms treat freedom of expression, as I think they were intended to do; or they could be interpreted very broadly, and then anyone who thinks or who can claim to think that their content is democratically important or journalistic, even if it is clearly abusive and breaches the platform’s terms and conditions, would be able to claim that.

One option put forward by the Joint Committee is to introduce a public interest exemption, so that platforms would have to think about how they are treating content that is in the public interest. That would at least remove some of the concerns. The easiest way for platforms to interpret what is democratically important speech and what is journalistic speech is based on who the user is: are they a politician or political candidate, or are they a journalist? That risks them privileging certain people’s forms of speech over that of everyday users, even if that speech is in fact politically relevant. I think that having something that moves the threshold further away from focusing on who a user is as a proxy for whether their speech is likely to deserve extra protection would be a good start.

Kyle Taylor: It is basically just saying that content can somehow become less harmful depending on who says it. A systems-based approach is user-neutral, so its only metric is: does this potentially cause harm at scale? It does not matter who is saying it; it is simply a harm-based approach and a system solution. If you have exemptions, exceptions and exclusions, a system will not function. It suggests that a normal punter with six followers saying that the election was stolen is somehow more harmful than the President of the United States saying that an election is stolen. That is just the reality of how online systems work and how privileged and powerful users are more likely to cause harm.

Alex Davies-Jones

Q You are creating a two-tier internet, effectively, between the normal user and those who are exempt, which large swathes of people will be because it is so ambiguous. One of the other concerns that have been raised is the fact that the comments sections on newspaper websites are exempt from the Bill. Do you see an issue with that?

Ellen Judson: There is certainly an issue as that is often where we see a lot of abuse and harm, such that if that same content were replicated on a social media platform, it would almost certainly be within the scope of the Bill. There is a question, which is for Ofcom to consider in its risk profiles and risk registers, about where content at scale has the potential to cause the most harm. The reach of a small news outlet’s comments section would be much less than the reach of Donald Trump’s Twitter account, for instance. Certainly, if the risk assessments are done and comments sections of news websites have similar reach and scale and could cause significant harm, I think it would be reasonable for the regulator to consider that.

Kyle Taylor: It is also that they are publicly available. I can speak from personal experience. Just last week, there was a piece about me. The comments section simultaneously said that I should be at Nuremberg 2.0 because I was a Nazi, but also that I should be in a gas chamber. Hate perpetuates in a comments section just as it does on a social media platform. The idea that it is somehow less harmful because it is here and not there is inconsistent and incoherent with the regime where the clue is in the name: the Online Safety Bill. We are trying to make the online world safer.

On media I would add that we have to think about how easy it is, based on the criteria in the Bill, to become exempt as a media entity. We can think about that domestically, but what happens when a company is only meant to enforce their terms and conditions in that country, but can broadcast to the world? The UK could become the world’s disinformation laundromat because you can come here, meet the media exemption and then blast content to other places in the world. I do not think that is something that we are hoping to achieve through this Bill. We want to be the safest place in the world to go online and to set a global benchmark for what good regulation looks like.

Alex Davies-Jones

Q I suppose, yes. Under the current media carve-out, how do you see platforms being able to detect state actors that are quoting misinformation or perpetuating disinformation on their platforms?

Ellen Judson: I think it is a real challenge with the media exemptions, because it is a recognised tactic of state-based actors, state-aligned actors and non-state actors to use media platforms as ways to disseminate this disinformation. If you can make a big enough story out of something, it gets into the media and that perpetuates the campaign of abuse, harassment and disinformation. If there are protections in place, it will not take disinformation actors very long to work out that if there are ways that they can get stories into the press, they are effectively covered.

In terms of platform enforceability, if platforms are asked to, for instance, look at their systems of amplification and what metrics they use to recommend or promote content to users, and to do that from a risk-based perspective and based on harm except when they are talking about media, it all becomes a bit fuzzy what a platform would actually be expected to do in terms of curating those sorts of content.

Kyle Taylor: As an example, Russia Today, until its broadcast licence was revoked about three months ago, would have qualified for the media exemption. Disinformation from Russia Today is not new; it has been spreading disinformation for years and years, and would have qualified for the media exemption until very recently.

Alex Davies-Jones

Q So as a result of these exemptions, the Bill as it stands could make the internet less safe than it currently is.

Kyle Taylor: The Bill as it stands could absolutely make the internet less safe than it currently is.

Kirsty Blackman

Q You have done a really good job of explaining the concerns about journalistic content. Thinking about the rest of the Bill for a moment, do you think the balance between requiring the removal of content and the prioritisation of content is right? Do you think it will be different from how things are now? Do you think there is a better way it could be done in the Bill?

Ellen Judson: The focus at the moment is too heavily on content. There is a sort of tacit equation of content removal—sometimes content deprioritisation, but primarily content removal—as the way to protect users from harm, and as the threat to freedom of expression. That is where the tension comes in with how to manage both those things at once. What we would want from a Bill that was taking more of a systems approach is thinking: where are platforms making decisions about how they are designing their services, and how they are operating their services at all levels? Content moderation policy is certainly included, but it goes back to questions of how a recommendation algorithm is designed and trained, who is involved in that process, and how human moderators are trained and supported. It is also about what functionality users are given and what behaviour is incentivised and encouraged. There is a lot of mitigation that platforms can put in place that does not talk about directly affecting user content.

I think we should have risk assessments that focus on the risks of harms to users, as opposed to the risk of users encountering harmful content. Obviously there is a relationship, but one piece of content may have very different effects when it is encountered by different users. It may cause a lot of harm to one user, whereas it may not cause a lot of harm to another. We know that when certain kinds of content are scaled and amplified, and certain kinds of behaviour are encouraged or incentivised, we see harms at a scale that the Bill is trying to tackle. That is a concern for us. We want more of a focus on some things that are mentioned in the Bill—business models, platform algorithms, platform designs and systems and processes. They often take a backseat to the issues of content identification and removal.

Kyle Taylor: I will use the algorithm as an example, because this word flies around a lot when we talk about social media. An algorithm is a calculation that is learning from people’s behaviour. If society is racist, an algorithm will be racist. If society is white, an algorithm will be white. You can train an algorithm to do different things, but you have to remember that these companies are for-profit businesses that sell ad space. The only thing they are optimising for in an algorithm is engagement.

What we can do, as Ellen said, through a system is force optimisation around certain things, or drive algorithms away from certain types of content, but again, an algorithm is user-neutral. An algorithm does not care what user is saying what; it is just “What are people clicking on?”, regardless of what it is or who said it. An approach to safety has to follow the same methodology and say, “We are user-neutral. We are focused entirely on propensity to cause harm.”

The second piece is all the mitigation measures you can take once a post is up. There has been a real binary of “Leave it up” and “Take it down”, but there is a whole range of stuff—the most common word used is “friction”—to talk about what you can do with content once it is in the system. You have to say to yourself, “Okay, we absolutely must have free speech protections that exceed the platform’s current policies, because they are not implemented equally.” At the same time, you can preserve someone’s free expression by demonetising content to reduce the incentive of the company to push that content or user through its system. That is a way of achieving both a reduction in harm and the preservation of free expression.