Online Anonymity and Anonymous Abuse

Wednesday 24th March 2021

Commons Chamber
Damian Collins (Folkestone and Hythe) (Con) [V]

We should not have to tolerate online abuse targeting individuals with dehumanising language and threats of injury. Anonymity on social media has provided some people with a platform to abuse others in this way. There are clearly two parties at fault: the person who creates the hateful content, and the platform that both lets them do it and allows what they have posted to remain visible. Making people reveal their true identity to a tech company when creating a social media account would, I believe, make it more likely that they will treat other users with respect. These tech platforms also need to co-operate more effectively and transparently with investigations into the behaviour of their users.

Social media companies too often mistake harmful hate speech for legitimate freedom of expression. A recent report in The Guardian, based on internal moderator guidelines reportedly leaked to the newspaper, suggested that Facebook treats public figures as permissible targets for certain types of abuse, including calls for their death. More needs to be done not just to take down harmful content, but to ensure that social media companies do not amplify it through their systems. No one has a freedom of expression right to be promoted on TikTok’s “For You” page or in the Facebook news feed. An internal company report in 2016 told Facebook that 64% of people who joined groups sharing extremist content did so at the prompting of Facebook’s own recommendation tools. Another internal report, from August last year, noted that 70% of the top 100 most active civic groups in the USA were considered non-recommendable because of issues such as hate, misinformation, bullying and harassment.

The business model of social media companies is based around engagement, meaning that people who engage in and with abusive behaviour will see more of it, because that is what the platform thinks they are interested in. When we talk about regulating harmful content online, we are mainly talking about that model and the money these companies make from all user-generated content as long as it keeps people on their site. Content that uses dehumanising language to attack others is not only hurtful to the victim but more likely to encourage others to do the same.

When Parliament debates the online harms Bill later this year, we will have to remind ourselves of the real-world consequences of abuse on social media. In Washington DC on 6 January, we saw an attempted insurrection at the US Capitol, fuelled by postings on Facebook, which caused the deaths of five people. In the UK, we have seen significant increases in recorded hate crimes over the past 10 years, suicide rates are at a 20-year high, and over the past six years the number of hospital admissions for self-injury among pre-teens has doubled. Arrests for racist and indecent chanting at football grounds more than doubled between 2019 and 2020, even though hundreds of matches were cancelled or played behind closed doors. These issues are too serious to be left to the chief executives of the big tech companies alone. Those people need to recognise the harm that their systems can create in the hands of people and organisations intent on spreading hate and abuse. We need to establish the standards that we expect them to meet, and to empower the regulatory institutions we will need to ensure that they are doing so.