Sub-Committee on Disinformation

Debate between Khalid Mahmood and Damian Collins
Thursday 4th April 2019


Westminster Hall


Damian Collins

The hon. Lady raises a number of very important issues. Co-operation with the authorities is important. We have seen too many cases where different social media companies have been criticised for not readily sharing information with the police as part of an investigation. Often the companies have very narrow terms of reference for when they would do that; sometimes if there is an immediate threat to life or if information might be related to a potential terror attack, they will act. However, we see hideous crimes that affect families in a grievous way and those families want the crimes to be investigated efficiently and speedily, and for the police to get access to any relevant information. I think we would have to say that the current system is not working effectively enough and that more should be done.

There should be more of an obligation on the companies to share proactively with the authorities information that they have observed. They might not have been asked for it yet, but it could be important or relevant to a police investigation. Part of, if you like, the duty of care of the tech companies should be to alert the relevant authorities to a problem when they see it and not wait to be asked as part of a formal investigation. Again, that sort of proactive intervention would be necessary.

I also share a general concern, in that I believe tech companies could do more to observe behaviour on their platforms that could lead to harm. That includes the risk that a vulnerable person accessing certain content might be led towards a pattern of self-harm. Indeed, one of the particular concerns that emerged from the Molly Russell case was the content she was engaging with on Instagram.

The companies should take a more proactive responsibility to identify people who share content that may lead to the radicalisation of individuals or encourage them to commit harmful acts against other citizens. I think the companies have the power to identify that sort of behaviour online, and there should be more of an obligation on them to share their knowledge of that content when they see it.

Mr Khalid Mahmood (Birmingham, Perry Barr) (Lab)

It is always a pleasure to serve under your stewardship, Mr Gapes.

The Committee has produced an absolutely superb and detailed report, and it is to be welcomed. It raises serious issues about the power of the platform providers and their failure to use the powers they have to identify people and to act on that information. That is very important. The Government should consider how to tackle the people who put this material on these platforms, and should get the providers to work through these issues with them to stop the false information that is being put up.

This issue affects huge numbers of people because, as the Chair of the Select Committee said, many people take such information as gospel, since most of their media input comes from social media. I urge the Government to look at this issue seriously and to consider how we can push the social media platform providers to respond better and to remove false media reports that are put online.

Damian Collins

The hon. Gentleman is absolutely right. One of the issues at the heart of this, and it comes up again and again throughout our report, is the obligations of the tech companies. A social media platform is not necessarily the publisher of content, which has been posted there by a user of the platform. However, the social media company can observe everything that is going on, and it curates the content as well.

When someone goes on social media, if they just saw what their friends had posted most recently, that would be one thing, but because social media algorithms direct users towards particular content, we are concerned not only that harmful content can exist, but that when individuals start to engage with it, they are directed to even more of it. I think that we should not only consider the responsibilities of the tech companies to remove harmful content when it is posted, but question the ethics of algorithms and systems that can direct people towards harmful content.