Online Communication Offence Arrests Debate
Lord Sarfraz (Conservative - Life peer)
Grand Committee

My Lords, I, too, congratulate the noble Lord, Lord Lebedev, on securing this timely debate. I know that he has been a great champion of the media and free speech for many years.
At the heart of this debate lies the matter of ensuring that the police have the resources, tools and training to arrest the right people, without compromising freedom of speech or privacy online. We cannot expect the 45 territorial police forces in this country suddenly to get it right. There are more than 33 social platforms with over 100 million monthly active users. Each is very different, with different interfaces, community rules and approaches to content monitoring. Expecting police officers to do their offline jobs while also monitoring non-threatening online communication is asking a great deal. To meet the challenges of the future, the police need the tools of the future. I look forward to hearing what the Minister has to say about that.
What might that look like? First, we need to get the basic technology right. The Police National Database has not been upgraded since 2019. That is a lifetime in tech; its systems are pretty much obsolete. That is the database that records data on arrests that have not led to a conviction, which goes to the very heart of the Question from the noble Lord, Lord Lebedev. If the police cannot collect and manage data efficiently, they can hardly put it to good use.
One promising area is predictive policing. A number of trials are happening around the country, and the focus is on crime prevention: for example, trying to predict where a discussion in a group is heading before it escalates. Like all technology, it has great potential but must be deployed ethically to avoid overpolicing. The platforms, too, have an important role to play, and will continue to do so.
Let us take, for example, basic content filtering. If you turn on Google’s SafeSearch, there is a pretty decent chance that you will not receive harmful content when you search, but that is much harder to achieve on a messaging platform. There is no setting on WhatsApp to block explicit, unwanted photographs from coming in. The technology exists and is being trialled on a number of platforms, but these tools remain optional and require users to opt in. Perhaps they should be on by default, requiring users to opt out instead.
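The distinction between opt-in and opt-out filtering is, at bottom, a question of defaults: whether a brand-new account is protected before the user touches any setting. A minimal sketch in Python makes the point; the settings class, field name and function here are hypothetical illustrations, not any platform’s real API:

```python
from dataclasses import dataclass


@dataclass
class FilterSettings:
    # Hypothetical per-user preference for blocking explicit images.
    explicit_image_filter: bool


def default_settings(opt_out_model: bool) -> FilterSettings:
    # Opt-out model: the filter is ON unless the user switches it off.
    # Opt-in model: the filter is OFF unless the user switches it on.
    return FilterSettings(explicit_image_filter=opt_out_model)


# Under an opt-out default, a new account is protected from day one;
# under an opt-in default, it is exposed until the user acts.
new_user = default_settings(opt_out_model=True)
print(new_user.explicit_image_filter)  # True
```

The only change between the two regimes is the default value, which is why the choice of default, rather than the existence of the tool, does most of the protective work.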
One other big area of potential is AI-powered content moderation: real-time systems that analyse text, images and video to identify non-threatening but potentially very harmful material. Several platforms are trialling this, but we do not yet have standards for deployment covering transparency, accuracy and bias mitigation. Just as we are putting technology at the heart of our defence and national security strategy, we must facilitate innovation across all forces, not just within specialist units. Only then will we have arrests that lead to conviction, and only then can we do a better job of ensuring a free and open internet.
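A moderation system of the kind described pairs an automated score with a threshold that routes content to human review. A minimal sketch, assuming a toy keyword score in place of a trained classifier; the term list, threshold and function names are illustrative assumptions only:

```python
# Toy stand-in for an AI moderation score. Real deployments use trained
# classifiers; this keyword heuristic only illustrates the score-and-
# threshold pattern, including why transparency about the threshold matters.

FLAGGED_TERMS = {"scam", "dox", "harass"}


def moderation_score(text: str) -> float:
    # Fraction of flagged terms present in the message (0.0 to 1.0).
    words = set(text.lower().split())
    return len(words & FLAGGED_TERMS) / len(FLAGGED_TERMS)


def should_review(text: str, threshold: float = 0.3) -> bool:
    # Route to human review when the score crosses the threshold,
    # keeping a person in the loop rather than auto-removing content.
    return moderation_score(text) >= threshold


print(should_review("please dox and harass him"))  # True
print(should_review("lovely weather today"))       # False
```

The threshold is where the transparency, accuracy and bias questions live: set it too low and benign speech is swept into review; set it too high and harmful content passes untouched.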