Lord Vaux of Harrowden: four debates involving the Department for Digital, Culture, Media & Sport

Clearview AI Inc

Tuesday 5th July 2022


Lords Chamber
Lord Parkinson of Whitley Bay (Con)

I have seen those reports in the media. I know that your Lordships’ House takes great interest in ensuring that the companies whose hardware is purchased are those that the people of this country would want it to be purchased from.

Lord Vaux of Harrowden (CB)

My Lords, has the Minister seen the report into biometrics generally by Matthew Ryder QC, on behalf of the Ada Lovelace Institute? Does he agree with the overriding recommendation that we need a new framework governing the use of biometrics?

Lord Parkinson of Whitley Bay (Con)

We have seen but not yet assessed all of Mr Ryder’s recommendations. However, the current framework offers strong protections, and the action taken by the ICO in this case is a demonstration of that.

Telecommunications (Security) Bill

Lord Vaux of Harrowden (CB)

My Lords, it is a pleasure to follow the noble Baroness, Lady Stroud. I find myself in agreement with everything that she said.

Anything that improves the security of our telecommunications systems must be welcome, so I support this Bill, but I think it misses a golden opportunity. Telecommunications security covers a wide range of risks: from the resilience of the system to risks such as weather or power outages, through resilience to malicious attacks from hostile states or criminals, to the misuse of systems to access, alter or destroy data. From a consumer point of view, all those are really important, but the one security risk that impacts on people’s daily lives the most is the misuse of telecommunications networks and services by criminals and, apparently, by certain states, to facilitate fraud.

I explained during Second Reading of the Online Safety Bill that fraud is so widespread because it is easy, and it is easy because there is no incentive for a whole range of service providers to take the necessary steps to stop it. Those service providers include the search engines and social media companies, web-hosting companies, banks and more, but the list also includes telecommunications companies, which in effect facilitate fraud through three key weaknesses.

First, the most serious weakness is when a criminal is able to convince the service provider to transfer someone’s phone number so that they can control it. This is known as SIM-swap fraud, which gives the criminal complete access to the victim’s emails, bank accounts, one-time passwords, contacts and so on. Indeed, with the ever-growing list of things that we can access and control from our phones, it could also give access to our front-door locks, our burglar alarms, our cars, which can now be unlocked and started by phone, and more. In fact, imagine the possibilities for criminals once we have genuinely self-driving cars all connected by 5G.

The second security weakness that telecommunications companies are allowing is the falsifying of caller IDs, when a criminal is able to appear to be calling or texting from a legitimate number, such as a bank or HMRC. As a result, the victim, believing the call to be genuine, is persuaded to provide bank details or transfer money.

The third security issue is allowing criminals to send out bulk malicious texts and calls using the networks, often in conjunction with false caller IDs. We are all bombarded with these all the time. I received one that I had not heard before just this morning; apparently, my national insurance number is being used for criminal purposes, and I must call the number or I shall have my assets seized and be arrested—so there we go. The calls can lead to fraud being perpetrated, and texts can include links that result in malware being loaded on to the victim’s phone, which allows access to emails and bank accounts. As well as fraud, they cause very real anxiety, yet we seem to have to accept them as an irritant of modern life. I probably receive more fraud calls than genuine ones, which might be a reflection on my social life. I have not been able to find any reliable statistics, but it seems that at least a material proportion of all calls and texts made over the networks are fraudulent.

This Bill seems to be a perfect opportunity to try to make life harder for the criminals who are exploiting mobile phone networks and services to perpetrate fraud. The best way in which to do this is to provide a real incentive for the telecommunications providers to prevent it; they should be liable for the penalties—although I hesitate to use that word, given what is happening in an hour or so—and for the losses incurred as a result of allowing the service to be misused, unless they have taken reasonable action to prevent it. At the moment, it is arguably in the telecommunications companies’ interests to allow the activities to continue, as they are being paid by the criminals for all the calls and texts.

Reading the Bill, I find myself unsure whether it covers these types of risk. I understand from a letter that I received from the Minister earlier today that it is not intended to, although I think that it could with not much change. Her letter, for which I am grateful, refers only to the issue of fraudulent calls and texts; it does not cover the other risks that I have mentioned. Clause 1 introduces a duty on communications networks and service providers to take measures to identify and reduce the risks of security compromises occurring. It then goes on to define what a security compromise is, with a pretty wide range of definitions. Among them, new subsection (2)(f) refers to

“anything that occurs in connection with the network or service and causes any data stored by electronic means to be … lost … unintentionally altered; or … altered otherwise than by or with the permission of the person holding the data”.

As far as I can see, nothing in the Bill limits security compromises to those that come from hostile states, and that is a good thing, since a security compromise could well come from criminals. The risks that I have described do occur in connection with the network or the service, and they may cause electronically stored data to be lost or altered. So on my first reading, it appears that the risks that I have described may be covered, or could easily be covered, in the Bill if a suitable code of practice were issued.

In passing, on that subject, I share the concerns raised by the Delegated Powers and Regulatory Reform Committee that the codes of practice will not be subject to meaningful parliamentary scrutiny.

If the security risks that I have described are not intended to be covered by the Bill, we are missing a golden opportunity to make it harder for criminals to use our communications networks and services to perpetrate fraud on consumers. The Government are planning to produce a fraud action plan, but not until after the spending review. In the meantime, people will continue to lose their money, with all the mental and personal impacts that brings. It may not currently be intended to do this, but this Bill with very little change could be used to cut off one of the major facilitators of fraud with very little delay. Would the Minister be willing to consider how the Bill could be amended to meet that goal, and would she be willing to meet to discuss what actions we can take to safeguard users of the services from criminal misuse of telecommunications networks or services?

Regulating in a Digital World (Communications Committee Report)

Wednesday 12th June 2019


Lords Chamber
Lord Vaux of Harrowden (CB)

My Lords, digital regulation is an incredibly complex subject, as we have heard, and it covers a wide range of diverse areas, so I am very grateful to the committee and to the noble Lord, Lord Gilbert, for producing this comprehensive report. I will focus tonight on data, and I apologise now to the noble Lord, Lord Inglewood, because I am going to get a little bit into the nuts and bolts. In doing so, I am going to concentrate principally on Google, but some of the issues that I raise apply to a greater or lesser extent to other platforms.

Google is the world’s largest digital advertising company but it also provides the world’s leading browser, Chrome; the leading mobile phone operating platform, Android; and the dominant search engine. Its Chromebook operating system, while smaller, is growing fast, and it offers myriad other services to the consumer, such as Gmail, YouTube, Google+, Maps, Google Home and so on. These services are mostly provided to the consumer for free, and in return Google uses them to collect detailed information about people’s online and real-world behaviour, which it then uses to target them with paid advertising.

It collects data in two principal ways. First, active collection is where you are communicating directly with Google—for example, when you sign in to a Google account and use its applications. When you are signed in, the data collected is connected to your account, in your name. Secondly, Google applies a passive-collection approach. This happens when you are not signed in to a Google service but the data is collected through the use of the Google search engine, and through various advertising and publishing tools that use cookies and other techniques to track you wherever you go on the net—or indeed physically. It can still track your device location even if you are not an Android user. Do not think that avoiding all Google software will help; most websites have Google tools embedded in them and will place Google cookies on your device regardless.

The sheer quantity of data that Google collects every day is staggering. A recent study by Professor Douglas Schmidt of Vanderbilt University simulated the typical use of an Android phone and found that the phone communicated 11.6 megabytes of user data to Google per day—that is just one device in one day. As an aside, the phone is using your data allowance; you are paying for it to send all this data back to Google. The experiment further showed that even if a user does not interact with any key Google applications, Google is still able to collect considerable data through its tools and by using less visible tracking techniques.

The greatest safeguard over the collection and use of data has to be transparency. As users, we need to understand what is being collected, by whom and what for, and we need the ability to stop it and delete it if we wish. The GDPR and the Data Protection Act represent a step forward but it is already becoming clear that they may not be sufficient for the fast-moving digital world. How many people really understand what Google or indeed any other platform is collecting about them? This is going to become even more important as 5G and the internet of things take off.

As part of the right to be informed under the GDPR, websites now need to ask for consent to use cookies. However, as we have all seen, the consent pop-ups usually say something general such as, “Cookies are used to improve and personalise our services”. It remains very difficult to find out precisely what data is being transferred, to whom and why. This is then complicated further by the fact that accepting cookies on a site usually means accepting not only the cookies for the site concerned but also third-party cookies, including Google’s. Amazon, for example, lists 46 third parties that may set cookies when you use Amazon services, with no clear explanation of what each is doing, or what the relationship is. This is not transparent. Remember, cookies are only one way to collect the data. Others are less visible, such as browser fingerprinting. Blocking cookies does not stop data being collected.

GDPR also gives us the right to obtain the data that is held on us, but there are a number of problems. First, it is hard to know who has your data, because of the many third parties I have spoken about, with which you have no direct relationship but are collecting data on you. Secondly, only data deemed personally identifiable will be provided. In Google’s case, this includes only the data that it has collected using the active process I described earlier when you are logged into a Google service. However, as Professor Schmidt’s study showed, the majority of the data Google collects comes from the passive collection method. This data is described as user-anonymous, being linked to different identifiers, such as your device or browser ID; but if you log into a Google service from the same device or browser, either before or afterwards, Google is able to link it to your account.

Thirdly, as the committee’s report points out, the data that must be provided in response to a request does not include the behavioural information that derives from your data. I strongly agree with the committee’s conclusion that this behavioural information should be made available to the subject. I further urge the ICO to look more closely at whether cookie consent requests really meet the right to be informed, and to consider whether data that the platforms describe as user-anonymous are really anything of the sort. There should also be a requirement to provide details of any data that has been provided to third parties, and to provide details of third parties that have been allowed to collect data through one’s website. Does the Minister agree with these suggestions?

The second issue that arises from the way data is collected is one of conflict of interest and market power. I have described the volume of data collected by Google. This is hugely facilitated when the operating system and browser of your phone or computer are provided by Google. In effect, this means that your device is not working for you or protecting your interests; it is working for Google, helping it to obtain your data. Google’s dominance in both browser and phone operating systems strengthens a network effect that has assisted its rise as one of the data monopolies, making it hard for others to break into the market and compete. There has been talk of splitting up these data monopolies, and there must be an argument for somehow separating the activities of providing operating systems and browsers from those of data collection and advertising. At the very least, we should insist on mandatory standards of user protection and transparency to be built into such operating systems and browsers. Doing this would ensure that the software works to protect the interests of the user, not the interests of the advertiser. This would be a strong step towards,

“data protection by design and by default”.

I continue to agree that the CMA should look into the digital advertising market, as repeated in the report, and urge that the structural conflict I have just described be considered as part of that. I am very sorry that the noble Lord, Lord Tyrie, has had to pull out of this debate. It would have been very good to have heard what he had to say on the subject. I urge the Minister to encourage the CMA to take a look.

In conclusion, I have suggested that the ICO should look into one element and that the CMA should review another—both elements are related. I think this emphasises the need for an expert digital authority, as the committee recommends, if only to act as gatekeeper and make sure that issues do not fall between the cracks.

Social Media: News

Thursday 11th January 2018


Lords Chamber
Lord Vaux of Harrowden (CB)

My Lords, I add my congratulations to my noble friend Lady Kidron on securing this important and timely debate. The social media industry is evolving very quickly and, as we have heard already, reality has overtaken the traditional ways of looking at news and publishing.

The large social media companies have become an important source of news for many people—indeed, for younger people, it seems they have already become the main source of news. A small handful of social media companies now have a dominant position and are driving advertising away from traditional news outlets. This dominance has been strengthened further as a result of consolidation amongst the big players, such as Google’s acquisition of YouTube or Facebook’s acquisitions of Instagram and WhatsApp. Indeed, Google and Facebook now command a level of dominance over the media industry and advertising revenues that Rupert Murdoch could only ever dream about—Facebook has more than 2 billion active users.

Increasingly, these social media companies are actually determining what news we see. Whether this is purely by algorithm or by human intervention makes no difference—they are still choosing the stories that we read. We have to question the extent to which advertising, both overt and covert, influences what the social media companies show us. Sensationalist “fake news” stories generate more hits and therefore more advertising revenues. There is little commercial incentive for these largely unregulated companies to police this, and their record of doing so, so far, is very poor. That said, there are some welcome signs that the big players are starting to understand that they have responsibilities. The arguments made by the social media companies that they are simply technology companies with no responsibility for what happens on their platforms are looking increasingly threadbare. Some regulation is, unfortunately, now necessary.

However, there is a spectrum here. Should a closed family WhatsApp group be regulated in the same way as a curated newsfeed? Should a small specialist chatroom, run by enthusiasts, discussing, say, hockey, be regulated as a newspaper? I would suggest not. We need to find a balance. Regulation as news and content publishers only solves part of the problem. For example, filter bubbles exist just as strongly in traditional media: many people read just one newspaper of a particular political colour. Education to encourage young people to question what they are reading is therefore, I believe, of fundamental importance. Social media can be a force for good here and provide access to a greater variety of sources, if done properly.

A key question from my point of view is why many people seem willing to behave online in a way that they would never do to people’s faces—bullying, hate speech, trolling, even death threats. I suspect that this may be due, at least in part, to the culture of anonymity that pervades social media. I do not have an easy answer to that. There is a strong argument that anonymity is important for freedom of speech, particularly in situations where dissent is dangerous. But it seems to me that, in addition to recognising the reality that the social media giants have become, in part, publishing companies, we also need to look very closely at the question of online anonymity.