Data Protection and Digital Information Bill Debate

Department: Department for Digital, Culture, Media & Sport

Marcus Fysh Excerpts
On the whole, there is far too much in the Bill, and far too little time to interrogate it all properly. Removing some of the most pernicious clauses or making amendments here and there would fundamentally do little to reduce the many risks that the Bill presents to individuals’ rights to privacy and to have their data protected from prying eyes—in Government or elsewhere—and the costs and pressures on businesses and third-sector organisations trying to comply with the regime. The Government have tabled significant new powers at the last minute through new clauses and schedules that, by definition, cannot have had the scrutiny they deserve. As such, although in principle we can support sensible amendments from both sides of the House, we will oppose many of the Government’s new clauses and schedules, and—especially given that the House has decided not to recommit the Bill for further scrutiny—I expect we will also oppose it on Third Reading.
Mr Marcus Fysh (Yeovil) (Con)

It is a pleasure to follow the hon. Members who have spoken in this very important debate. I declare an interest: I am the chair of the all-party parliamentary group on digital identity, so I have a particular interest in the ramifications of data as it relates to identity, but also in wider concepts—some of which we have heard about today—such as artificial intelligence and how our data might be used in the future.

I share quite a lot of the concerns that we have heard from both sides of the House. There is an awful lot more work to be done on the detail of the Bill, thinking about its implications for individuals and businesses; how our systems work and how our public services interact with them; and how our security and police forces interact with our data. I hope that noble Members of the other place will think very hard about those things, and I hope my right hon. Friend the Minister will meet me to discuss some of the detail of the Bill and any useful new clauses or amendments that the Government might introduce in the other place. I completely agree that we do not have much time today to go through all the detail, with a substantial number of new clauses having been added in just the past few days.

I will speak specifically to some of the amendments that stand in my name. Essentially, they fall into two groups: one deals with the operation of the trust framework for the digital verification service, which I will come back to, and the other concerns the Henry VIII-style powers that the Bill gives to Ministers. Those powers fundamentally alter the balance, in place since I was elected as a Member of Parliament, in how individuals and their data relate to the state.

On artificial intelligence, we are at a moment in human evolution where the decisions that we make—that scientists, researchers and companies make about how they use data—are absolutely fundamental to the operation of so many areas of our lives. We need to be incredibly careful about what we do to regulate AI and think about how it operates. I am concerned that we have large tech companies whose business model for decades has been nothing other than to use people’s data to create products for their own benefit and that of their shareholders. During the passage of the Online Safety Act 2023, we debated very fully in this House what the implications of the algorithms they develop might be for our children’s health, for example.

I completely agree with the Government that we should be looking for ways to stamp out fraud, and should think about how harms of various kinds are addressed. However, we need to be mindful of the big risk that fears and beliefs about potential harms that are not necessarily well founded might lead us to regulate, or to steer the operations of companies and others, in ways that create real problems. We are talking about very capable artificial intelligence systems, and also about artificial intelligence systems that claim to be very capable but are inherently flawed. The big tech companies are almost all championing and sponsoring large language models: artificial intelligence systems trained on data. Those companies will lobby Ministers all the time, saying, “We want you to enable us to get more and more of people’s data,” because that data is of business value to them.

Given the Henry VIII powers that exist in the Bill, there is a clear and present danger that future Ministers—I would not cast aspersions on the current, eminent occupant of the Front Bench, who is a Wykehamist to boot—may be tempted or persuaded in the wrong direction by the very powerful, data-driven interests of those big tech firms. As such, my amendments 278 and 279 are designed to remove from the Bill what the Government are proposing: effectively, that Ministers will have the power to totally recategorise what kinds of data can legitimately be shared with third parties of one kind or another. As I mentioned, that fundamentally changes the balance between individuals and the state.

Through amendment 280 and new schedule 3, I propose that when Ministers implement the trust framework within the digital verification service, that framework should be based on principles that have been accepted for the eight years since I was elected—in particular, those the Government used in establishing the framework around their Verify online identity service for public services. In the context of the Bill, those principles should frame what decision-makers take into account. That system of principles has been through consultation, is broadly accepted, and is one that the ICO accepts and champions; it would be entirely right, and not at all a divergence from our current system, to put those principles in place.

What I would say about the legitimate interest recognition extension—the Henry VIII power—is that there are already indications in the Bill about what will be recategorised. They give an idea of just how broad the categorisations could be, and therefore how dangerous it will be if that process is not followed or is not correctly framed—for example, in relation to direct marketing. Direct marketing can mean all sorts of things, but it is essentially any form of direct advertising, in any medium, that uses personal data to target individuals, and I think it is really dangerous to take such a broad approach to it.

Before companies share or use data, they should have to weigh a legitimate interest against the data rights, privacy rights and all the other rights that people may have in relation to their data. We do not want to give them a loophole or a way out of having to make that judgment. I am very pro-innovation and pro-efficiency, but I do not believe it is inefficient for companies and users or holders of data to have to make those basic balancing judgments; it is no skin off their nose at all. This is something we should uphold, because these interests are vital to our human condition. The last thing we want is an artificial intelligence model—a large language model—making decisions about us, serving us content based on our personal data and even leaking that personal data.

I highlight that only yesterday or the day before, a new academic report showed that some of the large language models were leaking personal data on which they had been trained, even though the companies say that is impossible. The researchers had managed to get around the alignment guardrails that these AI companies said they had in place, so we cannot necessarily believe what the big tech companies tell us about how these things will behave. At the end of the day, large language models, which are built on nothing more than statistics and correlations, cannot tell us why they have done something or explain the chain of causality behind it, and they inherently get things wrong. Anyone claiming that they are reliable, or can be relied on to handle personal data, is, I think, completely wrong. I hope that noble Lords and Ladies will think carefully about that matter and re-table amendments similar to mine.

New clause 27 and the following new clauses that the Government have tabled on interface bodies show the extent to which these new systems, the decisions about them, and the ways they interface with different public services and other bodies are totally extensible within the framework of the Bill, without further regard to minorities or to law, except in so far as an individual or a company may have a case for judicial review. That really is the only safeguard there will be under these Henry VIII clauses. The interface body provisions talk about authorised parties being able to share data.

We have heard how bad the cookie system currently is at reflecting what individuals’ true preferences about their personal data might or might not be. It is worth highlighting the thoughtful comments we heard earlier about ways in which people could make more of a real-time decision about particular issues that are relevant to them, but about which they may not have thought at all when they authorised such a decision in a dark or unthinking moment, often some time before.