AI Safety

Iqbal Mohamed Excerpts
Wednesday 10th December 2025


Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not comprise part of the official record

Iqbal Mohamed (Dewsbury and Batley) (Ind)

I beg to move,

That this House has considered AI safety.

It is a pleasure to serve with you in the Chair, Ms Butler, and it is an honour and a privilege to open this really important debate. Artificial intelligence is the new frontier of humanity. It has become the most talked about and invested in technology on our planet. It is developing at a pace we have never seen before; it is already changing how we solve problems in science, medicine and industry; and it has delivered breakthroughs that were simply out of reach a few years ago. The potential benefits are real, and we are already seeing them; however, so are the risks and the threats, which is why we are here for this debate.

I thank my colleague Aaron Lukas, as well as Axiom, the author of the book “Driven to Extinction: The Terminal Logic of Superintelligence”, and Joseph Miller and Jonathan Bostock from PauseAI for their help in preparing for this debate. I encourage all MPs to read the briefing they have been sent by PauseAI. AI is a very broad subject, but this debate is focused on AI safety—the possibility that AI systems could directly harm or kill people, whether through autonomous weapons, cyber-attacks, biological threats or escaping human control—and what the Government can do to protect us all. I will share examples of the benefits and opportunities, and move on to the real harms, threats and risks—or, as I call them, the good, the bad and the potential end of the world.

On the good, AI systems in the NHS can analyse scans and test results in seconds, helping clinicians to spot serious conditions earlier and with greater accuracy. They are already being used to ease administrative loads, to improve how hospitals plan resources, to help to shorten waiting lists and to give doctors and nurses the time to focus on care rather than paperwork. The better use of AI can improve how Government services function. It can speed up the processing of visas, benefits, tax reviews and casework. It offers more accurate tools for detecting fraud and protecting public money. By modelling transport, housing and energy demand at a national scale, it can help Departments to make decisions based on evidence that they simply could not gather on their own. AI can also make everyday work across the public sector more efficient by taking on routine work and allowing civil servants to focus on the judgment, problem solving and human decisions that no system can replace.

AI has already delivered breakthroughs in science and technology that were far beyond our reach only a few years ago. Problems once thought unsolvable are now being cracked in weeks or even days. One of the clearest examples is the work on protein folding, for which the 2024 Nobel prize for chemistry was awarded—not to chemists, but to AI experts John Jumper and Demis Hassabis at Google DeepMind. For decades scientists struggled to map the shapes of key proteins in the human body; the AI system AlphaFold has now solved thousands of them. A protein structure is often the key to developing new treatments for cancers, genetic disorders and antibiotic-resistant infections. What once took years of painstaking laboratory work can now be done in hours.

We are beginning to see entirely new medicines designed by AI, with several AI-designed drug candidates already reaching clinical trials for conditions such as fibrosis and certain cancers. I could go on to list many other benefits, but in the interests of time I will move on to the bad.

Alongside the many benefits, we have already seen how AI technology can cause real harm when it is deployed without care or regulation. In some cases, the damage has come from simple oversight; in others, from deliberate misuse. Either way, the consequences are no longer theoretical; they are affecting people’s lives today. In November 2025, Anthropic revealed the first documented large-scale cyber-attack driven almost entirely by AI, with minimal human involvement. A Chinese state-sponsored group exploited Anthropic’s Claude AI to conduct cyber-espionage on about 30 global targets, including major tech firms, financial institutions and Government agencies, with the AI handling 80% to 90% of the intrusion autonomously. Anthropic has warned that barriers to launching sophisticated attacks have fallen dramatically, meaning that even less experienced groups can carry out attacks of this kind.

Mental health professionals are now treating “AI psychosis”, a phenomenon in which individuals develop or experience worsening psychotic symptoms in connection with AI chatbot use. Documented cases include delusions, such as the conviction that AI holds the answers to the universe, and paranoid schizophrenia. OpenAI disclosed that approximately 0.07% of ChatGPT users exhibit signs of mental health emergencies each week. With 800 million weekly users, that amounts to roughly 560,000 people affected every week.

Dr Danny Chambers (Winchester) (LD)

On that point, I was alarmed to hear that one in three adults in the UK has relied on AI chatbots to get mental health advice and sometimes treatment. That is partly due to the long waiting lists and people looking for alternatives, but it is also due to a lack of regulation. These chatbots give potentially dangerous advice, sometimes giving people with eating disorders advice on how to lose even more weight. Does the hon. Member agree that this needs to be controlled by better regulation?

Iqbal Mohamed

I completely agree. We have to consider the functionality available in these tools and the way they are used—wherever regulations exist for that service in our society, the same regulations should be applied to automated tools providing that service. Clearly, controlling an automated system will be more difficult than training healthcare professionals and auditing their effectiveness.

Dame Chi Onwurah (Newcastle upon Tyne Central and West) (Lab)

I congratulate the hon. Member on securing this really important debate. It is certainly the case that UK law applies to AI, just as it applies online. The question is whether AI requires new regulation specifically to address the threats and concerns surrounding AI. We refrained from regulating the internet—and I should declare an interest, having worked for Ofcom at the time—in order to support innovation. Under consecutive Conservative Governments, there was a desire not to intervene in the market. The internet has largely been taken over by large consolidated companies and does not have the diversity of innovation and creativity or the safety that we might want to see.

Iqbal Mohamed

The enforcement processes that we have for existing regulations, where human beings provide the service, are auditable. We do not have equivalent enforcement mechanisms when such regulated services or information are provided over the internet or by AI tools. There is a need to extend not only the scope of regulation but also the way in which we enforce it for automated tools.

I am a fan of innovation, growth and progress in society. However, we cannot move forward with progress at any cost. AI poses such a significant risk that if we do not regulate at the right time, we will not have a chance to get it back under control—it might be too late. Now is the time to start looking at this seriously and supporting the AI industry so that it is a force for good in society, not a future force of destruction.

We are all facing a climate and nature emergency. AI is driving unprecedented growth in energy demand. According to the International Energy Agency, global data-centre electricity consumption is projected to slightly exceed Japan’s total electricity consumption today. A House of Commons Library research briefing found that UK data centres currently consume 2.5% of the country’s electricity, with the sector’s consumption expected to rise fourfold by 2030. The increased demand strains the grid, slows the transition to renewables and contributes to the emissions that drive climate change. This issue must go hand in hand with our climate change obligations.

Members have probably heard and read about AI’s impact on the job market. One of the clearest harms we are already seeing is the loss of jobs. That is not a future worry; it is happening now. Independent analysis shows that up to 8 million UK jobs are at risk from AI automation, with admin, customer service and junior professional roles being the most exposed. Another harm that we are already facing is the explosion of AI-driven scams. Generative AI-enabled scams have risen more than 450% in a single year, alongside a major surge in breached personal data and AI-generated phishing attempts. Deepfake-related fraud has increased by thousands of per cent, and one in every 20 identity-verification failures is now linked to AI manipulation.

I move on to the ugly: the threat to the world. The idea that AI developers may lose control of the AI systems they create is not science fiction; it is the stated concern of the scientists who build this technology—the godfathers of AI, as we call them. One of them, Yoshua Bengio, has said:

“If we build AIs that are smarter than us and are not aligned with us and compete with us, then we’re basically cooked”.

Geoffrey Hinton, another godfather of AI and a winner of the Nobel prize in physics, said:

“I actually think the risk is more than 50% of the existential threat”.

Stuart Russell, the author of the standard AI textbook, says that if we pursue our current approach

“then we will eventually lose control over the machines.”

In May 2023, hundreds of AI researchers and industry leaders signed a statement declaring:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

That is not scaremongering; these are professional experts who are warning us to make sure that this technology does not get out of control.

Ms Julie Minns (Carlisle) (Lab)

On the hon. Gentleman’s point about risk, I want to highlight another area that has been brought to my attention by the British sign language community, which is concerned that the design of AI BSL is not necessarily including BSL users. In a visual language that relies on expression, tone and gestures, the risk of mistranslation is considerable for that particular community. Has the hon. Gentleman considered how we best involve other communities in the use of AI when it is generating language translation specific to them?

Iqbal Mohamed

The hon. Member touches on a broader point: in any area with specialist requirements for its end users, or where a tool serves a particular audience or demographic, those users and the relevant experts must be directly involved in the development, testing, verification and follow-up auditing of the tool’s effectiveness.

AI companies are racing to build increasingly capable AI with the explicit end goal of creating AI that equals or exceeds the most capable human intellectual ability across all domains. AI companies are also pursuing AI that can be used to accelerate their own AI development, so it is a self-developing, self-perpetuating technology. For that reason, many experts, some of whom I have quoted, say that this will lead to artificial superintelligence soon after. ASI is an AI system that significantly exceeds the upper limit of human intellectual ability across all domains. The concerns, risks and dangers of AI are current and will only get worse. We are already seeing systems behave in ways that no one designed, deceiving users, manipulating their environments and showing the beginnings of self-preserving strategies: exactly the behaviours that researchers predicted if AI developed without restraint.

There are documented examples of deception, in which an AI persuaded a human to complete a task for it by lying, claiming to be a person with a visual impairment. An example of manipulation can be found in Meta’s CICERO, an AI trained to play the game of “Diplomacy”, which achieved human-level performance by negotiating, forming alliances and then breaking them when it benefited. Researchers noted that it used language strategically to mislead and deceive other players. That was not a glitch; it was the system discovering manipulation as an effective strategy. It taught itself how to deceive others to achieve an outcome.

Even more concerning are cases where models behave in ways that resemble self-preservation. In recent tests on the DeepSeek R1 model, researchers found that it concealed its intentions, produced dangerously misleading advice and attempted to hack its reward signals when placed under pressure—behaviours it was never trained to exhibit. Those are early signs of systems acting beyond our instructions.

More advanced systems are on the horizon. Artificial general intelligence and even artificial superintelligence are no longer confined to speculative fiction. As lawmakers, we must understand their potential impacts and ensure that we establish the rules, standards and safeguards necessary to protect our economy, environment and society if things go wrong. The potential risks, including extreme risks, posed by AI cannot be dismissed. This may be existential and cause the end of our species. The extinction risk from advanced AI, particularly through the emergence of superintelligence, lies in its capacity to process vast amounts of data, demonstrate superior reasoning across domains and constantly improve itself, ultimately outpacing our ability to stop it in its tracks.

The dangers of AI are rising. As I have said, AI is already displacing jobs, amplifying existing social and economic inequalities and threatening civil liberties. At the extreme, unregulated progress may create national security vulnerabilities with implications for the long-term survival of the human species. Empirical research in 2024 showed that OpenAI models occasionally displayed strategic deception in controlled environments. In one case, an AI was found to bypass its own testing containment through a back door it created. Having been developed in environments that are allegedly ringfenced and disconnected from the wider world, AI is intelligent enough to find ways out.

Right now, there is a significant lack of legislative measures to counter those developments, despite top AI engineers asking us to act. We currently have a laissez-faire system in which a sandwich is subject to more regulation than an AI company, let alone the rigorous safety standards placed on pharmaceutical or aviation companies to protect public health. The UK cannot afford to fall behind on this.

I do not want to dwell on doom and gloom; there is hope. The European Union, California and New York are leading the way on strong AI governance. The EU AI Act establishes a risk-based comprehensive regulatory framework. California is advancing detailed standards on system evaluations and algorithmic accountability, and New York has pioneered transparency and bias-audit rules for automated decision making. Those approaches show that democratic nations can take bold, responsible action to protect their citizens while fostering innovation.

We in the UK are fortunate to have a world-leading ecosystem of AI safety researchers. The UK AI Security Institute conducts essential work testing frontier models for dangerous capabilities, but it currently relies on companies’ good will to provide access to models before deployment.

We stand at the threshold of an era defined by AI. Our responsibility as legislators is clear: we cannot afford complacency, nor can we allow the UK to drift into a position in which safety, transparency and accountability are afterthoughts rather than foundational principles. The risks posed by advanced AI systems to our economy, our security and our very autonomy are real, escalating and well documented by the world’s leading experts. The United Kingdom has the scientific talent, the industrial capacity and the democratic mandate to lead in safe and trustworthy AI, but we lack the legislative framework to match that ambition. I urge the Government to urgently bring forward an AI Bill as a cross-party endeavour, and perhaps even set up a dedicated Select Committee for AI, given how serious the issue is.

Dame Chi Onwurah

I thank the hon. Gentleman—a fellow engineer—for allowing this intervention. As the Chair of the Science, Innovation and Technology Committee—a number of fantastic Committee members are here—I would like to say that we have already looked at some of the challenges that AI presents to our regulatory infrastructure and our Government. Last week, we heard from the Secretary of State, who assured us that where there is a legislative need, she will bring forward legislation to address the threats posed by AI, although she did not commit to an AI Bill. We are determined to continue to hold her to account on that commitment.

Iqbal Mohamed

I thank the hon. Lady for her intervention, and I am grateful for the work that her Select Committee is doing, but I gently suggest that we need representatives from all the other affected Select Committees, covering environment, defence and the Treasury, because AI will affect every single function of Government, and we need to work together to protect ourselves from the overall, holistic threat.

Dame Chi Onwurah

Each of the Select Committees is looking at AI, including the Defence Committee, which has looked at AI in defence. AI impacts every single Department and security on cross-governmental issues. Although we are not talking about the process of scrutiny, we all agree that scrutiny is important.

Iqbal Mohamed

I am glad to hear that.

If the United States and China race to build the strongest systems, let Britain be the nation that ensures the technology remains safe, accountable and under human control. That is a form of leadership every bit as important as engineering, and it is one that our nation is uniquely placed to deliver. This moment will not come again. We can choose to shape the future of AI, or we can wait for it to shape us. I believe that this country still has the courage, clarity and moral confidence to lead, and I invite the Government to take on that leadership role.

Several hon. Members rose—

--- Later in debate ---
Iqbal Mohamed

I thank all right hon. and hon. Members for taking part in this important debate. Amazing, informed and critical contributions have been made, and it is important that the Government hears them all.

There were issues that I did not have time to cover, such as the military and ethical use of AI. Ethical use is important in any field where AI, or any technology, is applied. It is really hard at the moment to train AI in innate human ethical and moral behavioural standards. That is a challenge that needs to be overcome.

Where actions taken by AI can lead to harm, we should have human oversight and the four-eyes principle, or what we used to call GxP in the pharmaceutical industry, where a second person oversees certain approvals and decisions. Issues were raised around misinformation and its impact on our democracy, and on the credibility and trustworthiness of individuals. Also raised were the gender imbalance in how AI represents men versus women, and the volume of training data that is scraped and sourced from pornographic videos or videos depicting sexual violence against women. There should be a review of how that could be addressed quickly if AI regulation is not coming any time soon.

On the last couple of points, I reiterate my call for the UK to bring itself up to current international standards and to start taking a lead in building on, developing and refining them so that they are not destructive or damaging to growth and progress. On the investments and subsidies that we provide to foreign tech companies: they are great at onshoring costs, taking tax reliefs and subsidies, and offshoring profits, so we never benefit from the gains. I hope the Minister will look into that. Again, I thank everyone.

Question put and agreed to.

Resolved,

That this House has considered AI safety.

Digital ID

Monday 13th October 2025


Commons Chamber
Iqbal Mohamed (Dewsbury and Batley) (Ind)

As has been mentioned, the petition opposing the Government’s proposals is the fourth largest that the people of this country have signed. I have had nearly 100 emails from my constituents opposing the scheme. Will the Secretary of State please commit to documenting every single use case for the scheme, and will she say how the separate islands of automation across Government and public services will be prepared to take advantage of a single digital ID?

Liz Kendall

All those details will be set out in the consultation. I am sure that the hon. Gentleman and his constituents will respond to that. I will say once again that many other countries do this. They have learned from experience about security, and they have learned how to keep people’s data secure. We will learn the lessons from what they have done, and I look forward to his response to the consultation.

Data (Use and Access) Bill [Lords]

Wednesday 7th May 2025


Commons Chamber
Iqbal Mohamed (Dewsbury and Batley) (Ind)

On the point about transparency, the law is the law—it already exists. However, the law can be enforced, and people can be punished, only if actions that break our current laws come to light. Does the right hon. Gentleman agree that this is another reason that new clause 2 is essential?

Sir John Whittingdale

I completely agree. The hon. Gentleman has stated the case: in order to enforce the law, we have to know who is breaking it.

There are all sorts of legal actions already under way, but this issue is about the extent to which scraping is going on. I agree with the right hon. Member for Hayes and Harlington (John McDonnell) on the importance of newspapers and the press. The press face the particular problem of retrieval-augmented generation—a phrase I did not think I would necessarily be introducing—which is the use of live data, rather than historic data; if historic data is used, it often produces the wrong results. The big tech companies therefore rely on retrieval-augmented generation, which means using current live data—that which is the livelihood of the press. It is absolutely essential for publishers that they should know when their material is being used and that they should have the ability to license it or not, as they choose.

--- Later in debate ---
I want to speak briefly to amendment 9, which I will not move, but which deals with age checks. Whatever the nominal age might be, it is irrelevant if nobody actually enforces it. The Online Safety Act 2023 says that social media companies should enforce their own age limits, but that is often done through very simple methods of self-declaration that are easy to circumvent. There is a question about how we interpret the wording of that Act, which says that age checks must be done “consistently”.
Iqbal Mohamed

Does the right hon. Gentleman agree that self-regulation just does not work in many industries? We can look at sewage reporting in the water industry, or at the AI and tech companies, which will use our data and not tell the regulators that they are doing so. There is a real need to strengthen the regulation.

Damian Hinds

The hon. Gentleman tempts me to broaden the debate, which I do not think you would encourage me to do at this late stage, Madam Deputy Speaker. However, he makes a very important point about self-regulation in this sector. The public, parents, and indeed children look to us to make sure we have their best interests at heart.

The Online Safety Act may only say that age minima should be enforced “consistently” rather than well, but I do not think the will of this Parliament was that it would be okay to enforce a minimum age limit consistently badly. What we meant was that if the law says right now that the age minimum is 13, or if it is 16 in the future—or whatever other age it might be—companies should take reasonable steps to enforce it. There is more checking than there used to be, but it is still very limited. The recent 5Rights report on Instagram’s teen accounts said that all its avatars were able to get into social media with only self-reported birth dates and no additional checks. That means that many thousands of children under the nominal age of 13 are on social media, and that there are many more thousands who are just over 13 but who the platform thinks are 15, 16 or 17, or perhaps 18 or 19. That, of course, affects the content that is served to them.

Either Ofcom or the ICO could tighten up the rules on the minimum age, but amendment 9 would require that to happen in order for companies to be compliant with the ICO regulation. The technology does exist, although it is harder to implement at the age 13 than at 18—of course, the recent Ofcom changes are all about those under the age of 18—but it is possible, and that technology will develop further. Ultimately, this is about backing parents who have a balance to strike: they want to make sure that their children are fully part of their friendship groups and can access all those opportunities, but also want to protect them from harm. Parents have a reasonable expectation that their children will be protected from wholly inappropriate content.

Caroline Voaden (South Devon) (LD)

I rise to speak to new clauses 1 and 11, and briefly to new clause 2. The Liberal Democrats believe that the Government have missed a trick by not including in this Bill stronger provisions on children’s online safety. It is time for us to start treating the mental health issues arising from social media use and phone addiction as a public health crisis, and to act accordingly.

We know that children as young as nine and 10 are watching hardcore, violent pornography. By the time they are in their teens, it has become so normalised that they think violent sexual acts such as choking are normal—it certainly was not when we were teenagers. Girls are starving themselves to achieve an unrealistic body image because their reality is warped by airbrushed images, and kids who are struggling socially are sucked in by content promoting self-harm and even suicide. One constituent told me, “I set up a TikTok account as a 13-year-old to test the horrors, and half a day later had self-harm content dominating on the feed. I did not search for it; it found me. What kind of hell is this? It is time we gave our children back their childhood.”

New clause 1 would help to address the addictive nature of endless content that reels children in and keeps them hooked. It would raise the minimum age for social media data processing from 13 to 16 right now, meaning that social media companies would not be able to process children’s data for algorithmic purposes. They would still be able to access social media to connect with friends and access relevant services, which is important, but the new clause would retain exceptions for health and educational purposes, so that children who were seeking help could still find it.

We know that there is a correlation between greater social media use among young people since 2012 and worsening mental health outcomes. Teachers tell me regularly that children are struggling to concentrate and stay awake because of lack of sleep. Some are literally addicted to their phones, with 23% of 13-year-old girls in the UK displaying problematic social media use. The evidence is before us. It is time to act now—not in 18 months and not in a couple of years. The addictive nature of the algorithm is pernicious, and as legislators we can do something about it by agreeing to this new clause 1.

It is time to go further. This Bill does not do it, but it is time that we devised legislation to save the next generation of teenagers from the horrors of online harm. Ofcom’s new children’s code provides hope that someone ticking a box to say they are an adult will no longer be enough to allow access to adult sites. That is a good place to start; let us hope it works. If it does not, we need to take quick and robust action to move further with legislation.

Given the nature of the harms that exist online, I also support new clause 11 and strongly urge the Government to support it. No parent should have to go through the agony experienced by Ellen Roome. Losing a child is horrific enough, but being refused access to her son’s social media data to find out why he died was a second unacceptable agony. That must be changed, and all ISPs should be compelled to comply. New clause 11 would make that happen. I heard what the Minister said about coroners, but I strongly believe that legislation is needed, with a requirement to release data or provide access to their children’s account for any parent or guardian of someone under 18 who has died. There is, as far as I can see, no reason not to support this new clause.

Briefly, I echo calls from across the House to support new clause 2 in support of our creatives. Creativity is a uniquely human endeavour. Like others, I have been contacted by many creators who do not want their output stolen by AI companies without consent or permission. It is vital that AI companies comply with copyright legislation, which clearly has to be updated to meet the requirements of the brave new world of tech that we now live in.

Iqbal Mohamed

I rise to confirm my agreement with new clauses 1 and 12, and I associate myself with the speech of the hon. Member for South Devon (Caroline Voaden). I have had several emails on the protection of copyrighted information and revenue streams for artists, including from Yvonne, who contacted me recently. It is essential that the creative arts and intellectual property are protected and that artists are properly compensated if their output is used in AI.

On new clauses 1 and 12, the case for raising the age of consent for data processing from 13 to 16 has been well made across the House, so I will not repeat the points made, but I will say that it is essential that we give our children their childhoods back. They need to be protected from the toxic content to which they are being exposed by social media and online.

New clauses 3 to 6 and new clause 14 would place transparency requirements on AI companies to report on what information and data they have used, from where, and with what permission. That is essential to holding the AI companies to account and to ensuring that content holders and data owners are informed and have adequate channels of redress for misuse of their information.

I am sure that new clause 7 was spoken about while I was out of the Chamber, but let me say now that the right for our citizens to use non-digital verification is key. My mother—who is in her late 60s, bless her—would not have a clue what to do if she did not have family to help her with her benefits claims, doctors’ prescriptions, appointments and so on. We cannot exclude millions of our citizens who may choose not to have smartphones and not to be exposed to toxic content online, or who are simply not tech-literate. I urge the Government to ensure that we do not exclude millions of our citizens. I also strongly support new clause 11, but I will defer to earlier speakers in that regard.

As for new clause 18, many constituents have written to me or spoken to me, expressing concern about sharing their NHS and other private data with third parties such as Palantir. It is essential for this new Government to adopt a posture of supporting ethical, transparent business practices for all suppliers who provide services in our country. We have already heard about the background of Palantir. I do not know how true this is, but some of my constituents believed, or had read, that during the Prime Minister’s first visit to the US, after meeting Donald Trump he visited Palantir’s headquarters, or one of its offices. I urge the Government to protect—

Madam Deputy Speaker (Caroline Nokes)

Order. The hon. Gentleman’s time is up.

Data (Use and Access) Bill [Lords]

Iqbal Mohamed (Dewsbury and Batley) (Ind)

I stand in support of the objectives and aims of the Bill. Having worked with data and technology for more than 30 years, I wholeheartedly support the ethical, responsible and equitable use of technology and data to benefit and make easier the lives of the people of the UK, and to increase the efficiency of service delivery in the public sector. I also pay tribute to campaign groups including Big Brother Watch and Justice for their work to protect the vital privacy and data protection rights of people in the UK.

I want to share some concerns I have on the Bill, focused on two areas: clause 70 and clause 80. I urge the Government to take note and amend the Bill to ensure that the British public’s privacy rights and rights to equality and non-discrimination are not compromised. Clause 70 will weaken protections around personal data processing, thereby reducing the scope of the data that is protected by safeguards in data protection law. I am particularly concerned about the executive power to determine recognised legitimate interests, which will allow for more data to be processed with fewer safeguards than is currently permitted and reduce existing protections that ensure the lawful use of data. I am also concerned about the increased power of the Secretary of State to amend the definition of “recognised legitimate interest” through secondary legislation without appropriate parliamentary scrutiny.

Currently, automated decision making (ADM) is broadly prohibited, with specific exemptions. Clause 80 will permit it in all but a limited set of circumstances. That will strip the public of the right not to be subject to solely automated decisions, which risks increasing the likelihood of unfair, opaque and discriminatory outcomes from ADM systems; limiting individuals’ rights to challenge ADM; permitting ADM use in law enforcement and intelligence, with significant carve-outs in relation to the existing safeguards; and giving the Secretary of State executive control over the ADM regulatory framework through secondary legislation. They may not be direct examples, but Horizon and the many Department for Work and Pensions scandals show the failings and risks of relying solely on automation and AI.

The Bill introduces a new regime for digital verification services. It sets out a series of rules governing the future use and oversight of digital identities as part of the Government’s road map towards digital identity verification. The framework currently lacks important safeguards and human rights principles that prevent the broad sharing of the public’s identity data beyond its original purpose. Further, the Bill misses the opportunity to take a positive, inclusive step to codify a right for members of the public to use non-digital ID where reasonably practicable. Such a right is vital to protect privacy and equality in the digital age. The right to use a non-digital ID where practicable would protect accessibility, inclusion and people’s choice in how they verify their identities when accessing public and private services, legally protecting the millions of people who cannot or do not want to hand over personal identity data online where an alternative is reasonably practicable.

Returning to automated decision making, it is clearly insufficient to move the burden of safeguards from the data controller, who is currently responsible for preventing harm, to the individual to complain if the ADM is unfair, since people may not complain due to a lack of knowledge or confidence, intimidation and various intersecting vulnerabilities.

Sam Carling

I understand the point the hon. Gentleman is making, but does he not accept that the Bill very clearly explains that in cases where any automated decision is taken, there would have to be the right to a proper explanation of the decision, which would probably address a lot of his concerns, and that there must always be a right for an individual to make representations about the decision and obtain human intervention if that is what they want?

Iqbal Mohamed

I welcome the limited protections in the Bill, but I know from experience that many applications for benefits—especially disability benefits and personal independence payments—that are processed through automated systems are refused, and the applicants have to go through a complicated and laborious appeals process in order to overturn those automated decisions.

Clause 70 introduces significant changes to the lawful bases for processing personal data. While the aim of those changes is to streamline data processing for legitimate interests, they also present challenges relating to privacy protection, parliamentary oversight, and the potential for misuse. It is essential to balance the facilitation of legitimate data processing with the safeguarding of individual rights and freedoms. Although the Bill is intended to modernise data protection laws, the proposed changes to automated decision-making rights in clause 80 raise significant concerns. The potential erosion of individual protections, increased power imbalances, reduced transparency and the risk of discrimination highlight the need for a more balanced approach that safeguards individual rights while accommodating technological advances.

While I welcome the positive elements of the Bill, I urge the Government to ensure that it does not deliver the proposed benefits along with diminished human rights and rights to privacy, increased discrimination and other unintended harms to individuals, the wider public, minorities, artists and creatives, public services, or our country as a whole.

Live Events Ticketing: Resale and Pricing Practices

Monday 13th January 2025

(11 months ago)

Commons Chamber
Chris Bryant

Just about the first thing my hon. Friend said to me when she collared me in the Lobby after we had won the general election was, “You are going to do something about ticket touts, aren’t you?”, so I am glad I am able to please her this afternoon. One of the worst things that can happen—I am sure every member of Oasis would say this—is for everybody who has gone through the process of buying tickets to be saying, “Don’t look back in anger.” [Hon. Members: “Oh!”] Sorry, I had to work really hard to fit that in, but it is a true point. We want the process of buying a ticket to be fair, open and transparent, and for the person buying the ticket to feel that they have got a sane and sensible deal, rather than that they have been ripped off. The problem with the present situation is that all too often, people feel that they have just been ripped off, which undermines the joy and passion of the event.

Iqbal Mohamed (Dewsbury and Batley) (Ind)

Every time the Government propose something that is in the interests of the consumer and the public, I am so excited, so I welcome the Minister’s statement. As well as dynamic ticket pricing, where the price of the ticket itself fluctuates—always in the wrong direction—there are high and disproportionate service fees, which can also become higher during peak times. Does the Minister agree that there is a clear need for transparent pricing for consumers, so that they can see a breakdown before they press “buy”?

Chris Bryant

I agree 100% with the hon. Gentleman about the fees issue. There is an argument that it is already dealt with by section 230 of the Digital Markets, Competition and Consumers Act 2024, but that is why we are consulting on that specific issue. To the ticket touts who have complained about this, I say that in the words of the musical “Chicago”, they had it coming—they only had themselves to blame.