AI Scams: Consumer Protection

Dean Russell Excerpts
Monday 22nd January 2024

Commons Chamber
Dean Russell (Watford) (Con)

I am grateful for the opportunity to raise this important topic of protecting consumers from artificial intelligence scams, or AI scams as I will refer to them. I understand that this topic has not been debated specifically in this House before, but it has been referenced in multiple debates. I can understand why this topic is new. At one point it may well have been science fiction, but now it is science fact. Not only that, it is probably a matter of fact that society is increasingly at risk of technology-driven crime and criminality. A new category, which I call AI-assisted criminals and AI-assisted crime, is emerging. They can operate anywhere in the world, know everything about their chosen victim and be seemingly invisible to detection. This AI-assisted crime is growing and becoming ever more sophisticated. I will share some examples in my speech, but let us address the bigger picture before I begin.

First, I appreciate that this entire debate may be new to many. What exactly is an AI scam? Why do consumers even need to be protected from something that many would argue does not yet exist? Let us step back slightly to explain the bigger picture. We live in a world where social media is everywhere: in our lives, our homes and our pockets. Social media has connected communities in ways we never thought possible. But for all the positives, it is also, as I saw as a member of the Online Safety Public Bill Committee, full of risk and harms. We share our thoughts, our connections and, most notably, our data. I am confident that if any Government asked citizens to share the same personal data that many give away for free to social media platforms, there would be uproar and probably marches on the streets; but every day, for the benefit of free usage, relevant advertisements and, ultimately, convenience, our lives are shared by us, in detail, with friends and family and, in some cases, the entire world.

We have, ultimately, become data sources, and my fear is that this data—this personal data—will be harvested increasingly for use with AI for criminal purposes. When I say “data”, I do not just mean a person’s name or birth date, the names of friends, family and colleagues, their job or their place of work, but their face, their voice, their fears and their hopes, their very identity.

Jim Shannon (Strangford) (DUP)

I congratulate the hon. Gentleman on raising this issue. There were 5,400 cases of fraud in Northern Ireland last year, which cost us some £23.1 million. There is the fraud experienced by businesses when fraudsters pose as legitimate organisations seeking personal or financial details, there is identity theft, and now there are the AI scams that require consumer protection. Does the hon. Gentleman agree that more must be done to ensure that our vulnerable and possibly older constituents are aware of the warning signs to look out for, in order to protect them and their hard-earned finances from scammers and now, in particular, the AI scamming that could lead to a tragedy for many of those elderly and vulnerable people?

Dean Russell

I absolutely agree with the hon. Gentleman. I fear that this is yet another opportunity for criminals to scam the most vulnerable, and that it will reach across the digital divide in ways that we cannot even imagine. As I have said, this concerns the very identity that we have online. This data can ultimately be harvested by criminals to scam, to fool, to threaten or even to blackmail. The victims send their hard-earned cash to the criminals before the criminals disappear into the ether-net.

Some may argue that I am fearmongering and that I am somehow against progress, but I am not. I see the vast benefits of AI. I see the opportunities in healthcare for early diagnosis, improving patients’ experience, enabling a single-patient view across health and social care so that disparate systems can work together and treatment involves not just individual body parts, but individuals themselves. AI will improve efficiencies in business through customer service and personalisation, and will do so many other wonderful things. It will, for instance, create a new generation of jobs and opportunities. However, we must recognise that AI is like fire: it can be both good and bad. Fire can warm our home and keep us safe, or, unwatched, can burn it down. The rapidly emerging harms that I am raising are so fast-moving that we may be engulfed by them before we realise the risks.

I am not a lone voice on this. Back in 2020, the Dawes Centre for Future Crime at UCL produced a report on AI-enabled future crime. It placed audio-visual impersonation at the top of the list of “high concern” crimes, along with tailored phishing and large-scale blackmail. More recently, in May 2023, a McAfee cybersecurity artificial intelligence report entitled “Beware the Artificial Impostor” shared the risks of voice clones and deepfakes, and revealed how common AI voice scams have become, reaching many more people in their lives and their homes. Only a quarter of the adults surveyed reported experience of such a scam, although that will increase over time, and only 36% of those questioned had even heard of voice-enabled scams. The practice is growing more rapidly than awareness that it exists in the first place. I will share my thoughts on education and prevention later in my speech.

Increasingly online there are examples of deepfakes and AI impersonation being used both for entertainment and as warnings. Many will now have heard of a deepfake, from a “Taylor Swift” supposedly selling kitchenware, to various actors being replaced by deepfakes in famous roles—Jim Carrey in “The Shining”, for example. Many may be viewed as a bit of fun to watch, until one realises the dangers and risks that AI such as deepfakes and cloned audio can pose. An example is the frightening deepfake video of Volodymyr Zelensky that was broadcast on hacked Ukrainian TV falsely ordering the country’s troops to surrender to Russia. Thankfully, people spotted it and knew that it was not real. We also know that there are big risks for the upcoming elections here, in the US and elsewhere in the world, and for democracy itself. The challenge is that the ease with which convincing deepfakes and cloned voices can be made is rapidly opening up scam opportunities on an unprecedented scale, affecting not only politicians and celebrities but individuals in their own homes.

The challenge we face is that fraudsters are often not close to home. A recent report by Which? pointed out that the City of London police estimates that over 70% of fraud experienced by UK victims could have an international component, either involving offenders in the UK and overseas working together or the fraud being driven solely by a fraudster based outside the UK. Which? also shared how AI tools such as ChatGPT and Bard can be used to create convincing corporate emails from the likes of PayPal that could be misused by unscrupulous fraudsters. In this instance, such AI-assisted crime is simply an extension of the existing email fraud and scams we are already used to. If we imagine that it is not emails from a corporation but video calls or cloned voice messages from loved ones, we might suddenly see the scale of the risk.

I am aware that I have been referring to various reports and stories, but let me please give some context to what these scams can look like in real life. Given the time available, I shall give just a couple of recent examples reported by the media. Perhaps one of the most extreme was reported in The Independent. In the US, a mother from Arizona shared her story with a local news show on WKYT. She stated that she had picked up a call from an unknown number and heard what she believed to be her 15-year-old daughter “sobbing”. The voice on the other end of the line said, “Mom, I messed up”, before a male voice took over and made threatening demands. She shared that

“this man gets on the phone, and he’s like, ‘Listen here, I’ve got your daughter’.”

The apparent kidnapper then threatened the mother and the daughter. In the background, the mother said she could hear her daughter saying:

“Help me, mom, please help me,”

and crying. The mother stated:

“It was 100% her voice. It was never a question of who is this? It was completely her voice, it was her inflection, it was the way she would have cried—I never doubted for one second it was her. That was the freaky part that really got me to my core.”

The apparent kidnapper demanded money for the release of the daughter. The mother only realised that her daughter was safe after a friend called her husband and confirmed that that was the case. This had been a deepfake AI cloning her daughter’s voice to blackmail and threaten.

Another example was reported in the Daily Mail. A Canadian couple were targeted by an AI voice scam and lost 21,000 Canadian dollars. This AI scam targeted parents who were tricked by a convincing AI clone of their son’s voice telling them that he was in jail for killing a diplomat in a car crash. The AI caller stated that they needed 21,000 Canadian dollars for legal fees before going to court, and the frightened parents collected the cash from several banks and sent the scammer the money via Bitcoin. In this instance, the report shared that the parents filed a police report once they realised that they had been scammed. They said:

“The money’s gone. There’s no insurance. There’s no getting it back. It’s gone.”

These examples, in my view, are the canary in the mine.

I am sure that, over recent years, we have all received at least one scam text message. They are usually pretty unconvincing, but that is because they are dumb messages, in the sense that there is no context. But let us imagine that, like the examples I have mentioned, the message is not a text but a phone call or even a video call and that we can see a loved one’s face or hear their voice. The conversation could be as real as it would be if we were speaking to that loved one in person. Perhaps they will ask how we are. Perhaps they will mention something we recently did together, an event we attended, a nickname we use or even a band that we are a fan of—something that we would think only a friend or family member would know. On the call, they might say that they were in trouble and ask us to send £10 or perhaps £100 as they have lost their bank card, or ask for some personal banking information because it is an emergency. I am sure that many people would not think twice about helping a loved one, only to find out that the person they spoke to was not real but an AI scam, and that the information the person spoke about with an AI-cloned voice was freely available on the victim’s Facebook page or elsewhere online.

Imagine that this scam happens not to one person but to hundreds of thousands of people within the space of a few minutes. These AI-assisted criminals could make hundreds of thousands of pounds, perhaps millions of pounds, before anyone worked out that they had been scammed. The AI technology to do this is already here and will soon be unleashed, so we need to protect consumers now, before it arrives on everyone’s phone, and before it impacts our constituents and even our economy in ways that we cannot imagine.

Because of the precise topic of the debate, I will not stray too far into how this technology raises major concerns for the upcoming election. We could easily debate for hours the risk of people receiving a call from a loved one on the day of the election convincing them to vote a different way, or not to vote at all.

Everything that I have said today is borne out by the evidence and predictions. The Federal Trade Commission has already warned that AI is being used to “turbocharge” scams, so it is just a matter of time, and time is running out. How do we protect consumers from AI scams? First, I am aware that the Government are on the front foot with AI. I was fortunate to attend the Prime Minister’s speech on AI last year—a speech that I genuinely believe will be considered in decades to come to be one of the most important made by a Prime Minister because, amid all the global challenges we face, he was looking to a long-term challenge that we did not know we were facing.

I appreciate that the Government have said that they expect to have robust mechanisms in place to stop the spread of AI-powered disinformation before the general election, but the risks of deepfakes go far and wide, and the economic impact of AI scams is already predicted by some media outlets to run into the billions. The Daily Hodl reports that the latest numbers from the US Federal Trade Commission show that imposter scams accounted for $2.6 billion of losses in 2022.

The Secretary of State for Science, Innovation and Technology has said that the rise of generative AI, which can be used to create written, audio and video content, has “made it easier” for people to create “more sophisticated” misleading content and “amplifies an existing risk” around online disinformation.

With the knowledge that the Government are ahead of the game on AI, I ask that the Minister, who knows this topic inside out, considers some simple measures. First, will he consider legislation, guidelines or simple frameworks to create a “Turing clause”? Everyone knows that Turing said technology would one day be able to fool humans, and that time seems to be here. The principle of a Turing clause would be that any application or use of AI where the intention is to pretend to be a human must be clearly labelled. I believe we can begin this by encouraging all Government Departments, and all organisations that work with the Government, to have clear labelling. A simple example would be chatbots. It must be clearly identified where a person is speaking to an AI, not to a real human being.

Secondly, I believe there is a great opportunity for the Government to support research and development within the industry to create accredited antivirus-style AI detection for use in phones, computers and other technology. This would be similar to the rise of antivirus software in the early days of the world wide web. The technology’s premise would be to help to identify the risk that AI is being used in any communication with an individual. For example, the technology could be used to provide a contextual alert that a phone call, text message or other communication might be AI generated or manipulated, such as a call from a supposed family member received from an unknown phone number. In the same way as antivirus software warns computer users of malware risks, that could become a commonplace system that allows the public to be alerted to AI risks, and it could position the UK as a superpower in policing AI around the world. We could create the technologies that other countries use to protect their citizens by, in effect, creating AI policing and alert systems.

Thirdly, I would like to find out what, if any, engagement is taking place with insurance companies and banks to make sure they protect consumers affected by AI scams. I am conscious that the AI scams most likely to convince victims will get them to act willingly, which makes consumers much harder to protect: before they even realise they have been fooled by what they believe is a loved one but is in fact an AI voice clone or video deepfake, they will already have handed over their money. I do not want insurance companies and banks to use that against our consumers and the public, when they have been fooled by something that is incredibly sophisticated.

A further ask relates to the fact that prevention is better than cure. We therefore need to help the public to identify AI scams, for example, by suggesting that they use a codeword when speaking to loved ones on the phone or via video calls, so that they know they are real. The public should be cautious about unknown callers; we need to make them aware that an unknown number is the most likely source of a phone call that is a deepfake or uses a cloned voice, and that such calls put them at risk. We should also encourage people not to act too quickly when asked to transfer money. As stated by the hon. Member for Strangford (Jim Shannon), the most vulnerable will be the older people in society—those who are most worried about these things. We need to make sure they are aware of what is possible and to make it clear that this is not science fiction, but science fact.

Finally, I appreciate that this falls under a Department different from the Minister’s, but I would like to understand what mechanisms, both via policing and through the courts, are being explored to both deter and track down AI-assisted crime and criminals, so that we can not only find the individuals who are pushing and creating this technology—they will, no doubt, be those in serious and organised crime gangs—but shut down their technologies at source.

To conclude, unlike some, I do not subscribe to the belief that “The end of the world is nigh,” or even that “The end of the world is AI.” I hope Members excuse the pun. However, it would be wrong not to be wary of the risks that we know about and the fact that there are many, many unknown unknowns in this space. Our ability to be nimble in the face of growing risks is a must, and spotting early warning signs, several of which I have outlined today, is essential. We may not see this happen every day now, but there is a real risk that in the next year or two, and definitely within a decade, we will see it on a very regular basis, in ways that even I have not been able to predict today. So we need to look beyond the potential economic and democratic opportunities, to the potential economic and democratic harms that AI could inflict on us all.

Scams such as those I have outlined could ruin people’s lives—mentally, financially and in so many other ways. If it is not worth doing all we can now to avoid that, I do not know when the right time is. So, along with responding to my points, will the Minister recommend that colleagues throughout the House become familiar with the risk of AI scams so that they can warn their constituents? I ask Members also to consider joining the fantastic all-party group on artificial intelligence, which helps these things—the scams, the opportunity and much more—to be discussed regularly. I thank the Minister for his time and look forward to hearing his response.