Lords Chamber
My Lords, I am conscious that I might be accused of preferring quill and pen to the latest technology in Amendment 66. In recognising how artificial intelligence is emerging, I thought I would put down a blunt amendment to allow us at least to have a debate. Inevitably, in a variety of legal and health situations, we will start to see artificial intelligence being used routinely. There was a recent legal ruling in which it turned out that a judge had relied entirely on AI and, on that basis, gave a wholly inaccurate ruling. This is not simply about what would be considered by medical practitioners.
I worry about judgment. We have already heard, reasonably, that trying to predict when somebody will pass away due to a terminal illness involves a bit of science but is largely an art. Perhaps I am being ungenerous in that regard. Certainly, in the DWP, we moved accelerated access to benefits from a six-month consideration to 12 months simply because the NHS does not routinely require its practitioners to make six-month assessments; assessment at 12 months is much more accurate. It is interesting that this Bill is focused on six months when, routinely, the NHS does not use that period. However, I am diverting slightly from the point of artificial intelligence.
I was somewhat interested in the previous debate, because there seemed to be a majority—I will not say a consensus—who felt that face to face was an important part of this happening in practice. But there are still a significant number of people who seem happy that we use a variety of technology for some of the interactions.
Forgive me for fast-forwarding, but I see this whole issue becoming pretty routine. What I want to avoid is outsourcing. It strikes me how much people rely on Wikipedia and think that they are actually dealing with the Encyclopaedia Britannica, even though a lot of what is on Wikipedia is a complete load of garbage. What is even more worrying is that many of the AI mechanisms use sources such as Wikipedia, or simply put two and two together and come up with 22. I saw this not that long ago, when I was trying to find something from my time on the Treasury Committee, when I had interrogated the FCA. The first thing that came out of ChatGPT was that, somehow, I had become a non-executive director of the FCA—if only. That certainly was not the case. I am concerned that an overreliance on AI might start to happen in this regard.
I want to avoid a world of chatbots that removes the human element. That is why I keep coming back to the themes of face to face, being in this country and this having a personal element. I am conscious that the NHS and other practitioners, including legal practitioners, will continue to evolve—I am not stuck in some dinosaur age—but I feel that the concerns of those of us who are worried about the Bill will remain. We completely understand why people might want to do this, but we want to make sure that the safeguards, particularly around coercion, are as robust as possible. That is why I have raised for debate the consideration of whether, as a matter of principle, artificial intelligence should not be used in the deployment of the future Act.
As I said, there may be evolution in medicine; we see that that is already happening. I do not know to what extent the Government have confidence in the use of AI in the prognosis of life expectancy. A new evolution in government is that AI is now starting to handle consultations. That might get tested in court at some point, to see whether it is a meaningful way to handle consultations—it is certainly a cost-efficient way to do so. My point is that, according to the Wednesbury rule, there is supposed to be proper consultation, not just a tick-box exercise.
I will not dwell on this, but I would be very interested to hear, from not only the sponsor but the Government, their consideration of artificial intelligence in relation to the practicality and operability of the Bill if it were to become law. I beg to move.
My Lords, I have put my name to Amendment 66, in the name of the noble Baroness, Lady Coffey. At present, the Bill contains no restriction on the use of non-human assessment and automated administration devices during the application and decision-making process for assisted death. Obviously, AI will be used for recording meetings and the like—I am not a quill and paper person to that extent—but AI has already been proposed for use in killing patients in the Netherlands, where doctors are unwilling to participate.
The Data (Use and Access) Act 2025 established a new regulatory architecture for automated decision-making and data interoperability in the NHS. It requires that meaningful human involvement be maintained for significant decisions—decisions which may affect legal status, rights or health outcomes. Of course, assisted death would come within that definition.
That reflects the purpose of the NHS. We have talked about its constitution. I looked at the constitution and the guidance. It says that the purpose of the NHS is
“to improve our health and wellbeing, supporting us to keep mentally and physically well, to get better when we are ill and, when we cannot fully recover, to stay as well as we can to the end of our lives”.
I know that the noble and learned Lord, Lord Falconer, is going to put down an amendment suggesting that the constitution and guidance will have to be amended, but the current situation is that that is the purpose of the NHS. The assisted suicide of patients is certainly not provided for in the NHS, nor should AI be used in the crucial assessment and decision-making process for assisted dying, given the extreme difficulties in identifying coercion and assessing nuanced capacity, and the irreversible nature of death. What plans does the noble and learned Lord have to address these issues?
In the Commons, an amendment was passed allowing the Secretary of State to regulate devices for self-administration. It was not put to a vote; in fact, only seven votes were permitted by the Speaker on the more than 80 non-Leadbeater amendments. The Commons have none the less accepted that devices will be used for self-administration. Of course, the assisted suicide Bill requires self-administration. Nothing in the Bill prohibits a device that uses AI to verify identity or capacity at the final moment. If a machine makes the final go or no-go decision based on an eye blink or a voice command, have we not outsourced the most lethal decision of a person's life to technology? I have to ask: is this safe?
Public education campaigns on assisted suicide are explicitly allowed for in Clause 43. The Government have said that there will be an initial education campaign to ensure that health and social care staff are aware of the changes, and that there would likely be a need to provide information to a much wider pool of people, including all professionals who are providing or have recently provided health or social care to the person, as well as family members, friends, unpaid carers, other support organisations and charities. That controls only government activity. The other observation I would make is that I presume the public education campaign will inform families that they have no role in a person’s decision to choose assisted death, and that the first they may know of an assisted death is when they receive the phone call telling them that the person is dead. It is profoundly important that people know this.
There is nothing to prevent an AI chatbot or search algorithm helpfully informing a patient about assisted dying services and prioritising assisted dying over palliative care in search results. By legalising this service, the Bill will feed the training data that makes these AIs suggest death as a solution. I would ask the noble and learned Lord, Lord Falconer, how he intends to police that situation.
There is also a risk of algorithmic bias. If prognostic AI is trained on biased datasets—we know the unreliability of the prognosis of life expectancy—it could disproportionately label certain demographics as terminal, subtly influencing the care options, including assisted dying, presented to them. The National Commission into the Regulation of AI in Healthcare established by the MHRA in 2025 is currently reviewing these risks to ensure that patient safety is at the heart of regulatory innovation. I ask the Minister: will that work cover assisted dying?
The AI Security Institute’s Frontier AI Trends Report in December 2025 highlights that:
“The persuasiveness of AI models is increasing with scale”,
and:
“Targeted post-training can increase persuasive capabilities further”.
In a healthcare context, this raises the risk of automated coercion, where a person interacting with a chatbot or an AI voice agent might be subtly persuaded towards certain end-of-life choices. The AISI has said that safeguards will not prevent all AI misuse. We have to remember that there will be financial incentives to provide assisted suicide; after all, the CEO of Marie Stopes received between £490,000 and £499,000 in 2024. There is big money involved, even though this will be charitable or NHS work. Clause 5 allows doctors to direct the person to where they can obtain information and have the preliminary discussion. At present, that information could come from an AI or a chatbot.
Dr Sarah Hughes, giving evidence to the Lords Select Committee, said there was a real risk of “online coercion”. With newly developed AI functions and chatbots, we already know there are cases all around the world of individuals being coerced into all sorts of different behaviours, practices and decision-making. There is also an issue of misinformation around diagnosis and prognosis. Hannah van Kolfschooten questioned who has ultimate responsibility if the technology fails. She said:
“In traditional euthanasia settings, a doctor is accountable, but in AI-driven scenarios, accountability could become ambiguous, potentially resting between manufacturers, healthcare providers, and even the patient”.
AIs also have a record of encouraging suicide. We know that, and we have seen terrible cases among young people; these systems have no regard for human life.
Evidence shows that doctors suspect only 5% of elder abuse cases. Detecting subtle coercion requires, as was said in the previous group, professional judgment to interpret things such as non-verbal cues, body language and discomfort. AI systems are ill-equipped to handle these nuanced, non-quantifiable elements. It is imperative for trust in the system that the individual circumstances of each request for assisted death are recorded and are available for interrogation, or even potentially a criminal investigation, by the panel or another regulatory authority. The only insight into what happened in the consulting room will come from these records. The patient will be dead. The Bill as currently drafted provides no protection for an individual in these circumstances against the use of AI, with its algorithmic bias. Can the noble and learned Lord, Lord Falconer, explain how he proposes to deal with these concerns?