My Lords, I have put my name to Amendment 66, in the name of the noble Baroness, Lady Coffey. At present, the Bill places no restriction on the use of non-human assessment and automated administration devices during the application and decision-making process for assisted death. Obviously, AI will be used for recording meetings and the like—I am not a quill and paper person to that extent—but AI has already been proposed for use in killing patients in the Netherlands, where doctors are unwilling to participate.
The Data (Use and Access) Act 2025 established a new regulatory architecture for automated decision-making and data interoperability in the NHS. It provides that meaningful human involvement is maintained for significant decisions—decisions which may affect legal status, rights or health outcomes. Of course, assisted death would come within that definition.
That reflects the purpose of the NHS. We have talked about its constitution. I looked at the constitution and the guidance. It says that the purpose of the NHS is
“to improve our health and wellbeing, supporting us to keep mentally and physically well, to get better when we are ill and, when we cannot fully recover, to stay as well as we can to the end of our lives”.
I know that the noble and learned Lord, Lord Falconer, is going to put down an amendment suggesting that the constitution and guidance will have to be amended, but the current situation is that that is the purpose of the NHS. The assisted suicide of patients is certainly not provided for in the NHS, nor should AI be used in the crucial assessment and decision-making process for assisted dying, given the extreme difficulties in identifying coercion and assessing nuanced capacity, and the irreversible nature of death. What plans does the noble and learned Lord have to address these issues?
In the Commons, an amendment was passed allowing the Secretary of State to regulate devices for self-administration. That amendment was not put to a vote; in fact, only seven votes were permitted by the Speaker on the more than 80 non-Leadbeater amendments. The Commons have accepted that devices will be used for self-administration. Of course, the assisted suicide Bill requires self-administration. Nothing in the Bill prohibits a device that uses AI to verify identity or capacity at the final moment. If a machine makes the final go/no-go decision based on an eye blink or a voice command, have we not outsourced the most lethal decision-making in a person’s life to technology? I have to ask: is this safe?
Public education campaigns on assisted suicide are explicitly allowed for in Clause 43. The Government have said that there will be an initial education campaign to ensure that health and social care staff are aware of the changes, and that there would likely be a need to provide information to a much wider pool of people, including all professionals who are providing or have recently provided health or social care to the person, as well as family members, friends, unpaid carers, other support organisations and charities. However, that controls only government activity. The other observation I would make is that I presume the public education campaign will inform families that they have no role in a person’s decision to choose assisted death, and that the first they may know of an assisted death is when they receive the phone call telling them that the person is dead. It is profoundly important that people know this.
There is nothing to prevent an AI chatbot or search algorithm helpfully informing a patient about assisted dying services and prioritising assisted dying over palliative care search results. By legalising this service, the Bill will feed the training data that makes these AIs suggest death as a solution. I would ask the noble and learned Lord, Lord Falconer, how he intends to police that situation.
There is also a risk of algorithmic bias. If prognostic AI is trained on biased datasets—we know how unreliable prognoses of life expectancy can be—it could disproportionately label certain demographics as terminal, subtly influencing the care options, including assisted dying, presented to them. The National Commission into the Regulation of AI in Healthcare, established by the MHRA in 2025, is currently reviewing these risks to ensure that patient safety is at the heart of regulatory innovation. I ask the Minister: will that work cover assisted dying?
The AI Security Institute’s Frontier AI Trends Report in December 2025 highlights that:
“The persuasiveness of AI models is increasing with scale”,
and:
“Targeted post-training can increase persuasive capabilities further”.
In a healthcare context, this raises the risk of automated coercion, where a person interacting with a chatbot or an AI voice agent might be subtly persuaded towards certain end-of-life choices. The AISI has said that safeguards will not prevent all AI misuse. We have to remember that there will be financial incentives to provide assisted suicide; after all, the CEO of Marie Stopes received between £490,000 and £499,000 in 2024. There is big money involved, even though this will be charitable or NHS work. Clause 5 allows doctors to direct the person to where they can obtain information and have the preliminary discussion. As things stand, that source of information could be an AI or a chatbot.
Dr Sarah Hughes, giving evidence to the Lords Select Committee, said there was a real risk of “online coercion”. With newly developed AI functions and chatbots, we already know there are cases all around the world of individuals being coerced into all sorts of different behaviours, practices and decision-making. There is also an issue of misinformation around diagnosis and prognosis. Hannah van Kolfschooten questioned who has ultimate responsibility if the technology fails. She said:
“In traditional euthanasia settings, a doctor is accountable, but in AI-driven scenarios, accountability could become ambiguous, potentially resting between manufacturers, healthcare providers, and even the patient”.
AIs also have a record of encouraging suicide. We know that, and we have seen terrible cases among young people; these systems have no regard for human life.
Evidence shows that doctors suspect only 5% of elder abuse cases. Detecting subtle coercion requires, as was said in the previous group, professional judgment to interpret things such as non-verbal cues, body language and discomfort. AI systems are ill-equipped to handle these nuanced, non-quantifiable elements. It is imperative for trust in the system that the individual circumstances of each request for assisted death are recorded and are available for interrogation, or even potentially a criminal investigation, by the panel or another regulatory authority. The only insight into what happened in the consulting room will come from these records. The patient will be dead. As it stands, the Bill offers an individual no protection against the use of AI, with its algorithmic bias, in these circumstances. Can the noble and learned Lord, Lord Falconer, explain how he proposes to deal with these concerns?
My Lords, I will add only a very short sentence to my noble friend’s excellent speech, and it is what AI says about AI. It says: “AI is technically capable of providing advice or information relating to suicide, but it is critically dangerous to rely on it for this purpose”. Enough said.