Terminally Ill Adults (End of Life) Bill Debate
Lords Chamber
Baroness Gerada (CB)
My Lords, I assure the noble Baroness, Lady Coffey, that this issue will never be a routine tick-box exercise. Being in Tenerife rather than Torbay is the choice of the patient. If they want to spend that time there before they return to the UK and die, it is not our choice. Videos allow patients and their families to be together for those assessments. There is no ethical or clinical reason why an assisted dying request, or aspects of care included in the clauses laid out, must be face to face. What matters is capacity, choice and informed consent, not physical proximity.
During Covid, I assessed thousands of patients’ capacity, consent and safeguarding issues remotely, with no evidence of increased coercion or harm. Patients can already refuse life-sustaining treatments such as renal dialysis, have feeding withdrawn or make advanced decisions to remove treatment without face-to-face legal requirements. Face-to-face assessment requirements, as laid out in these amendments, are a policy choice, not a clinical or ethical necessity. What protects patients is careful assessment, independence, documentation and review, not the distance between two chairs.
To follow on from that, as my noble friend said right at the beginning, the amendment was deliberately put down in blunt terms precisely to stimulate this sort of debate. What has been really useful in this debate is finding a broad degree of consensus that AI can be valuable as an input to decision-making, but that it should not be the output: the final decision-maker. As mentioned, AI can detect the progression of cancers and can probably improve prognosis, especially over the timescales we are looking at here, so that you can get better assessments of how long someone is likely to live.
On AI chatbots, there are many instances where one could be very useful in detecting coercion if it is talking to someone over quite a long period. Therefore, in all of this we see that AI has a valuable part to play in the inputs to the decision-making process, but I think we would also absolutely agree that the final decision-maker, in terms of an output, clearly has to be a human; obviously they will be armed with the inputs from AI, but the human will make the final decision. I think that is what the Bill does, if I am correct, in that it is very clear that the decision-makers (the panels, the doctors and so on) are those people, but at the same time, although I guess the Bill is silent on this, it obviously enables AI as an input.
I hope this debate is useful in that it shows a degree of consensus and that in this instance we probably have the right balance, but, again, I would be interested to hear from the Bill sponsor in his response whether that is the case.
Baroness Gerada (CB)
My Lords, under this amendment as it stands, we would have patients who could not have computerised records, because we have AI sitting behind every computer. The AI starts at the beginning. It starts with our telephone system, so, in fact, the patient would not even be able to use the telephone to access us; they or a relative would have to come in. They certainly would not be allowed to have computerised records, because of the digital and AI systems that we have in order to pick out diseases and to make sure that we are safely practising.
They also would not be able to have electronic prescribing, in many ways, because the pharmacy end also uses AI to make sure that patients are not being overmedicated and to check for drug interactions, et cetera; and, if they are using a computer system, AI is also used to digitally scribe consultations. So I understand the essence of this amendment, which, as many have said, is not to allow AI to make the decision about somebody at the end of their life, but, as it stands, I have to warn noble Lords that it is unworkable in clinical practice.
My Lords, I am grateful to my noble friend for laying such a broad amendment, and obviously I agree with much of what the right reverend Prelate said. It is interesting that this is coming straight after the debate on face-to-face conversations. We are all used to ticking the “I am not a robot” box, but AI now has the ability to create persons, and it is often very difficult if you are not face to face to judge whether the person on screen is actually a person. I cannot believe we have got there quite so quickly.
However, it is also important to consider public confidence and understanding at the moment. This is, as we keep saying, such an important life-or-death decision. There is a lack of understanding, and people are potentially worried about the implications, often with regard to employment but also for other purposes. For instance, as I was preparing for this debate, I reflected, as the noble Baroness, Lady Gerada, said, on how your GP uses AI. When Patchs told me recently that the NHS guidance was that I should not take an over-the-counter drug for more than two weeks, I queried it.
However, only yesterday, I thought: was that answer actually from my GP, or was it from an AI tool sitting behind the system? We really need to be careful about the level of public understanding and awareness of its use. This use of AI is also one step on from, and connected to, Clause 42, which relates to advertising. I am grateful that the noble and learned Lord is going to bring forward some amendments on that clause. I hope that the connection with AI, as well as with the Online Safety Act 2023, has been considered. If I have understood the noble and learned Lord correctly, I am disappointed that we have had no assurance that those amendments will be with us by the end of Committee, given that the noble and learned Lord gave evidence on 22 October last year and accepted that there was additional work to be done on Clause 42.
I said at Second Reading that the Bill is currently drafted for an analogue age. I do not want to take us back to some kind of quill-pen, no-AI situation. Obviously, as other noble Lords have said, the Bill does not deal with pressure or coercion that comes from something other than a human being. Nor does it consider that coercion can now be more hidden with the use of AI. The Bill does not deal with people being able to learn how to answer certain assessment tools by watching YouTube. Therefore, we could be in a situation where someone who would not qualify under a face-to-face, non-AI system could learn those answers and qualify.
There are also good studies showing that its use in GP practices has produced some inaccuracies. In many circumstances, there is a lack of transparency and accountability in tracing where a decision has come from. We do not even understand the algorithms that send us advertisements for different shops, let alone how they could be connected to a decision such as this.
Finally, my biggest concern is that there will be a limited number of practitioners who will want to participate in this process. That has been accepted on numerous occasions in your Lordships' House. I will quote from a public letter written on 12 June last year. All of Plymouth's senior palliative medicine doctors were signatories to a letter warning us of the risks of the Bill and saying that the
“changes would significantly worsen the delivery of our current health services in Plymouth through the complexity of the conversations required when patients ask us about the option of assistance to die”.
That is relevant for two reasons. First, if we have a shortage of practitioners in parts of the country, such as the south-west, should those doctors' opposition to the Bill translate into their not being involved, there may be an increased temptation to resort to greater use of AI. I hope that the noble and learned Lord or the Minister can help on this point.
Secondly, many of these systems, and I am speaking as a layperson here, rely on data groups and information within the system: the learning is created from that. If you have a very small pool of practitioners and some form of AI being used, does that not affect the creation of the AI tool itself? I hope that I have explained that correctly. With such a small group doing it, will that not affect the technology itself?