Terminally Ill Adults (End of Life) Bill Debate
Lord Markham (Conservative - Life peer)
Lords Chamber
Essentially, I agree with the right reverend Prelate the Bishop of Hereford. I could almost leave it there, but I will briefly say, in the spirit of the amendments, that the tablers are right to raise general concerns about the possibility of abuse through bias—as we heard from the noble Baroness, Lady O’Loan—and hallucination. After all, we have had the first high-profile resignation of a public sector leader in the form of the chief constable of the West Midlands praying in aid the fabrication of a non-existent football match as the reason why Parliament was misled.
In addition to bias and hallucination, there is the risk of what is called scheming. The results from some of the LLMs—published, for example, in the journal Nature in October—show some pretty disturbing examples. In the article, headed “AI Models that Lie, Cheat, and Plot Murder”, there are examples where models have attempted to write self-propagating worms, fabricate legal documentation and leave hidden notes to future instances of themselves. The punchline, essentially, is that, in regard to some of these technologies,
“the world is in a lucky period in which models are smart enough to scheme but not smart enough to escape monitoring”.
That is scary because, in five years’ time, that may no longer be true. So there are good reasons for generalised concerns about AI and wanting to circumscribe the role it might play in this legislation.
However, for the reasons that others have mentioned— I suspect the noble Baroness, Lady Coffey, herself would accept this—this probing amendment is written too broadly. It says:
“Artificial intelligence must not be used to carry out any functions in any section or schedule of this Act”.
Given that, for example, under Clauses 5(5) and 12(2), a doctor has to discuss with a person their diagnosis, their prognosis, any treatments available, the likely effects of them, and palliative, hospice and other care, it is highly likely that those will be informed by machine learning. It will interpret, for example, CT scans or MRIs, and AI tools will personalise and optimise therapies, potentially with predictive AI for better prognosis. So, were this to come back on Report, there would be a good case for ensuring greater precision in the firepower that is aimed at this particular concern.
However, all that should not in any way excuse or divert us from an equivalent worry: we must not kid ourselves that the gold standard is human expert judgment on many of the questions posed by the Bill. As we discussed, Clause 2(1)(b) requires an assessment of whether somebody with a terminal illness will live longer than six months. Unfortunately, as we have heard, that turns out to be a clinically irrelevant threshold that is very hard for expert judgment to get right.
I have just pulled the data from a large study of prognostic accuracy covering 98,000 people across London over the last decade. Clinicians could predict whether somebody was going to live for two weeks with about 74% accuracy, and whether somebody was going to live more than a year with 83% accuracy; but in predicting whether somebody was going to survive for weeks or months, accuracy was only 32%. So, whatever our concerns about AI, the human expert judgment that underpins the Bill is itself highly fallible.
To follow on from that, as my noble friend said right at the beginning, the amendment was put down in such a blunt fashion precisely to stimulate this sort of debate. What has been really useful in this debate is finding a broad degree of consensus that AI can be valuable as an input to decision-making, but it should not be the output: the final decision-maker. As mentioned, AI can detect the progression of cancers and can probably improve prognosis, especially over the timescales that we are looking at here, so that you can get better assessments of how long someone is likely to live.
On AI chatbots, there are very many instances where they could be very useful in detecting coercion, if the AI is talking to someone over quite a long period of time. In all of this, therefore, we see that AI has a valuable part to play as an input to the decision-making process, but I think we would also absolutely agree that the final decision-maker, in terms of an output, clearly has to be a human; obviously they will be armed with the inputs from AI, but the human will make the final decision. I think that is what the Bill does, if I am correct, in that it is very clear that the decision-makers, the panels, the doctors and so on, are people, but at the same time, although I guess the Bill is silent on this, it obviously enables AI as an input.
I hope this debate is useful in that it shows a degree of consensus and that in this instance we probably have the right balance, but, again, I would be interested to hear from the Bill sponsor in his response whether that is the case.
Baroness Gerada (CB)
My Lords, under this amendment as it stands, we would have patients who could not have computerised records, because AI sits behind every computer system. The AI starts at the beginning: it starts with our telephone system, so, in fact, the patient would not even be able to use the telephone to access us; they or a relative would have to come in. They certainly would not be allowed computerised records, because of the digital and AI systems that we use to pick out diseases and to make sure that we are practising safely.
Nor would they be able to have electronic prescribing, because the pharmacy end too uses AI, to make sure that patients are not being overmedicated and to check for drug interactions, et cetera; and, if they are using a computer system, AI is also used to digitally scribe consultations. So I understand the essence of this amendment, which, as many have said, is not to allow AI to make the decision for somebody at the end of their life, but, as it stands, I have to warn noble Lords that it is unworkable in clinical practice.