Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the effectiveness of Gov.uk Chat in providing accurate information about public services; and what safeguards they plan to put in place to ensure that AI-driven responses to queries about public services are reliable and clear.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
GOV.UK Chat is being developed and tested to support users in accessing accurate and clear information about public services on GOV.UK. Its effectiveness is assessed through structured user testing and independent evaluation.
Robust safeguards are in place to ensure reliability and security. GOV.UK Chat answers are exclusively drawn from guidance published on the GOV.UK website. The team has worked with the AI Security Institute and Anthropic to implement guardrails to prevent malicious or inappropriate use, and to carry out red-teaming activity as further assurance.
During the October 2025 pilot, the system successfully prevented all attempted efforts to circumvent its safeguards. Testing and assurance activity will continue as the service develops, with accuracy, clarity and safety as core priorities.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the HM Treasury:
To ask His Majesty's Government what steps they are taking to increase capability, transparency and value for money in public sector expenditure.
Answered by Lord Livermore - Financial Secretary (HM Treasury)
This Government is committed to ensuring that every penny of public money is spent wisely, driving out low-value spending and ensuring the state becomes more productive.
At Spending Review 2025, the Government announced that it would deliver total annual efficiency gains of almost £14 billion by 2028-29. It published departments’ efficiency targets and plans, allowing external scrutiny and public accountability.
At the Budget in November 2025, the Government committed to going further on efficiency and savings by delivering an additional £2.8 billion savings in 2028-29 and £5 billion by 2030-31. Alongside this, the Chief Secretary to the Treasury is leading a suite of reviews to drive value for money across government spending.
The Government has recently published an updated Green Book, the UK government guidance on appraisal and value for money. It has also started to publish business cases for major projects, meaning the public can be confident that taxpayers’ money is being spent on projects that deliver the best possible value.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the use of artificial intelligence technologies by hospice and palliative care providers; and what safeguards are in place to ensure that those technologies maintain patient safety, data protection and equitable access to high-quality end of life care.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
No formal assessment has been made of the use of artificial intelligence (AI) technologies by hospices and other palliative care providers. The majority of hospices are independent charitable organisations and so are free to make their own decisions regarding the adoption and deployment of AI tools.
NHS England is dedicated to enabling the safe deployment and adoption of AI technologies, providing clear guidance on approval, implementation, information governance, security, privacy, and controls. NHS England provides guidance on how technologies should be selected, deployed, and scaled to ensure they are safe, effective, and eligible for National Health Service adoption, including accuracy. NHS trusts are expected to ensure that access to the AI tools they employ is safe, ethical, effective, and equitable for all within their remit.
Strict safeguards are in place across the NHS to guarantee patient safety and data protection. All NHS organisations, including NHS palliative care and end-of-life care services, are expected to comply with the Medical Devices Regulations (SI 2002 No 618, as amended) (UK MDR 2002) and digital clinical safety standards.
Providers handling patient data must comply with UK General Data Protection Regulation and the Data Protection Act 2018. Each health organisation is required to appoint a Caldicott Guardian, whose role is to advise on the protection and proper use of health and care data, including where AI is involved.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Ministry of Justice:
To ask His Majesty's Government what assessment they have made, if any, of whether the use of generative AI tools for drafting by claimants is a contributing factor to the increase in employment tribunal cases; and what steps they are taking to ensure that employment tribunal processes are efficient and resilient.
Answered by Baroness Levitt - Parliamentary Under-Secretary (Ministry of Justice)
The Government is aware of the increased use of generative AI. Some stakeholders have reported that some potential Employment Tribunal claimants are using generative AI to provide a view on the strengths of their potential claim or to help with drafting elements of their claim. While no formal assessment has been made of the impact of generative AI on the caseload, to acknowledge changing behaviour, HMCTS has developed its own ‘Responsible AI Principles’ guidance to ensure use of AI in the courts and tribunals is appropriate, safe and controlled.
The Government is taking steps to increase the efficiency and resilience of the Employment Tribunal through the recruitment of additional judges, the deployment of Legal Officers to actively manage cases, the development of modern case management systems and the use of remote hearing technology. We continue to monitor demand in the Employment Tribunal and will consider any further actions needed to manage this.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the use of AI tools by NHS hospitals to support clinical documentation, including real-time note-taking systems; and what safeguards are in place to ensure that those tools maintain accuracy, patient safety and data protection.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
National Health Service hospitals are increasingly using artificial intelligence (AI) tools, such as real-time note-taking systems, to support clinical documentation. Among these, Ambient Voice Technologies (AVT) hold transformative potential to improve both patient care and operational efficiency. These tools have been shown to halve the time clinicians spend on paperwork, giving them more time for other important tasks, such as interacting with their patients.
NHS England is dedicated to enabling the safe deployment and adoption of such technologies, providing clear guidance on approval, implementation, information governance, security, privacy, and controls. National standards and additional guidance will explain how AVT solutions should be selected, deployed, and scaled to ensure they are safe, effective, and eligible for NHS adoption, including accuracy.
Strict safeguards are in place across the NHS to guarantee patient safety and data protection. All NHS organisations must comply with the Medical Devices Regulations (SI 2002 No 618, as amended) (UK MDR 2002) and digital clinical safety standards. Providers handling patient data must comply with the UK General Data Protection Regulation and the Data Protection Act 2018. Each health organisation is required to appoint a Caldicott Guardian, whose role is to advise on the protection and proper use of health and care data, including where AI is involved.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the potential use of AI in bank payment systems to prevent fraud; and what steps they are taking to ensure that safeguards and consumer protections are effective for the deployment of AI in payment systems.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
HM Treasury works closely with the UK financial regulators to monitor evolving risks from new technologies, and ensure that the opportunities AI presents can be realised in a safe and responsible way.
The government is engaging closely with the Financial Conduct Authority (FCA) on AI, and we support the approach the FCA is taking to encourage the safe adoption of AI in financial services. This includes several initiatives, such as the supercharged sandbox, which enables firms to experiment safely with AI innovations.
The financial services sector has also been developing AI tools which can be used to detect and prevent fraud. These include HSBC’s pilot with Google to use AI to support financial crime detection, and Mastercard’s use of AI to identify and flag APP scams.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government, what steps they are taking to ensure that the use of AI in recruitment processes does not lead to unfair exclusion, bias, or reduced transparency for job applicants.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
I refer the noble Lord to the answer given on 11 February 2026 to Question UIN 109609.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the development of UK-based startups focused on AI safety and governance technologies; and what steps they are taking to support innovation in AI assurance and security.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government recognises the growing strength of UK startups developing AI safety, governance and assurance technologies. The Spending Review allocated up to £500 million to the Sovereign AI Unit to provide targeted support to enable high-potential startups and scaleups to become national AI champions.
As highlighted in the AI Opportunities Action Plan: One Year On publication, we have taken steps to build the AI assurance ecosystem that underpins safe and responsible use of AI. This includes establishing a new Centre for AI Measurement at the National Physical Laboratory, designed to accelerate the development of secure, transparent and trustworthy AI.
The AI Growth Lab will also act as a cross-economy AI sandbox, encouraging innovation by enabling responsible AI products and services to be deployed under close supervision in live markets.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to ensure that regulation of retail financial markets remains effective as AI adoption in fintech increases.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
HM Treasury works closely with the UK financial regulators to monitor evolving risks from new technologies, and ensure that the opportunities AI presents can be realised in a safe and responsible way. This includes engaging closely with the Financial Conduct Authority, and we support the approach it is taking to encourage the safe adoption of AI in financial services.
Alongside this, we have launched a new Centre for AI Measurement to develop new AI assurance tools and strengthen the UK AI assurance ecosystem, and have committed to preserving the capability, trust, and collaboration of the AI Security Institute. We also concluded a call for evidence on the AI Growth Lab, a cross-economy AI sandbox, to inform further development and identify priority areas.
The government will act where these laws and initiatives are not enough to ensure AI security, and we are exploring whether additional protections are needed.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the HM Treasury:
To ask His Majesty's Government what assessment they have made of the implications of the increased use of AI to drive cost efficiencies in banking for financial stability, competition and consumer outcomes in the financial services sector.
Answered by Lord Livermore - Financial Secretary (HM Treasury)
The Government’s ambition is to make the UK a global leader in AI, leveraging our dual strength in financial services and AI to drive growth, productivity, and consumer benefits. Encouraging safe adoption is an essential part of realising that ambition.
The treatment of customers by UK banks and building societies is governed by the Financial Conduct Authority (FCA), whose independent regulatory powers ensure consumer protection in the financial services sector. The FCA’s Principles for Businesses require firms to provide prompt, efficient, and fair service to all their customers. The FCA’s Consumer Duty requires firms to act in good faith, prevent foreseeable harm, and act in the best interests of consumers.
UK banks are required to comply with relevant laws and regulations that are fundamental to consumer protection. In April 2024, the FCA published an update on its regulatory approach to AI, making it clear that where firms use AI as part of their business operations, they remain responsible for meeting FCA rules. Firms remain fully accountable for outcomes delivered by AI systems.
The FCA is also the regulator responsible for promoting effective competition in the interests of consumers in financial services. The FCA’s 2024 update on its regulatory approach to AI also considers competition risks and the impacts of beneficial innovation on competition in financial services. The FCA also works alongside the Competition and Markets Authority (CMA) as part of the Digital Regulation Cooperation Forum (DRCF), including conducting joint consumer research on generative AI with the CMA.
The Bank of England’s Financial Policy Committee (FPC) is responsible for identifying and monitoring risks to UK financial stability. In its April 2025 Financial Stability in Focus publication, the FPC set out the potential benefits and risks to financial stability that could result from AI use in the financial system. HM Treasury continues to work closely with the FPC and UK financial regulators to assess risks to financial stability.
The Government will continue to work with regulators and industry to ensure innovation proceeds safely and responsibly.