Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Education:
To ask His Majesty's Government what steps they are taking to ensure that AI tools are deployed safely in schools under the expansion of the EdTech Testbeds pilot programme.
Answered by Baroness Smith of Malvern - Minister of State (Department for Work and Pensions)
The government is taking clear, evidence-based steps to ensure artificial intelligence (AI) tools are deployed safely. The EdTech Testbeds programme will test educational technologies, including AI and Assistive Technologies, in real education settings to evaluate their impact on workload, learner outcomes and inclusion. Alongside this, we have introduced Generative AI Product Safety Standards, which set out strict safeguards. These include child-centred design, enhanced filtering of harmful content and strong data protection and safeguarding requirements.
To support safe adoption, we have published materials created with the Chiltern Learning Trust and the Chartered College of Teaching, to help teachers and leaders use AI responsibly and effectively.
These measures, combined with strengthened digital and technology standards for all schools, ensure that AI can be introduced safely while delivering meaningful educational benefits.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the impact of proposals to indefinitely recognise CE-marked medical devices on the availability of medical technologies in the UK.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
Approximately 90% of medical devices currently on the British market are CE marked and their continued supply to the National Health Service and wider health system is vital for patient access to essential products. The Medicines and Healthcare products Regulatory Agency recognises CE marked products until 2028 or 2030, depending on risk classification and the European Union legislation they comply with. The proposals are intended to allow continued access to medical devices that have been assessed as safe and effective in the EU while aligning with international best practice. As Northern Ireland follows EU medical devices regulations, continued recognition of CE marked medical devices in Great Britain would further support the functioning of the UK Internal Market, as manufacturers could continue to place the same product on the entire United Kingdom market.
The proposals are anticipated to drive growth in the medical technology sector by reducing administrative costs and safeguarding the continued supply of medical technologies. The purpose of the proposed policy is to enable indefinite market access for CE marked medical devices on the British market. The impact on safety, availability, and favourability may vary depending on whether all devices are recognised, or only those in the same or a lower risk class in Great Britain. An assessment of each proposal against the availability of medical devices can be found in Annex C of the published consultation document, which is available on the GOV.UK website.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the role of AI tools in supporting radiologists and improving diagnostic capacity in the NHS breast screening programme.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
The Department is actively testing artificial intelligence (AI) in areas with significant impact on health and the economy. AI tools have demonstrated clear potential in aiding radiologists and enhancing diagnostic capacity within the National Health Service, especially in breast screening.
While no formal assessment has yet been completed, emerging evidence is already highlighting the benefits that AI can provide to the NHS. Previously, two radiologists were required to review each scan, but now an AI assistant can perform a preliminary check, which is then verified by a qualified radiologist. This approach reduces the number of radiologists needed to review each scan, but it does not result in fewer radiologists employed by the NHS. Instead, it enables clinicians to work more efficiently and to review a greater volume of scans, thereby improving diagnostic capacity and ensuring more patients are seen promptly.
Furthermore, on 4 February 2025, the Department announced that nearly 700,000 women nationwide will participate in the world-leading Early Detection using Information Technology in Health (EDITH) trial. This initiative aims to test advanced AI tools to detect breast cancer cases earlier and is supported by £11 million of Government funding through the National Institute for Health and Care Research.
The Department is pursuing significant initiatives to evaluate and expand the use of AI in NHS breast screening. Early evidence points to improved efficiency and diagnostic capacity, and the EDITH trial will further examine the potential of AI in delivering earlier detection of breast cancer for patients across the country.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the HM Treasury:
To ask His Majesty's Government what assessment they have made of the use of generative AI tools by consumers for pension planning and investment decision-making; and what steps they are taking to ensure that appropriate consumer protections and regulatory safeguards are in place.
Answered by Lord Livermore - Financial Secretary (HM Treasury)
HM Treasury has recently appointed Harriet Rees and Rohit Dhawan as Financial Services AI Champions. They will focus on helping firms seize the opportunities of AI while protecting consumers and ensuring financial stability.
In recognition of growing consumer interest in these tools, the Financial Conduct Authority (FCA) has published information for consumers on using AI for investment research. This sets out the pros and cons of such tools, including the risk of incorrect or out-of-date information, and makes clear that advice from general purpose AI tools is not regulated and does not benefit from protections such as the Financial Services Compensation Scheme or the Financial Ombudsman Service.
The FCA also launched the Mills Review in January 2026 which will consider the implications of advanced AI on consumers, retail financial markets and regulators. The review will help the FCA support innovation while promoting the safe and trusted adoption of AI in retail financial services.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what guidance they have issued to civil servants about the use of generative AI tools in drafting policy advice and official correspondence; and what measures are in place to ensure accuracy and data security.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
His Majesty’s Government has issued cross‑government guidance and training to civil servants to support the safe and responsible use of generative AI.
The AI Knowledge Hub was launched in May 2025. This includes core principles and relevant guidance for civil servants and supersedes the AI Playbook for Government.
From October 2025 to February 2026, over 221,000 civil servants took part in the ‘AI for All’ learning initiative, which built foundational AI knowledge, developed literacy and gave staff the skills to use AI responsibly, including for drafting advice and correspondence.
We are committed to ensuring that the adoption of AI is safe, effective, efficient and ethical. Tools such as the Data and AI Ethics Framework and Model for Responsible Innovation help teams manage risks.
To ensure accuracy and application of data security principles, departments participate in large‑scale pilots designed to generate real‑world evidence on AI performance.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the potential labour market impacts of AI-related displacement of staff; what consideration they have given to transitional income support or retraining mechanisms to assist affected workers; and what assessment they have made of the fiscal and productivity impacts of those measures.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government recognises that AI is transforming workplaces, demanding new skills and augmenting existing roles. We have launched the AI and the Future of Work Unit - a cross‑government function to research and monitor AI’s economic and labour market impacts and provide timely advice on when new policies should be implemented. We published an initial assessment of AI impacts on the labour market in January 2026 and are preparing for a range of possible futures to ensure positive outcomes for the economy, jobs, and workers.
We are also acting now to upskill 10 million workers in AI skills by 2030 through the AI Skills Boost programme, which aims to build a digitally skilled workforce to support long-term economic growth, drive innovation and expand individual opportunity.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the cybersecurity risks arising from the deployment of advanced or autonomous AI systems with significant access to the data of businesses and public sector organisations.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
We're confident that AI will bring huge benefits to businesses and public sector organisations.
Last year the government published the AI Cyber Security Code of Practice which sets out measures to address cyber security risks to AI systems.
Organisations which develop and deploy AI systems should use this code to protect our citizens and our digital economy, while ensuring the many benefits of AI can be realised.
To complement this, the AI Security Institute conducts research on the risks posed by frontier AI, including cyber offensive capabilities. The Institute shares insights with Government security organisations, including the National Cyber Security Centre, to ensure the Government can plan for serious AI impacts.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the UK's role in international AI safety governance; and what steps they are taking to strengthen the capacity, resources and global partnerships of the AI Security Institute to support international standards setting and risk evaluation.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The UK AI Security Institute (AISI) was the world’s first state-backed organisation dedicated to providing a scientific understanding of transformative AI capabilities and their associated risks. AISI's work and research are world-leading, and similar institutes have been created across the world in UK AISI’s image.
UK AISI works closely with international partners to share information on the latest AI capabilities to inform decision making. UK AISI works particularly closely with the UK’s security partners to share information on AI capabilities as they relate to mutual national security matters.
UK AISI has agreements in place with the US Center for AI Standards and Innovation (US CAISI), Singapore AISI and other international partners to formalise AI security co-operation, informing international AI governance.
UK AISI is the coordinator of the International Network for Advanced AI Measurement, Evaluation and Science. The Network brings together international partners to advance AI evaluation science and measurement.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the deployment of AI systems in robotics and machinery; and what plans they have to introduce frameworks to regulate that deployment.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
AI-enabled capabilities in robotics and autonomous systems are supporting growth as an essential driver of productivity across sectors including manufacturing, healthcare, and logistics. The Government is accelerating the adoption of these technologies through a new set of £52m Robotics Adoption Hubs to provide businesses with the expertise to understand and use these systems.
In response to the AI Action Plan, the Government committed to work with regulators to boost their capabilities. The government has been clear that we will legislate where needed, but we will do so based on evidence of where any serious gaps lie.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the use of AI tools in recruitment processes, including automated screening systems; and what guidance or regulatory frameworks are in place to prevent algorithmic bias and promote fair access to employment opportunities.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government is committed to removing barriers to AI adoption, unlocking new opportunities, while ensuring technologies are fair, inclusive and accessible.
We have published the Responsible AI in Recruitment guidance, which sets out good practice for procuring and deploying AI systems in HR and recruitment. This guidance highlights the mechanisms that can be used to ensure the safe and trustworthy use of AI in recruitment.
A range of existing regulatory frameworks already apply to AI systems in the UK, such as data protection, equality legislation and other forms of sectoral regulation. The government will act where additional protections are needed.