Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the increase in technology company incorporations in the UK in 2025; and what impact that increase is having on their policies for regional economic development and tech entrepreneurship.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
This Government welcomes the increase in technology company incorporations in the UK in 2025, which reflects the strength of the UK’s tech ecosystem and growing levels of tech entrepreneurship across the country. We are encouraged that new tech businesses are being founded across UK regions and cities, supporting local growth, attracting investment, and helping to build strong regional tech and innovation clusters beyond London. And we are committed to removing barriers to growth for startups across the UK – ensuring the UK is one of the best places for tech companies to start, scale and stay.
We are supporting regional economic development through measures such as the Regional Tech Booster, a programme supporting startups and accelerating tech clusters beyond London. Partnerships across the UK have bid for up to £20 million through our Local Innovation Partnerships Fund, a new £500 million UKRI-led programme to grow regional strengths, including those in the digital and technology sector.
We are supporting tech entrepreneurship and the sector through venture capital schemes, R&D tax reliefs, targeted visa routes, the AI Opportunities Action Plan, and by streamlining regulation to support innovation. We are investing in skills, compute, and designated AI Growth Zones; on R&D, we are committing £38.6 billion to UKRI over five years; and we are powering entrepreneurship through the Entrepreneurship Prospectus, Enterprise Fellowships, and Innovate UK’s £130 million Growth Catalyst.
We are also unlocking finance via pension and capital‑markets reforms, while the British Business Bank increases annual investment to £2.5 billion and commits £5 billion to growth‑stage funds.
Together, these measures set out a comprehensive, long‑term plan, backed by record funding, to support tech entrepreneurship and drive economic growth across all regions of the UK.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the effectiveness of cybersecurity legislation for AI-associated cyber threats; and what steps they are taking to improve legislation to address those threats.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
A range of existing legislation already applies to artificial intelligence (AI) systems, including data protection, competition, equality and online safety law. The Department for Science, Innovation and Technology (DSIT), in close collaboration with the National Cyber Security Centre (NCSC), has created a voluntary Software Security Code of Practice which enables software vendors to secure software at all stages of its lifecycle.
As a government, we have also committed through the AI Opportunities Action Plan to work with regulators to boost their capabilities, and DSIT and the NCSC have taken a leading role in developing the world's first published global standard for AI cyber security within ETSI (EN 304 223), which sets minimum security requirements to help secure AI models and systems.
The Cyber Security and Resilience (Network and Information Systems) Bill does not specifically bring large language models or AI companies into scope. However, where an organisation in scope of the Bill uses AI models and systems, it will need to take appropriate and proportionate steps to manage the risks to those systems from hackers. This would include, for example, large language models used as part of the day-to-day software available to staff in a hospital.
The practices recommended to protect against AI-driven cyber threats are essentially the same as those recommended against “traditional” cyber threats: putting good cyber hygiene measures in place, for example through the government’s Cyber Essentials scheme, and managing digital risks using the Cyber Governance Code of Practice.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the HM Treasury:
To ask His Majesty's Government what steps they are taking to support the responsible adoption of AI by UK financial services firms; and what assessment they have made of the impact of AI use by those firms on productivity, service delivery and competitiveness.
Answered by Lord Livermore - Financial Secretary (HM Treasury)
The Government believes that the safe adoption of artificial intelligence (AI) by the financial services sector is a major strategic opportunity, with the potential to power growth across the UK. As set out in the Government’s Financial Services Growth and Competitiveness Strategy, it is our ambition to make the UK “the world’s most technologically advanced global financial sector”, leveraging our dual strengths in financial services and AI to drive growth and productivity and to deliver consumer benefits.
The Government has appointed Financial Services AI Champions, Harriet Rees and Rohit Dhawan, who will focus on helping firms seize opportunities of AI while protecting consumers and financial stability.
AI is already widely used across financial services, with around three-quarters of UK firms now deploying AI, according to a recent survey by the Bank of England and the FCA. Additionally, recent reports from the City of London Corporation suggest AI could add tens of billions of pounds to the financial and professional services sector by 2030, as well as transform services for consumers and increase productivity by up to 50%, underlining both the pace of adoption and the scale of the opportunity ahead.
The Government will continue working closely with industry and the regulators to safely capitalise on the opportunities AI presents while protecting consumers and financial stability.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to improve access to AI computing capacity for UK researchers, start-ups and businesses.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
This government is committed to harnessing the power of compute to enable innovations that will deliver growth and opportunity for people across the UK.
The AI Research Resource (AIRR) is now live and is free to use for the UK’s scientists, public sector organisations, and start-ups and SMEs. It is made up of two supercomputers: Dawn at Cambridge, and Isambard-AI in Bristol – one of the world’s top 10 public supercomputers and the 4th greenest in the world.
DSIT is investing up to £2 billion in public compute to 2030. This includes expanding our AI Research Resource twentyfold by 2030. As part of this, HMG recently announced a £36 million investment to expand Cambridge’s Dawn supercomputer sixfold by spring 2026. These investments will provide UK researchers and start‑ups with free access to world‑class AI compute, enabling breakthroughs in areas such as personalised medicine, climate modelling and more efficient public services.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the 85 per cent increase in AI companies in the UK between 2023 and 2025; and what steps they are taking to support sustainable and regionally balanced growth in the AI sector.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
AI is a central focus of the Government’s economic and innovation priorities.
Due to data lags, figures for the size of the AI sector in 2025 are not yet available; they will be released later this year in the 2025 AI Sector Study. Our most recent analysis highlights that between 2022 and 2024 the number of AI firms operating in the UK increased from 3,170 to 5,863, an 85% rise. This reflects both the strength of the UK’s AI investment environment, in which investment levels are the highest in Europe, and ongoing work across Government to support safe and effective AI adoption.
Key policies continuing to drive this growth include the AI Opportunities Action Plan, with 38 of its 50 recommendations already in progress. Delivery includes the development of five AI Growth Zones across England, Scotland and Wales, initiatives such as the AI Skills Hub, support for employers to upskill 10 million workers so they can thrive in an AI‑enabled economy, and the Barnsley Tech Town programme.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the HM Treasury:
To ask His Majesty's Government what assessment they have made of the impact of workplace financial wellbeing companies on financial inclusion.
Answered by Lord Livermore - Financial Secretary (HM Treasury)
The Government recognises that employers can play an important role in supporting the financial wellbeing of their employees. The Financial Inclusion Strategy seeks to support employers who want to build the financial resilience of their workforce.
Payroll savings schemes are identified in the Strategy as a specific, impactful step employers can take to achieve this goal. The Strategy outlines the Government’s work with the Financial Conduct Authority to provide greater regulatory clarity to employers, so they can offer these schemes with confidence. The Money and Pensions Service is also working with Nest Insight and The Investing and Savings Alliance on the launch of a National Coalition of Employers to encourage uptake among firms.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the impact of AI adoption in the UK on the labour market; and what plans they have to support workers affected by technological change.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
We recognise that AI is transforming workplaces, demanding new skills and augmenting existing roles, although the future scale of change remains uncertain. This Government is planning against a range of plausible outcomes to ensure workers continue to have access to good, meaningful employment.
To support this work, the Government has established a new Future of Work Unit in DSIT. The Unit will provide robust analysis and evidence on the impact of AI on the labour market and will coordinate action across government, ensuring our principles are delivered through practical help and support for workers and employers.
At the same time, AI presents significant opportunities for the labour market. Around 35% of UK jobs are predicted to benefit from productivity gains through AI adoption. Through the AI Opportunities Action Plan, we have committed to upskilling 10 million workers in essential AI skills by 2030. This will support workers to adapt and thrive in workplaces where AI tools are increasingly widespread.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the regulatory and consumer protection implications of the use of AI as financial guidance tools; and what safeguards they are putting in place to protect consumers.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government aims for the UK to be a global leader in AI, using our strengths in financial services and AI to boost growth, productivity and consumer benefits. Safe adoption is central to this.
Existing data protection law provides important safeguards: organisations must handle personal data fairly, lawfully, transparently and securely, and individuals retain rights such as access, correction and deletion.
The Financial Conduct Authority (FCA) is also active in this area, including by publishing guidance for consumers on using AI tools for investment research and highlighting risks such as inaccurate or outdated information.
The FCA’s Supercharged Sandbox and AI Live Testing service give firms access to computing, data and safe real‑world environments to support responsible AI use in UK financial markets.
More broadly, the Government recognises that people often lack the support they need when making financial decisions. To improve this, we are introducing a new targeted support regime enabling trusted firms to suggest suitable products or actions based on a customer’s circumstances. Targeted Support will launch in April 2026.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the quality of surgical outcome data collected by NHS trusts; and what steps they are taking to support NHS trusts to use that data to improve patient safety.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
The National Clinical Audit and Patient Outcomes Programme (NCAPOP) is commissioned, managed, and developed by the Healthcare Quality Improvement Partnership on behalf of NHS England, the Welsh Government, and other devolved administrations.
The programme currently consists of over 30 national clinical audits, registries, and databases as well as five clinical outcome review programmes.
The audit and registry topics include, for example, the National Vascular Registry, the National Emergency Laparotomy Audit, and multiple cancer topics, all of which monitor a variety of clinical metrics, including surgical outcomes.
The role of the NCAPOP is to detect unwarranted clinical variation and to feed this back to National Health Service trusts in an agile manner. Timely feedback to trusts enables them to make quick improvements to clinical practice. The NCAPOP work programme achieves this by making trust data available through dynamic dashboards updated in near real time. The NCAPOP audits also operate a statistically rigorous outlier process with the aim of detecting negative trust outcomes. Outlier information is provided to the trust concerned, NHS England, and the Care Quality Commission.
The dashboard and outlier data can be used by trusts to influence quality governance, improve patient safety and reduce patient harm, and enable tailored clinical quality improvement programmes.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to ensure that partnerships with AI companies to develop pilot tools for Gov.uk services deliver benefits for users while protecting data privacy.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
His Majesty’s Government is implementing artificial intelligence partnerships through a phased, test-and-learn approach that embeds data protection from the outset while testing transformational capabilities before committing significant public funds.
Working with the Commercial Innovation Hub, the Government has developed procurement approaches tailored to AI’s unique characteristics, including the National AI Tender for GOV.UK and the Planning Transformation Accelerator for AI-assisted decision-making, ensuring procurement methods are appropriate for evaluating frontier technologies.
All procurement frameworks require GDPR compliance as a mandatory qualification criterion, with partnerships operating under the UK Data Protection Act 2018 and comprehensive privacy-by-design principles. Pilots are deployed in controlled environments with oversight from departmental information security teams and data protection officers, with government retaining intellectual property ownership to prevent vendor lock-in.
Decisions to scale are contingent on pilots demonstrating measurable user benefits and full compliance with data protection standards through defined evaluation criteria and contractual break clauses, ensuring AI capabilities can be advanced while maintaining robust privacy safeguards.