Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential impact of the growth in the availability of open source Chinese AI platforms on the UK.
The Government continues to monitor global developments in AI, including open-source platforms. Open-sourcing AI models decentralises control, enabling more developers to innovate, experiment and deploy systems for diverse purposes.
This approach can deliver significant benefits by fostering innovation, competition and transparency. However, decentralisation also introduces security risks. Open model releases may allow malicious actors to remove safeguards and fine-tune models for harmful purposes.
Consumers and businesses should choose the AI system most suitable for their purpose, considering whether they trust the organisation hosting the model and its handling of potentially sensitive queries. The National Cyber Security Centre (NCSC) has published guidance to help individuals use AI tools safely, including advice on understanding how personal information is processed and shared.
As part of its research to understand the capabilities and impacts of advanced AI, and to develop and test risk mitigations, the AI Security Institute (AISI) takes a leading role in safety-testing open and closed AI models, wherever they originate.