Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what guidance his Department has issued on the use of third-party AI tools in relation to topics with a security classification; and what assessment he has made of whether such AI tools transfer information outside of government further to their terms and conditions.
The AI Playbook for the UK Government addresses the use of third-party AI tools, offering high-level guidance on commercial, legal, and security aspects. Principle 8 advises consulting Commercial colleagues on procurement (p. 39) to ensure that expectations around responsible and ethical AI use are the same for in-house and third-party systems. The legal section (p. 61) covers intellectual property considerations when using third-party tools, while the security section (p. 74) examines the risks and opportunities of third-party tools and embedded AI solutions.
As with any third-party tool, departments are required to undertake the necessary risk assessments, including a data protection impact assessment (DPIA), when using third-party AI tools. The DPIA process is designed to identify the types of sensitive data to be processed at each phase of use, including inputs and outputs. The AI Playbook includes a section on data protection, which covers the importance of undertaking DPIAs to mitigate risk. The DPIA process would identify data governance risk areas, which would be addressed in the terms and conditions of the supplier's contract. These contractual clauses are legally binding, and breaches of them can give rise to legal remedies.

Security classifications are derived from the Government Security Classifications Policy (GSCP), and the principles set out in the GSCP must also be adhered to in the use of all tools.