Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the benefits of (a) a duty of candour requiring AI developers and deployers to publicly disclose when biases are discovered in their algorithms or training data and (b) providing clear mitigation strategies, similar to disclosure requirements in other regulated sectors such as medicines.
A range of existing rules already apply to AI systems, including data protection, competition and equality legislation, as well as sector-specific regulation. The government is also committed to supporting regulators to promote the responsible use of AI in their sectors and to mitigate AI-related challenges, such as identifying and addressing algorithmic bias.
To help tackle this issue, we ran the Fairness Innovation Challenge (FIC) with Innovate UK, the Equality and Human Rights Commission (EHRC) and the Information Commissioner's Office (ICO). The FIC supported the development of novel solutions to address bias and discrimination in AI systems and helped the EHRC and ICO shape their broader regulatory guidance.
This is complemented by the work of the AI Security Institute (AISI), which works in close collaboration with AI companies to assess model safeguards and suggest mitigations for risks pertaining to national security.
To date, AISI has tested over 30 models from leading AI companies, including OpenAI, Google DeepMind and Anthropic.
The government is committed to ensuring that the UK is prepared for the changes AI will bring, and AISI’s research will continue to inform our approach.