Asked by: Bambos Charalambous (Labour - Southgate and Wood Green)
Question to the Department for Digital, Culture, Media & Sport:
To ask the Secretary of State for Digital, Culture, Media and Sport, what support he is providing to ensure UK-based academics can access adequate computer technology to carry out effective research on artificial intelligence.
Answered by John Whittingdale
UKRI councils have long advocated responsible research and innovation approaches. EPSRC has established a flexible and proportionate approach for its community (and staff) to consider what Responsible Innovation means for their activities. Its AREA (Anticipate, Reflect, Engage, Act) framework, introduced in 2013, encourages everyone involved in research to describe and analyse the possible impacts that may arise from their research activities, reflect on what those impacts may mean going forward, engage openly with others, and use these processes to influence the direction and trajectory of the research.
Within the UK, a number of organisations lead activities and initiatives on responsible research and innovation in AI, including the Ada Lovelace Institute, the Alan Turing Institute and the Observatory for Responsible Research and Innovation in ICT (ORBIT).
UKRI is investing in research to understand and implement the properties of Trustworthy AI across all applications of AI, although this is a relatively new area in which further research is still needed. Responsible, trustworthy AI is also a consistent theme in the investigations and strategic approaches of key UK and international stakeholders: the G20 AI Principles and the OECD Recommendation on AI, for example, treat Responsible AI as a key theme for international AI development. The Royal Society’s report ‘Machine Learning: The Power and Promise of Computers that Learn by Example’ illustrated the breadth of the responsibility challenge, discussing current public concerns and barriers to adoption as well as the opportunities if fully Responsible AI is adopted.
Asked by: Bambos Charalambous (Labour - Southgate and Wood Green)
Question to the Department for Digital, Culture, Media & Sport:
To ask the Secretary of State for Digital, Culture, Media and Sport, what plans his Department has to support good governance and ethical considerations at institutions carrying out artificial intelligence research.
Answered by John Whittingdale
UKRI councils have long advocated responsible research and innovation approaches. EPSRC has established a flexible and proportionate approach for its community (and staff) to consider what Responsible Innovation means for their activities. Its AREA (Anticipate, Reflect, Engage, Act) framework, introduced in 2013, encourages everyone involved in research to describe and analyse the possible impacts that may arise from their research activities, reflect on what those impacts may mean going forward, engage openly with others, and use these processes to influence the direction and trajectory of the research.
Within the UK, a number of organisations lead activities and initiatives on responsible research and innovation in AI, including the Ada Lovelace Institute, the Alan Turing Institute and the Observatory for Responsible Research and Innovation in ICT (ORBIT).
UKRI is investing in research to understand and implement the properties of Trustworthy AI across all applications of AI, although this is a relatively new area in which further research is still needed. Responsible, trustworthy AI is also a consistent theme in the investigations and strategic approaches of key UK and international stakeholders: the G20 AI Principles and the OECD Recommendation on AI, for example, treat Responsible AI as a key theme for international AI development. The Royal Society’s report ‘Machine Learning: The Power and Promise of Computers that Learn by Example’ illustrated the breadth of the responsibility challenge, discussing current public concerns and barriers to adoption as well as the opportunities if fully Responsible AI is adopted.
Asked by: Bambos Charalambous (Labour - Southgate and Wood Green)
Question to the Department for Digital, Culture, Media & Sport:
To ask the Secretary of State for Digital, Culture, Media and Sport, what plans his Department has to ensure (a) lines of accountability and (b) attributable liability for mistakes of artificial intelligence services.
Answered by John Whittingdale
Our future work related to attributable liability for mistakes of artificial intelligence services will be informed by independent expert advice. As part of its current work programme, the Centre for Data Ethics and Innovation is conducting a review into the potential for bias in the use of algorithms and will publish its report in March 2020.
Other measures include promoting the more ethical use of data within government. For example, one of the seven principles of the UK’s Data Ethics Framework is transparency about the tools, data and algorithms used in a piece of work, so that it is open to greater scrutiny. The Framework encourages teams to share models for algorithmic accountability and to make data science tools available for scrutiny wherever possible.
Moreover, the Data Protection Act 2018 introduced safeguards such as the right to be informed of automated processing as soon as possible and the right to challenge an automated decision made by a data controller or processor.