
Written Question
Internet: Safety
Monday 16th May 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, what assessment she has made of the effectiveness of the Online Safety Bill in ensuring that content moderators working for social media companies have adequate mental health support, regardless of their location.

Answered by Chris Philp - Minister of State (Home Office)

The Online Safety Bill will require social media companies to put in place appropriate content moderation systems to comply with their new statutory duties. It does not replace or duplicate existing employment or health and safety laws that may be relevant to companies’ obligations with regard to the health and wellbeing of their employees.


Written Question
Staff: Surveillance
Monday 16th May 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, what discussions he has had with the Secretary of State for Digital, Culture, Media and Sport on protecting workers from intrusive monitoring, including monitoring eye movements and toilet breaks; and what protections employees have in respect of that monitoring.

Answered by Julia Lopez - Minister of State (Department for Science, Innovation and Technology)

Employers are neither expressly permitted to monitor their staff nor prohibited from doing so. Monitoring by employers must not breach the duty of trust and confidence implied into an employee’s contract of employment, and must comply with the European Convention on Human Rights, data protection legislation and the Equality Act 2010.

Organisations that process workers’ personal data for the purposes of monitoring their activities or surveillance must comply with the requirements of the UK General Data Protection Regulation (‘UK GDPR’) and the Data Protection Act 2018 (‘DPA’). This means that the data processing must be fair, lawful and transparent.

Any adverse impact of monitoring on individuals must be necessary, proportionate and justified by the benefits to the organisation and others. A Data Protection Impact Assessment (DPIA) would usually be required, particularly where the processing involves the use of new technologies or the novel application of existing technologies. Where organisations operate behavioural biometric identification techniques, such as keystroke analysis or gaze analysis (eye tracking), they would generally need to conduct a DPIA.
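
By way of illustration, "keystroke analysis" refers to deriving timing features from raw key events. A minimal sketch of the idea in Python follows; the event format and feature names are hypothetical and not taken from any particular monitoring product:

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    press_time: float    # seconds since session start
    release_time: float  # seconds since session start

def keystroke_features(events: list[KeyEvent]) -> dict[str, float]:
    """Derive two classic keystroke-dynamics features:
    dwell time (how long each key is held down) and
    flight time (gap between releasing one key and
    pressing the next)."""
    dwells = [e.release_time - e.press_time for e in events]
    flights = [nxt.press_time - cur.release_time
               for cur, nxt in zip(events, events[1:])]
    return {
        "mean_dwell": sum(dwells) / len(dwells),
        "mean_flight": sum(flights) / len(flights) if flights else 0.0,
    }

# Example: two keypresses, "h" then "i"
events = [KeyEvent("h", 0.00, 0.09), KeyEvent("i", 0.15, 0.22)]
print(keystroke_features(events))  # {'mean_dwell': 0.08, 'mean_flight': 0.06}
```

Even features this simple can be distinctive enough to identify individual typists, which is why processing of this kind would generally trigger a DPIA.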

The UK GDPR and the DPA are administered and enforced independently of the government by the Information Commissioner’s Office (ICO). The ICO publishes a range of advice and guidance on its website for organisations on their data protection obligations, including specific guidance for employers.

The ICO ran a call for views seeking stakeholder and public input into future guidance on data protection and employment practices, and has published a summary of the responses. The ICO is now acting on the feedback received and will consult on and publish new guidance products on an iterative basis. These products will form a new, more user-friendly hub of employment guidance.

Regular discussions are held across the government on all aspects of data protection.


Written Question
Data Protection
Monday 16th May 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, which data rights will be clarified by the Data Reform Bill, as announced in the Queen's Speech 2022.

Answered by Julia Lopez - Minister of State (Department for Science, Innovation and Technology)

Now that we have left the EU, we have an opportunity to simplify the clunky parts of GDPR and create a world-class data rights framework that will allow us to realise the benefits of data use while maintaining the UK’s high data protection standards.

The bill will contain measures from the ‘Data: A New Direction’ consultation, to which we will publish our response shortly. The bill will also make good on the government’s commitment to legislate for other policies in similar subject areas, such as increasing industry participation in Smart Data schemes and enabling digital identity-verification services.


Written Question
Government Departments: Internet
Thursday 31st March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, if she has plans to bring forward a UK cloud strategy.

Answered by Julia Lopez - Minister of State (Department for Science, Innovation and Technology)

In November 2021, DCMS published the National Data Strategy Mission 1 Policy Framework: Unlocking the value of data across the economy, which provides a framework for government action to set the right conditions to make private and third sector data more usable, accessible and available. The Framework identifies seven priority areas for action, three of which contribute to the goal of supporting the development of data sharing infrastructure within the UK. One of these is particularly focused on how the government can support the development of infrastructure that promotes wider economy data sharing for research and development purposes, which could include cloud services.

The recently formed Central Digital and Data Office (CDDO) in the Cabinet Office is also working on standardising the approach that government organisations take to the use of cloud services and data hosting.


Written Question
Facebook: Safety
Tuesday 8th March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, with reference to the article entitled “Facebook failing to flag harmful climate misinformation, new research finds”, published by the Centre for Countering Digital Hate in February 2022, what assessment her Department has made of the implications for her policies of the findings that the Metaverse is unsafe.

Answered by Chris Philp - Minister of State (Home Office)

The Government takes the issue of mis- and disinformation very seriously, including climate misinformation. During the COP26 Summit last year, the cross-Whitehall Counter Disinformation Unit brought together monitoring and analysis capabilities and expertise from across Government to understand the scope, scale and nature of disinformation and misinformation risks to the Summit, and worked with partners to tackle them. We regularly engage with social media platforms to flag content that we consider to be particularly harmful. Where this content breaches their own terms and conditions, we expect platforms to remove it promptly.

We are also introducing groundbreaking legislation to help prevent the spread of harmful disinformation. The Online Safety Bill will force companies subject to the safety duties to tackle illegal misinformation and disinformation in scope of the Bill, and protect children from harmful content. The biggest platforms will also need to address legal but harmful content for adults, which will include some types of harmful misinformation and disinformation.

The Bill will apply to all services that allow users to post content online or to interact with each other, regardless of whether users interact through online forums or as avatars in a digital environment. This includes the Metaverse. However, we expect companies to take steps now to improve safety, and not wait for the legislation to come into force before acting.


Written Question
Disinformation: Russia
Monday 7th March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, with reference to the oral contribution of the Prime Minister, Official Report, Thursday 24 February 2022, volume 709, what plans she has to address Russian state-sponsored misinformation.

Answered by Chris Philp - Minister of State (Home Office)

The Government takes the issue of disinformation very seriously. The Counter Disinformation Unit is working closely with the new Government Information Cell (GIC) to identify and counter Russian disinformation targeted at UK and international audiences. The GIC brings together expertise from across government including - but not limited to - FCDO, MoD, DCMS and CO experts in assessment and analysis, disinformation, and behaviour and attitudinal change.

We have been working closely with the major social media platforms to monitor and share information as the situation in Ukraine develops. We have made clear the seriousness of the current situation and the importance of cooperating at speed to counter these threats, including by swiftly removing disinformation and coordinated inauthentic or manipulated behaviour that breaches their terms of service, and by promoting authoritative content.

As the Secretary of State set out in her statement to Parliament on 3 March, RT's broadcast news channel has been shut down on Freeview, Freesat and Sky. The Government welcomes the action YouTube has taken to prevent access to RT in the UK, and the Secretary of State has written to other major platforms, including Meta and TikTok, asking them to do all they can to prevent access to RT in the UK.


Written Question
Social Media: Disclosure of Information
Friday 4th March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, whether her Department plans to introduce full transparency requirements for social media networks.

Answered by Chris Philp - Minister of State (Home Office)

Companies providing high-reach, high-risk online services, such as the major social media sites, will be required to publish annual transparency reports containing information about the steps they are taking to tackle online harms on their platforms. This will include steps companies are taking to comply with their online safety duties, the systems and processes in place for users to report illegal content and the application of companies’ terms of service.

The Online Safety Bill sets out high-level categories of information that Ofcom may require companies to include in their transparency reports. Ofcom will set out what information is required from companies in a notice, which will also specify the format, manner and deadline for the information to be provided to Ofcom. Ofcom will publish an annual transparency report which will include information about the contents of the reports companies have produced.
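
As an illustration of the kind of structured reporting this could produce, here is a minimal sketch in Python. All field names and figures are hypothetical; the Bill and Ofcom's notices will define the actual categories:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TransparencyReport:
    """Hypothetical annual transparency report, loosely
    following the categories described above."""
    platform: str
    year: int
    illegal_content_reports_received: int
    illegal_content_removed: int
    safety_duty_measures: list[str]    # steps taken to comply with duties
    terms_of_service_enforcement: str  # how terms of service were applied

# Illustrative figures only
report = TransparencyReport(
    platform="ExamplePlatform",
    year=2022,
    illegal_content_reports_received=120_000,
    illegal_content_removed=95_000,
    safety_duty_measures=["proactive hash matching", "user reporting flow"],
    terms_of_service_enforcement="graduated strikes policy",
)
print(json.dumps(asdict(report), indent=2))
```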

Ofcom will have a range of additional powers to assess whether companies are fulfilling their duties, such as the power to require information from companies, require interviews, require companies to undergo a skilled person’s report, and, in certain circumstances, the power to access premises, data and equipment.


Written Question
Telecommunications: Digital Technology
Thursday 3rd March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, what recent assessment she has made of the effect of switching the telephone network from copper wires to digital phone lines on resilience to power cuts.

Answered by Julia Lopez - Minister of State (Department for Science, Innovation and Technology)

Ofcom, the independent telecoms regulator, has issued guidance on how telecoms companies can fulfil their regulatory obligation to ensure that their Voice over Internet Protocol (VoIP) customers have access to the emergency services during a power outage. This guidance was prepared following consultation with Ofgem and the industry, looking at data on average power outages among other factors.

This guidance states that providers should have at least one solution available that enables access to emergency organisations for a minimum of one hour in the event of a power outage in the premises. The solution should be suitable for customers’ needs and offered free of charge to those who are at risk because they are dependent on their landline. Options might include relying on the mobile network, which has a high degree of power resilience, or using a battery back-up unit to provide power. Ofcom’s full guidance is available on its website.
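
As a rough worked example of sizing a battery back-up unit for the one-hour requirement, the sketch below uses illustrative power figures, not values drawn from Ofcom's guidance:

```python
# Rough sizing of a battery back-up unit for one hour of VoIP
# service during a power cut. All power draws are illustrative.
router_watts = 10.0   # typical broadband router
phone_watts = 2.0     # powered VoIP handset or analogue adapter
hours_required = 1.0  # Ofcom guidance: minimum of one hour
margin = 1.5          # headroom for battery ageing and conversion loss

energy_wh = (router_watts + phone_watts) * hours_required * margin
battery_voltage = 12.0
capacity_ah = energy_wh / battery_voltage

print(f"Energy needed: {energy_wh:.0f} Wh")          # 18 Wh
print(f"~{capacity_ah:.1f} Ah at {battery_voltage:.0f} V")  # ~1.5 Ah at 12 V
```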

Whilst the upgrade is an industry-led initiative, the government and Ofcom are working together to ensure consumers and sectors are protected and prepared for the upgrade process.


Written Question
Artificial Intelligence
Tuesday 1st March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, with reference to her Department's press release, New UK initiative to shape global standards for Artificial Intelligence, published on 12 January 2022, how long the pilot of the artificial intelligence standards hub will last; what metrics that hub will be measured against; and who will evaluate the performance of that pilot scheme.

Answered by Chris Philp - Minister of State (Home Office)

The AI Standards Hub pilot aims to grow UK contributions to global AI standards development. As outlined in the National AI Strategy, the UK is taking a global approach to shaping technical standards for AI trustworthiness, seeking to embed accuracy, reliability, security, and other facets of trust in AI technologies from the outset.

The pilot follows the launch of the Centre for Data Ethics and Innovation’s (CDEI) ‘roadmap to an effective AI assurance ecosystem’, which is also part of the National AI Strategy. The roadmap sets out the steps needed to develop world-leading products and services to verify AI systems and accelerate AI adoption. Technical standards are important for enabling effective AI assurance because they give organisations a common basis for verifying AI.
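
To illustrate what "a common basis for verifying AI" might look like in practice, here is a minimal sketch of a threshold-based conformance check. The metric names and threshold values are hypothetical and not drawn from any published standard:

```python
# Hypothetical conformance check of measured model properties
# against agreed assurance thresholds. Values are illustrative.
THRESHOLDS = {
    "accuracy": 0.90,                # minimum acceptable accuracy
    "robustness_score": 0.80,        # resilience to perturbed inputs
    "demographic_parity_gap": 0.05,  # maximum allowed fairness gap
}

def verify(measured: dict[str, float]) -> dict[str, bool]:
    """Compare measured properties against thresholds.
    'Gap'-style metrics pass at or below the threshold;
    all others pass at or above it."""
    results = {}
    for metric, threshold in THRESHOLDS.items():
        value = measured[metric]
        if metric.endswith("_gap"):
            results[metric] = value <= threshold
        else:
            results[metric] = value >= threshold
    return results

print(verify({"accuracy": 0.93,
              "robustness_score": 0.78,
              "demographic_parity_gap": 0.04}))
```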

Alongside the AI Standards Hub pilot and the AI assurance roadmap, the government, via the National AI Strategy, has committed to undertake a review of the UK’s AI governance landscape and to publish an AI governance white paper. AI standards, assurance and regulation can be mutually complementary drivers of ethical and responsible AI.

The Alan Turing Institute is leading the AI Standards Hub pilot, supported by the British Standards Institution and the National Physical Laboratory. The pilot is expected to complete its initial activities by the end of 2022.

The AI Standards Hub pilot will involve engagement and collaboration with industry and academics. This includes a series of stakeholder roundtables being led by The Alan Turing Institute.

Once the Hub pilot finishes, there will be a process to evaluate and review its impact and determine the appropriate next steps.


Written Question
Artificial Intelligence
Tuesday 1st March 2022

Asked by: Chi Onwurah (Labour - Newcastle upon Tyne Central)

Question to the Department for Digital, Culture, Media & Sport:

To ask the Secretary of State for Digital, Culture, Media and Sport, with reference to her Department's press release, New UK initiative to shape global standards for Artificial Intelligence, published on 12 January 2022, what discussions officials in her Department had with industry leaders and academics when designing the artificial intelligence standards hub.

Answered by Chris Philp - Minister of State (Home Office)

The AI Standards Hub pilot aims to grow UK contributions to global AI standards development. As outlined in the National AI Strategy, the UK is taking a global approach to shaping technical standards for AI trustworthiness, seeking to embed accuracy, reliability, security, and other facets of trust in AI technologies from the outset.

The pilot follows the launch of the Centre for Data Ethics and Innovation’s (CDEI) ‘roadmap to an effective AI assurance ecosystem’, which is also part of the National AI Strategy. The roadmap sets out the steps needed to develop world-leading products and services to verify AI systems and accelerate AI adoption. Technical standards are important for enabling effective AI assurance because they give organisations a common basis for verifying AI.

Alongside the AI Standards Hub pilot and the AI assurance roadmap, the government, via the National AI Strategy, has committed to undertake a review of the UK’s AI governance landscape and to publish an AI governance white paper. AI standards, assurance and regulation can be mutually complementary drivers of ethical and responsible AI.

The Alan Turing Institute is leading the AI Standards Hub pilot, supported by the British Standards Institution and the National Physical Laboratory. The pilot is expected to complete its initial activities by the end of 2022.

The AI Standards Hub pilot will involve engagement and collaboration with industry and academics. This includes a series of stakeholder roundtables being led by The Alan Turing Institute.

Once the Hub pilot finishes, there will be a process to evaluate and review its impact and determine the appropriate next steps.