Asked by: Baroness Warsi (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what discussions they have had with technology companies regarding preventing foreign governments from using artificial intelligence to run online disinformation campaigns in the United Kingdom.
Answered by Lord Vallance of Balham - Minister of State (Department for Science, Innovation and Technology)
The government engages regularly with technology companies to make clear companies' responsibility to keep users safe.
The Online Safety Act requires all in-scope companies to tackle illegal content, including state-sponsored disinformation that meets the threshold of the Foreign Interference Offence. Where such content is generated using artificial intelligence, it is still captured, as the Act applies regardless of how content is produced.
Asked by: Baroness Warsi (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the adequacy of current safeguards at social media companies in detecting AI-generated disinformation targeting minority communities.
Answered by Lord Vallance of Balham - Minister of State (Department for Science, Innovation and Technology)
The Online Safety Act gives services duties to protect all UK users from illegal content, including illegal AI-generated disinformation. These protections apply to all users, including minority communities who are often disproportionately targeted by harmful online content.
Ofcom’s illegal content codes of practice strengthen these safeguards by requiring services to reduce users’ exposure to illegal content. The Government is working with Ofcom to monitor implementation of the Act and platforms’ compliance.
This Government recognises the challenges of detecting AI-generated content and is partnering with industry and academia to support technical innovation.