Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment he has made of the effectiveness of current mechanisms for reporting and removing extremist content from major social media platforms.
Under the Online Safety Act, platforms now have a legal duty to protect users. Since March 2025, services have been required to proactively identify and remove illegal content, such as terrorist material or content that stirs up racial hatred. In July 2025, new child safety duties came into force, placing a legal duty on services to protect children from content that is harmful to them, including content that is hateful or abusive. Services must also ensure that their algorithms do not promote such content, and must enable users to report it easily where it appears on regulated services.
The Act requires the Secretary of State to review the effectiveness of the regime and report to Parliament between two and five years after the Act is fully implemented.