Summary of the UK Online Safety Act 2023
“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.”
The UK Online Safety Act 2023 introduces a transformative regulatory framework to enhance user safety online, with a particular focus on protecting children from harmful content while upholding freedom of expression. It applies to internet platforms accessible to UK users, including social media platforms, search engines, and AI-generated content platforms.
Key Requirements for Providers
Duty of Care
All regulated services must take proactive steps to protect UK users from illegal content and shield children from online harm. Providers are required to conduct regular risk assessments and implement proportionate systems to address these risks.
Illegal Content
Platforms must assess the risk of illegal content appearing on their services and implement measures to detect and remove it. Priority illegal content includes terrorism content, child sexual exploitation and abuse material, hate offences, and fraud.
Protecting Children
Providers must introduce age-appropriate safeguards to protect children from harmful content, such as pornography, material promoting self-harm, and bullying content. Enhanced safety settings may need to be default-enabled for child accounts.
Transparency
Providers must clearly communicate their content moderation policies, algorithmic processes, and decision-making criteria, and must submit transparency reports to Ofcom to demonstrate compliance.
Age and Identity Verification
Platforms hosting adult content or other high-risk materials may be required to implement robust age verification or estimation measures. Category 1 services must offer identity verification options for adult users to further enhance safety.
Protecting Democratic Content
The Act mandates that platforms safeguard journalistic content, democratic discourse, and diverse political opinions, ensuring these are not unduly restricted under content moderation policies.
Services Categorization
Services are categorized based on their reach and risk profile. Category 1 services, which include the largest and highest-risk platforms, are subject to the most stringent obligations, including heightened transparency and free speech protections.
Ofcom’s Role
As the designated regulator, Ofcom oversees compliance and enforcement of the Act. It publishes codes of practice and guidance to help providers understand their obligations. Ofcom has enforcement powers, including issuing fines of up to £18 million or 10% of a platform’s qualifying worldwide revenue, whichever is greater, for non-compliance.
Timelines for Implementation
The Act’s implementation is phased. The first phase focuses on measures to address illegal harms and requires providers to complete illegal-harms risk assessments by March 2025. Future phases will introduce obligations related to protecting children, transparency reporting, and other key areas.
Enforcement
Ofcom will monitor compliance and take enforcement action against non-compliant platforms, including issuing fines, requiring platforms to take corrective measures, and, in serious cases, restricting access to their services within the UK.
Impact on Specific Services
- AI Services: AI chatbots and content generation platforms must ensure they are not used to create or disseminate harmful content. Moderation systems must be designed to address risks specific to AI technologies.
- Gaming Platforms: Platforms like Roblox and Fortnite must implement protective measures to safeguard children from harmful content, inappropriate interactions, and exploitation.
- Video Conferencing and Messaging Tools: Services such as Zoom and Snapchat may need to adopt content moderation and age assurance systems if they enable user-generated content or public interactions.
Addressing Democratic Threats
The Act seeks to mitigate the spread of misinformation and hate speech on social media while protecting democratic discourse and the free exchange of ideas. Concerns about overreach and potential censorship remain points of debate.
Balancing Safety and Freedom of Expression
A central aim of the Act is to strike a balance between user safety and freedom of expression. Providers are required to implement safeguards against harmful content without disproportionately limiting lawful speech or diverse viewpoints.
Conclusion
The UK Online Safety Act 2023 marks a significant step toward making the internet a safer and more responsible space. By prioritizing user safety, transparency, and democratic values, the Act places accountability at the core of digital platforms operating in the UK. However, challenges related to enforcement, costs, and unintended consequences highlight the need for careful implementation and oversight.
Online Safety and Digital Services Knowledge Hub
Go to our Online Safety Knowledge Hub for a more detailed look at the UK Online Safety Act 2023 and to keep track of the main legal and ethical issues as this area of law evolves in the UK, EU and internationally.