UK Online Safety Act 2023 (OSA)


The UK Online Safety Act 2023 (OSA or Act) is a landmark law reshaping how platforms like X (formerly Twitter), Instagram, Threads and Bluesky operate in the UK. It also affects search applications, pornographic content services, gaming platforms and AI-generated content platforms.

With a strong focus on protecting democratic content, increasing transparency, and curbing hate speech, the Act imposes strict obligations on platforms to balance user safety with free expression. One of the primary focuses of the OSA is to protect children from harmful content; however, it also aims to protect against hate speech and threats to democracy.

This know-how page explores how the changes required by the OSA will affect the policies, algorithms, and content management systems of social media platforms and AI-generated content applications, with a focus on the Act's impact on adults, social media and social networks.

Major Compliance Deadline: Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.

Online Safety Act

"For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today."

Introduction

The UK Online Safety Act 2023 (OSA or Act) imposes significant obligations on social media platforms, interactive websites and applications (including group video) and AI apps.

The OSA represents a significant regulatory shift, especially for regulated internet services that play a pivotal role in public discourse.

By enhancing transparency, refining algorithms, and protecting democratic content, regulated internet services have the opportunity to demonstrate leadership in compliance and user safety. However, navigating these new requirements will require substantial effort, resources, and innovation. 

As Ofcom enforces the Act, the response of regulated internet services providers will set a precedent for how social media platforms can adapt to an evolving regulatory landscape.

However, there are significant concerns that Ofcom (and by extension the UK Government) will be slow and timid in its roll-out of key aspects of the OSA and in its guidance and enforcement action.

Timidity is clearly not going to work against those who wish to undermine democratic institutions and freedom of expression. Any further delays in rolling out the core obligations (particularly for Category 1 services) will be deeply damaging to UK democracy.

Read: Time for tech firms to act: UK online safety regulation comes into force 

Is Ofcom about to delay action on fake and anonymous accounts until 2027?

The OSA imposes significant obligations on social media platforms, interactive websites and applications (including group video) and AI apps, affecting how these user-to-user and search service providers manage:
  • Their new ‘Duty of Care’

  • Illegal content (including hate crimes)

  • Protecting children from ‘harmful content’ including grooming, bullying and harassment

  • Ensuring transparency 

  • Protecting journalistic content and democratic discourse; and 

  • Managing algorithmic impacts. 

Other than the duty of care and the requirement to protect all UK persons from illegal content and children from online harm, the full scope of the duties depends on the categorisation of the service provider.

The OSA requires providers to take extra measures to protect children even if the content is not illegal.

  • The OSA requires services to proactively prevent children from encountering primary priority content, which includes pornography, content promoting self-harm, eating disorders, or suicide. This is a central focus of the legislation for protecting children from the most harmful content.

  • The OSA mandates services to protect children from priority content, such as bullying, hate speech, and violent content. Services are expected to tailor these protections to specific age groups based on the risks identified.

  • The OSA also obligates services to assess risks associated with non-designated harmful content through children’s risk assessments and implement measures to mitigate these risks.

  • Regulated Services: Encompasses internet services, user-to-user services, and search services that meet specific thresholds defined in the Act. 

    • User-to-User Services: These are internet services where users can generate, upload, or share content accessible to others on the same platform.

      • Examples include social media platforms, forums, and collaborative applications.

      • This includes AI image and AI content generation platforms (such as Grok, ChatGPT and Gemini).

      • The definition is comprehensive, capturing services even if user interaction is not a primary feature. Exemptions apply to limited functions like private business communications and one-to-one messaging services such as email and SMS/MMS.

    • Search Services: These include any internet service functioning as a search engine, allowing users to search across websites or databases.

      • This category extends beyond traditional search engines (like Google) to any platform offering a search or filtering capability, such as websites with tag-based filtering.

      • Services operating as both user-to-user and search services are classified as “combined services” and must comply with the obligations of both categories.

    • Internet Services: An internet service, other than a regulated user-to-user service or a regulated search service, that falls within section 80(2) or Schedule 2 of the Act (primarily relating to pornographic content).

See Frequently Asked Questions (FAQs) further below for out of scope services.

All online regulated services within scope of the OSA must protect UK users from illegal content and, where applicable, protect children from online harm. However, additional more detailed obligations apply to specified categories of service provider. 

The OSA, and additional regulations to be published pursuant to it, are expected to categorise service providers as follows:

  • Category 1: Services with a significant number of UK users and functionalities that pose higher risks of harm. Ofcom has advised that this category should capture services that meet one of the following criteria:

    • Use content recommender systems and have more than 34 million UK users (approximately 50% of the UK population).

    • Allow users to forward or reshare user-generated content, use content recommender systems, and have more than 7 million UK users (approximately 10% of the UK population).

  • Category 2A: Services with a moderate reach and risk profile, likely to be the highest reach search services. Ofcom recommends that this category include search services (excluding vertical search engines) with over 7 million UK users.

  • Category 2B: Services with a moderate reach and risk profile, likely to be other user-to-user services with potentially risky functionalities or characteristics. Ofcom recommends that this category target services allowing direct messaging, with over 3 million UK users.

Once the thresholds are set, Ofcom will publish a register of categorised services in the summer of 2025. Ofcom anticipates that the final thresholds will result in 35 to 60 services being categorised. Most in-scope service providers will not be categorised (as they will not be sufficiently large) and so will not be subject to the additional category duties (summarised below).

Ofcom Guide to Categories and Requirements

Ofcom Summary
 

Ofcom has recently published a summary of its decisions in its Illegal Harms statement (the “Statement”), outlining which services the measures relate to. It sets out:

  • The detailed measures they are recommending for user-to-user (U2U) services;

  • The detailed measures they are recommending for search services;

  • Their guidance for risk assessment duties, applicable to all U2U and search services; and

  • Their guidance for record-keeping and review duties, applicable to all U2U and search services.

The guidance sets out more than 40 safety measures that must be introduced by March 2025.

A snapshot of some of the measures appears below; see the Statement for the full table covering all service providers.

Governance and Accountability

  • Regular reviews of risk management activities and internal monitoring.

  • Clear designation of an individual responsible for illegal content safety and reporting.

  • Documented responsibilities and codes of conduct for staff.

  • Tracking of emerging illegal harms.

Content Moderation

  • Implementation of content moderation systems (both human and automated).

  • Establishment of internal content policies and performance targets.

  • Prioritisation and resourcing of content review efforts.

  • Training for content moderation staff and provision of materials for volunteers.

Reporting and Complaints

  • Mechanisms for user complaints and reporting of illegal content.

  • Clear communication and timelines for handling complaints.

  • Processes for handling appeals and specific types of complaints.

User Controls and Support

  • Safety features for child users (e.g., default settings).

  • Terms of service that are clear and accessible.

  • Support services for child users.

  • Tools for user blocking and muting.

Additional Measures

  • Specific measures for recommender systems (e.g., safety metrics collection).

  • Removal of accounts associated with proscribed organisations.

  • Labelling schemes for notable users and monetised content.

  • Dedicated reporting channels for trusted flaggers.

Search-Specific Measures

  • Moderation of search results and predictive search suggestions.

  • Provision of content warnings and crisis prevention information.

  • Publicly available statements about content safety measures.

Ofcom Statement: Protecting people from illegal harms online

  • Free Speech: Regulated category 1 service providers must safeguard diverse political opinions, journalistic content, and democratic discourse while complying with moderation obligations.

  • Algorithm Transparency: All categorised service providers must provide detailed disclosures about how their algorithms identify harmful content, moderate misinformation, and serve recommendations.

  • Protect Children from Harm: All providers must take extra measures to protect children even if the content is not illegal.

  • Harmful and Criminal Content Management: All providers must implement robust systems to detect and remove illegal criminal content and provide clear reporting tools for users. Category 1 providers must also take extra measures to enable adult users to reduce their exposure to legal but potentially harmful content.

  • User Control & Identity Verification: Category 1 providers must empower users with tools to manage their online experience such as use of personalised filters and ID verification.

  • Codes of Practice: Practical compliance obligations for providers and users will be managed and understood by reference to Ofcom guidance and codes of practice, which assist in interpreting the law. Under the OSA, Ofcom is required to prepare and issue the following separate Codes of Practice:

    • Codes of Practice for terrorism content

    • Codes of Practice for child sexual exploitation and abuse (CSEA) content

    • Codes of Practice for the purpose of compliance with the relevant duties relating to illegal content and harms.

Illegal content is defined broadly to encompass a wide range of what are known as priority offences. These include:

  • Terrorism: Content that promotes, glorifies, or incites terrorism

  • Child Sexual Exploitation and Abuse (CSEA): Material depicting or promoting child abuse 

  • Sexual Exploitation of Adults

  • Threats, Abuse & Harassment including Hate Crimes: Content that incites violence or hatred based on protected characteristics.

  • Unlawful Pornographic Content: image-based sexual offences.

  • Fraud: Deceptive or misleading content intended to defraud users.

  • Suicide: Assisting or encouraging suicide.

  • Buying/Selling unlawful items: e.g. buying or selling drugs or weapons.

See the Ofcom Background Guidance (‘Protecting people from illegal harms online’) for more information.

Illegal Content Judgments

  1. Providers must conduct “suitable and sufficient” Illegal Content Risk Assessments (ICRAs) that consider the risks of users encountering illegal content, including “priority illegal content”.

  2. Providers must make illegal content judgments based on “reasonable grounds to infer,” a lower threshold than the criminal standard of “beyond reasonable doubt.” This means that there must be reasonable grounds to infer that:

    • The conduct element of a relevant offence is present or satisfied.

    • The state of mind element of that same offence is present or satisfied.

    • There are no reasonable grounds to infer that a relevant defence is present or satisfied.

    Freedom of expression and privacy must be considered when making these judgments.

  3. When service providers are alerted to the presence of illegal content or are aware of its presence in any other way, they have a duty to operate using proportionate systems and processes designed to “swiftly take down” any such content. This is referred to as the “takedown duty”.
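The three-limb test above is, in effect, a conjunction: an "illegal content" judgment requires both positive limbs and the absence of an inferable defence. The sketch below is illustrative only (every name is invented here) and is no substitute for the ICJG:

```python
# Hypothetical sketch of the "reasonable grounds to infer" test described
# above. All names are illustrative; real judgments also weigh freedom of
# expression and privacy, which no boolean model captures.
from dataclasses import dataclass

@dataclass
class JudgmentInputs:
    conduct_element_inferred: bool    # limb 1: conduct element satisfied
    state_of_mind_inferred: bool      # limb 2: mental element satisfied
    defence_inferred: bool            # limb 3: a relevant defence applies

def reasonable_grounds_to_infer_illegal(j: JudgmentInputs) -> bool:
    """All three limbs must be satisfied before content is judged illegal."""
    return (j.conduct_element_inferred
            and j.state_of_mind_inferred
            and not j.defence_inferred)
```

If the judgment is positive, the takedown duty described above is engaged and the content must be swiftly removed.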

Ofcom has issued the Illegal Content Judgements Guidance (ICJG) to support providers in understanding their regulatory obligations when making judgments about whether content is illegal under the OSA. It provides guidance on how to identify and handle illegal content, while considering freedom of expression and privacy. The ICJG outlines the legal framework for various offences, the importance of context, jurisdictional issues, and the handling of reports and flags. It also offers specific guidance on various offence categories, including the conduct and mental elements, as well as relevant defences.

 

User-to-user services: For brevity, given the scope of the OSA, I will focus on the user-to-user services Codes and Guidance (as the most relevant category for social network platforms like X). The Illegal Content Codes of Practice for search services is available here.

The draft Illegal Content Codes of Practice for user-to-user services have been published, with measures recommended for providers to comply with the following duties:

  • The illegal content safety duties set out in section 10(2) to (9) of the Act;

  • The duty for content reporting set out in section 20 of the Act, relating to illegal content; and

  • The duties about complaints procedures set out in section 21 of the Act, relating to the complaints requirements in section 21(4).

Section 3 of the document provides an index of recommended measures, including the application, relevant codes, and relevant duties for each measure. The recommended measures cover a range of areas, including governance and accountability, content moderation, reporting and complaints, recommender systems, settings, functionalities and user support, terms of service, user access, and user controls.

The recommended measures are set out in Section 4 of the document and are divided by thematic area:

  • Governance and Accountability

    • Large services should conduct an annual review of risk management activities related to illegal harm in the UK.

    • All services should designate an individual accountable for compliance with illegal content safety and reporting/complaints duties.

    • Large or multi-risk services should:

      • have written statements of responsibilities for senior managers involved in risk management.

      • have an internal monitoring and assurance function to assess the effectiveness of harm mitigation measures.

      • track and report evidence of new or increasing illegal content.

      • have a code of conduct setting standards for protecting users from illegal harm.

      • provide compliance training to individuals involved in service design and operation.

  • Content Moderation

    • All services should have a content moderation function to review and assess suspected illegal content and take it down swiftly.

    • Large or multi-risk services should:

      • set and record internal content policies, performance targets, and prioritize content for review.

      • provide training and materials for content moderators (including volunteers) and use hash-matching to detect CSAM.

  • Reporting and Complaints

    • All services should have accessible and user-friendly systems for reporting and complaints, and take appropriate action on complaints.

    • Larger services and those at risk of illegal harm should provide information about complaint outcomes and allow users to opt out of communications.

    • Specific requirements apply to handling complaints that are appeals or relate to proactive technology.

  • Recommender Systems

    • Services conducting on-platform testing of recommender systems and at risk of multiple harms should collect and analyse safety metrics.

  • Settings, Functionalities and User Support

    • Services with age-determination capabilities and at risk of grooming should implement safety defaults for child users and provide support.

    • All services should have terms of service that address illegal content and complaints, and these terms should be clear and accessible.

  • User Access

    • All services should remove accounts of proscribed organisations.

  • User Controls

    • Large services at risk of specific harms should offer blocking, muting, and comment-disabling features.

  • Notable User and Monetised Labelling Schemes

    • Large services with labelling schemes for notable or monetised users should have policies to reduce the risk of harm associated with these schemes.

  • Implementing the recommended measures will involve the processing of personal data, and service providers are expected to comply fully with data protection law when taking measures for the purpose of complying with their online safety duties.
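The hash-matching measure recommended under Content Moderation above can be sketched in miniature. Real deployments use perceptual hash lists supplied by bodies such as the IWF or NCMEC (e.g. PhotoDNA) rather than exact cryptographic hashes, so the toy version below, with its invented names and SHA-256 exact matching, is an illustrative assumption only:

```python
# Minimal illustrative sketch of hash-matching against a known-content list.
# Production systems use perceptual hashing so near-duplicates also match;
# exact SHA-256 matching here is a deliberate simplification.
import hashlib

# Stand-in for a hash list supplied by a recognised body (hypothetical entry).
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-prohibited-image-bytes").hexdigest(),
}

def matches_known_hash(upload: bytes) -> bool:
    """Flag an upload for takedown/review if its hash is on the list."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES
```

The design point is that matching happens against hashes, not the images themselves, so the service never needs to hold a library of prohibited material.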

The purpose of ICRAs is to help service providers understand how different kinds of illegal harm could arise on their service and what safety measures need to be put in place to protect users. ICRAs must be ‘suitable and sufficient’ for a provider to meet its OSA obligations.

The Risk Assessment Guidance and Risk Profiles recommend that service providers consider two main types of evidence when conducting a risk assessment:

  1. Core inputs: This type of evidence should be considered by all service providers and includes risk factors identified through the relevant Risk Profile, user complaints and reports, user data (such as age, language, and groups at risk), retrospective analysis of incidents of harm, relevant sections of Ofcom’s Register of Risks, evidence drawn from existing controls, and other relevant information (including other characteristics of the service that may increase or decrease the risk of harm).

  2. Enhanced inputs: This type of evidence should be considered by large service providers and those who have identified multiple specific risk factors for a kind of illegal content. Examples of enhanced inputs include results of product testing, results of content moderation systems, consultation with internal experts on risks and technical mitigations, views of independent experts, internal and external commissioned research, outcomes of external audit or other risk assurance processes, consultation with users, and results of engagement with relevant representative groups.
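The two evidence tiers can be modelled as a simple rule: everyone gathers core inputs, and large providers (or those with multiple specific risk factors for a kind of illegal content) gather enhanced inputs as well. The function below is an illustrative assumption; in particular, the "two or more" cut-off for "multiple" is my reading, not Ofcom's wording:

```python
# Hypothetical sketch of which evidence tiers an ICRA should draw on,
# per the Risk Assessment Guidance summarised above. Names are illustrative.
def required_evidence_tiers(is_large_service: bool,
                            specific_risk_factors: int) -> list[str]:
    """Core inputs always apply; enhanced inputs for large or multi-risk
    providers (assumed here to mean two or more specific risk factors)."""
    tiers = ["core"]   # complaints, user data, Register of Risks, etc.
    if is_large_service or specific_risk_factors >= 2:
        tiers.append("enhanced")  # product testing, audits, expert input, etc.
    return tiers
```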

The different types of illegal content that must be assessed are:

  • The 17 kinds of priority illegal content: terrorism; Child Sexual Exploitation and Abuse (CSEA), including grooming and Child Sexual Abuse Material (CSAM); hate; harassment, stalking, threats and abuse; controlling or coercive behaviour; intimate image abuse; extreme pornography; sexual exploitation of adults; human trafficking; unlawful immigration; fraud and financial offences; proceeds of crime; drugs and psychoactive substances; firearms, knives and other weapons; encouraging or assisting suicide; foreign interference; and animal cruelty.

  • Other illegal content: This includes non-priority illegal content as described in the Register of Risks and potentially other offences depending on the specific service and evidence available.

Additional factors that service providers should consider when carrying out an illegal content risk assessment:

  • Service characteristics: The characteristics of the service, such as its user base (e.g., age, language, vulnerable groups), functionalities (e.g., live streaming, anonymous posting), and business model, can affect the level of risk.

  • Risk factors: The Risk Profiles published by Ofcom identify specific risk factors associated with each type of illegal content. Service providers should consider these risk factors and any additional factors specific to their service.

  • Likelihood and impact of harm: The assessment should consider the likelihood of each type of illegal content occurring on the service and the potential impact of that content on users and others.

  • Existing controls: The effectiveness of any existing measures to mitigate or control illegal content should be considered.

  • Evidence: Service providers should use a variety of evidence to inform their risk assessment, including user complaints, data analysis, and external research.

Categorised service providers also have the following additional duties regarding their illegal content risk assessments:

  • Publication of Summary: They must publish a summary of their most recent illegal content risk assessment. Category 1 services must include this summary in their terms of service, while Category 2A services must include it in a publicly available statement. The summary should include the findings of the assessment, including the levels of risk and the nature and severity of potential harm to individuals.

  • Provision of Assessment to Ofcom: They must provide Ofcom with a copy of their risk assessment record as soon as reasonably practicable after completing or revising it.

The Online Safety Act (s.61) defines content that is harmful to children as:

  • ‘Primary priority content’ being: 

    • Pornographic content

    • Content which encourages, promotes or provides instructions for suicide.

    • Content which encourages, promotes or provides instructions for an act of deliberate self-injury.

    • Content which encourages, promotes or provides instructions for an eating disorder or behaviours associated with an eating disorder.

  • Section 62 defines other priority content that can be harmful to children and must be managed appropriately. It includes:

    • Bullying and cyberbullying

    • Abusive or hateful content

    • Content depicting or encouraging serious violence

    • Content promoting dangerous stunts or challenges

    • Content encouraging the ingestion or exposure to harmful substances

Platforms must ensure that access to this type of content is age-appropriate and that protections are in place for children.
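The s.61/s.62 split above maps naturally onto a lookup: primary priority content must be kept away from children entirely, while priority content calls for age-appropriate protection. The category labels follow the Act; the function, duty strings and keyword spellings below are illustrative only:

```python
# Illustrative sketch of the child-safety duty attaching to a kind of
# content under ss.61-62 OSA. Keyword strings are invented labels.
PRIMARY_PRIORITY = {          # s.61: children must not normally encounter
    "pornography",
    "suicide-promotion",
    "self-harm-promotion",
    "eating-disorder-promotion",
}
PRIORITY = {                  # s.62: age-appropriate protections required
    "bullying",
    "abusive-or-hateful",
    "serious-violence",
    "dangerous-stunts",
    "harmful-substances",
}

def child_safety_duty(content_kind: str) -> str:
    if content_kind in PRIMARY_PRIORITY:
        return "prevent children from encountering"
    if content_kind in PRIORITY:
        return "age-appropriate protection"
    return "assess under non-designated content duties"
```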

The OSA prioritises protecting UK users from online harms. 

(1) This Act provides for a new regulatory framework which has the general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom.

(2) To achieve that purpose, this Act (among other things)—

(a) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from—

(i) illegal content and activity, and

(ii) content and activity that is harmful to children, and

(b) confers new functions and powers on the regulator, OFCOM.

The Act outlines specific age and identity verification requirements, particularly for platforms categorised as Category 1 services, which are likely to have a significant number of users and offer a wide range of functionalities. In addition, platforms that are clearly aimed at pornography consumption must carry out age assurance checks.
 

Age Assurance

  • “Highly Effective” Age Verification or Estimation Required: The Act mandates that services likely to be accessed by children use age verification or age estimation methods that are “highly effective” at correctly determining whether a user is a child. This applies across all areas of the service, including design, operation, and content.
  • Self-Declaration Not Sufficient: Simple self-declaration of age is not considered a valid form of age verification or age estimation.
  • Ofcom Guidance on Effectiveness: Ofcom, the designated regulator, is responsible for providing guidance on what constitutes “highly effective” age assurance. This guidance will include examples of effective and ineffective methods, and principles to be considered.
  • Factors for Effective Age Assurance: Ofcom’s guidance suggests that effective age assurance methods should be technically accurate, robust, reliable, and fair. They should be easy to use and work effectively for all users, regardless of their characteristics.
  • Recommended Methods: Ofcom has recommended methods like credit card checks, open banking, and photo ID matching as potentially highly effective.
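A provider might gate its configuration on these points as follows: self-declaration is expressly insufficient, while the methods Ofcom has flagged may qualify as highly effective. The method names echo the bullets above; the function itself is a hypothetical sketch, not Ofcom guidance:

```python
# Hedged sketch of checking a configured age-assurance method against the
# positions described above. Method identifiers are illustrative labels.
POTENTIALLY_HIGHLY_EFFECTIVE = {
    "credit_card_check",
    "open_banking",
    "photo_id_matching",
}
EXPRESSLY_INSUFFICIENT = {"self_declaration"}  # not valid under the Act

def is_acceptable_age_assurance(method: str) -> bool:
    """True only for methods Ofcom has indicated may be highly effective."""
    if method in EXPRESSLY_INSUFFICIENT:
        return False
    return method in POTENTIALLY_HIGHLY_EFFECTIVE
```

Whatever method is chosen must also be technically accurate, robust, reliable and fair in practice; a name on a list is necessary but not sufficient.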

Transparency and Reporting Requirements:

Platforms using age assurance must clearly explain their methods in their terms of service and provide detailed information in a publicly available statement. They must also keep written records of their age assurance practices and how they considered user privacy.
 

Ofcom Reports on Age Assurance Use:

Ofcom will assess how providers use age assurance and its effectiveness, reporting on any factors hindering its implementation.
 

Identity Verification

  • Category 1 Services Must Offer Identity Verification: The Act requires Category 1 services (like major social media platforms) to offer all adult users in the UK the option to verify their identity, unless identity verification is already necessary to access the service.
  • No Specific Method Mandated: The Act does not specify a particular method of identity verification. Platforms can choose a method that works for their service, but it must be clearly explained in their terms of service.
  • Documentation Not Required: The identity verification process does not necessarily need to involve providing documentation.

 

User Empowerment Features:

Identity verification is linked to user empowerment features, as platforms must offer adult users the ability to:

  • Control their exposure to harmful content.
  • Choose whether to interact with content from verified or non-verified users.
  • Filter out non-verified users.
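The "filter out non-verified users" control described above could be sketched as a simple feed filter applied according to the user's preference. All names here are illustrative:

```python
# Illustrative sketch of a user-empowerment filter: letting an adult user
# exclude content from non-verified accounts. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_verified: bool
    text: str

def filter_feed(feed: list[Post], hide_non_verified: bool) -> list[Post]:
    """Apply the user's 'filter out non-verified users' preference."""
    if not hide_non_verified:
        return feed
    return [post for post in feed if post.author_verified]
```

The key design point is that the preference belongs to the viewing user, not the platform: the same feed renders differently per user depending on the toggle.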

Ofcom Guidance for Category 1 Services:

Ofcom is expected to provide guidance for Category 1 services on implementing identity verification, with a focus on ensuring availability for vulnerable adult users.

General Considerations

  • The Act aims to strike a balance between online safety and freedom of expression, and this balance influences the implementation of age and identity verification requirements.
  • Specific details regarding the application of these requirements are still under development, and Ofcom is working on codes of practice and guidance to provide further clarification.

The age and identity verification requirements under the UK Online Safety Act 2023 aim to enhance online safety, particularly for children and vulnerable adults. The Act focuses on the effectiveness of these measures, transparency from platforms, and user empowerment to control their online experiences.

The UK Online Safety Act’s requirements regarding pornography vary for specialised pornography platforms and other internet services like search engines.
 

Specialised pornography platforms:

For specialised pornography platforms, which are classified as “services that feature provider pornographic content”, the Act imposes a duty to ensure children are not normally able to encounter regulated provider pornographic content. This means these platforms will have to implement robust age verification or age estimation systems.
 
The Act emphasises that these measures must be “highly effective” at determining whether a user is a child.
 
The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs. However, content in image, video, or audio form that is considered pornographic would fall under this definition and trigger the age assurance obligations.
 
The Act also mandates that these platforms, along with other user-to-user services and search services, conduct risk assessments. These assessments should identify and mitigate potential harms related to illegal content, including child sexual abuse material (CSAM) and extreme pornography.
 
Research indicates that user-to-user pornography services are particularly vulnerable to these types of illegal content. For example, a study found that a user-to-user pornography website hosted nearly 60,000 videos under phrases associated with intimate image abuse.
 
Additionally, evidence suggests that some services that host pornographic content prioritise user growth over content moderation, leading to less effective detection and removal of extreme content.
 

For other regulated internet services:

For other internet services like search engines, the Act’s impact is more indirect. While search engines are not directly obligated to implement age verification, they are still subject to the requirement to mitigate and manage the risks of harm from illegal content and content harmful to children. This includes content that may be accessed through search results, even if the search engine itself does not host the content. For example, evidence suggests that search engines can be used to access websites offering illegal items like drugs and firearms.
 
The Act acknowledges that search engines are often the starting point for many users’ online journeys and that they play a crucial role in making content accessible.
 
Search engines are also subject to risk assessments. Given the potential for users to find illegal content through search, they are expected to consider how their functionalities, like image/video search and reverse image search, might increase risks.

Furthermore, even if pornography is not their core function or purpose, platforms like X (formerly Twitter) and Reddit, which allow users to share user-generated content, including pornographic material, would be classified as user-to-user services and be subject to the relevant duties under the Act. This means they would also need to conduct risk assessments, consider the risks associated with user-generated pornographic content, and implement measures to mitigate those risks.
 
In conclusion, the Online Safety Act has significant implications for both specialised pornography platforms and other internet services that may have links to pornography. The Act aims to protect children from accessing pornographic content through robust age verification measures and seeks to reduce the prevalence of illegal content on these platforms through risk assessments and content moderation practices. The Act’s wide scope means that even platforms where pornography is not the main focus are still obligated to address the risks associated with such content.

The OSA imposes specific obligations on Category 1 services due to their reach and influence. These rules aim to safeguard the diversity of opinions and the integrity of democratic debate whilst minimising harmful speech. See also ‘Democratic Threats‘ below.

Key Requirements

  • Content of Democratic Importance: Providers must ensure moderation processes do not disproportionately suppress political opinions or stifle democratic discussion. This includes protecting content from verified news publishers, journalistic pieces, and user-generated contributions to political debates.

  • Equal Treatment of Opinions: Decisions about content moderation must respect free expression and avoid bias against particular political viewpoints. This includes avoiding overzealous removals under policies aimed at combating misinformation or hate speech.

  • Protection of Journalistic Content: Articles and posts deemed to have journalistic value must not be unjustly removed or suppressed, ensuring the platform remains a space for investigative reporting and public interest stories. Platforms must protect:

    • Verified news publishers’ content.

    • Journalistic content, even if shared by individual users.

    • User-generated contributions to political debates.

While regulated internet services are required to remove illegal or harmful content, the OSA emphasises the need to uphold free speech. Providers must develop policies and systems that balance protecting users from harm and allowing diverse viewpoints to flourish.

The requirement for transparency reports that include moderation policies and actions will be crucial here.

  • Detailed Reporting: All categorised regulated internet services must publish annual transparency reports explaining their algorithms’ role in content moderation and misinformation detection. These reports should detail the volume of flagged and removed content, alongside the impact of moderation algorithms on users.

  • Proactive Technology Disclosure: Providers must disclose any automated systems, such as machine learning tools, used to detect harmful or illegal content.

  • Terms of Service Clarity: Providers must clearly explain their policies on algorithmic decision-making, especially regarding content of democratic importance and misinformation.

User Empowerment Tools

The Act promotes user choice and control by requiring platforms to provide tools that help users manage their online experience. For example:

  • Users gain more insight into how recommendation systems work.

  • Platforms could be required to offer non-personalised feeds that reduce reliance on algorithm-driven content.

  • Category 1 services must provide adult users with control features that effectively:

    • Reduce the likelihood of encountering specific types of legal but potentially harmful content, such as content promoting suicide, self-harm, or eating disorders.

    • Offer features to filter out interactions with non-verified users.

    • Clearly explain the available control features and their usage in the terms of service.
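The control features described above can be pictured as a simple filter applied to a user’s feed. The sketch below is purely illustrative: the class names, category labels and post structure are assumptions for this example, not part of the Act or any real platform’s API.

```python
# Hypothetical sketch of OSA-style "user empowerment" controls.
from dataclasses import dataclass, field

# Categories of legal-but-potentially-harmful content a Category 1 service
# might let adult users opt out of (examples drawn from the Act's wording).
FILTERABLE_CATEGORIES = {"suicide", "self_harm", "eating_disorder"}

@dataclass
class Post:
    author_verified: bool
    categories: set = field(default_factory=set)  # labels from a moderation classifier

@dataclass
class UserControls:
    blocked_categories: set = field(default_factory=set)
    hide_non_verified: bool = False

    def allows(self, post: Post) -> bool:
        """Return True if the post may appear in this user's feed."""
        if self.hide_non_verified and not post.author_verified:
            return False
        return not (post.categories & self.blocked_categories)

# An adult user opts to filter self-harm content and non-verified accounts.
controls = UserControls(blocked_categories={"self_harm"}, hide_non_verified=True)
feed = [
    Post(author_verified=True, categories=set()),
    Post(author_verified=True, categories={"self_harm"}),
    Post(author_verified=False, categories=set()),
]
visible = [p for p in feed if controls.allows(p)]
```

A real implementation would of course rely on trained classifiers and identity-verification services rather than pre-labelled posts; the point here is only that the Act’s control features reduce to user-configurable filter rules.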

See ‘Clean up the internet’ recommendations to Ofcom

Algorithm Transparency 

Algorithms are central to how regulated internet services moderate content, serve recommendations, and filter harmful material. The Act introduces transparency and accountability measures to ensure these algorithms and systems are safe and fair:

  • Providers must be transparent about how their algorithms function and the potential impact on users’ exposure to illegal content.

    • They must include provisions in their terms of service or publicly available statements that specify how individuals are protected from illegal content, including details about the design and operation of algorithms used.

    • Additionally, they must provide information about any proactive technology used for compliance, including how it works, and ensure this information is clear and accessible.

  • Category 1 providers have an additional duty to summarise the findings of their most recent illegal content risk assessments (ICRAs) in their terms of service, including the level of risk associated with illegal content. Factors like the speed and reach of content distribution facilitated by algorithms must be considered. These assessments must be updated regularly to reflect changes in Ofcom’s Codes of Practice (COPs), risk profiles, and the provider’s business practices.

  • Safer Algorithms for Children: If regulated internet services are accessed by children, their algorithms must minimise exposure to harmful content. This includes age-appropriate design measures and risk assessments targeting features that could harm younger users.

AI Chatbots: It is very likely that services such as ChatGPT, Gemini and Perplexity will be categorised as user-to-user services, as they allow users to interact with a generative AI chatbot and share chatbot-generated text, images and other user-generated content with other users.

Art: Services such as Midjourney (Art) are also in scope.

Generative AI Tools and Pornographic Content: Services featuring AI tools capable of generating pornographic material are additionally regulated and must implement highly effective age assurance measures to prevent children from accessing such content.

Generative AI and Search Services: AI tools enabling searches across multiple websites or databases are considered search services under the OSA. This includes tools that modify or augment search results on existing search engines or offer live internet results on standalone platforms. Consequently, these AI-powered search services will need to comply with the relevant duties outlined in the Act.

Ofcom Guidance regarding generative AI and AI chatbots

Combating hate speech is a cornerstone of the OSA. Regulated providers must take decisive measures to reduce the prevalence of illegal hate speech and implement systems for detection, reporting, and removal of hate speech.

Key Duties for Platforms

  • Illegal Content Detection: Hate speech is classified as priority illegal content, requiring regulated internet services to identify and remove such material promptly.

  • Risk Assessments: Regulated providers must evaluate the risks of hate speech on their platform and develop proportionate systems to manage and mitigate these risks.

  • Clear Reporting Mechanisms: The platform must provide users with accessible tools to flag hate speech. Reports must be acted upon swiftly, with outcomes communicated transparently.
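The reporting duty above amounts to a tracked workflow: a report is received, reviewed, and its outcome communicated back. The sketch below illustrates this shape only; all names (Report, triage, the status values) are assumptions for the example, not terms from the Act.

```python
# Illustrative sketch of a user reporting flow with transparent outcomes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    content_id: str
    reason: str
    status: str = "open"          # "open" -> "reviewed"
    outcome: Optional[str] = None  # communicated back to the reporter

def triage(report: Report, is_priority_illegal: bool) -> Report:
    """Resolve a report; hate speech is 'priority illegal content' under the OSA,
    so confirmed cases lead to removal."""
    report.status = "reviewed"
    report.outcome = "removed" if is_priority_illegal else "no_action"
    return report

# A user flags a post as hate speech; moderation confirms it is priority illegal content.
r = triage(Report("post-123", reason="hate_speech"), is_priority_illegal=True)
```

The key design point the Act pushes towards is that every report reaches a terminal state with an outcome the reporter can see, which also feeds the volume figures in transparency reports.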

Transparency by Moderation

To meet the Act’s transparency standards, regulated services providers must:

  • Publish data on the volume and nature of hate speech flagged, reviewed, and removed.

  • Explain their systems for detecting and moderating hate speech in their transparency reports.
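In practice, the published figures come from aggregating a platform’s moderation log. The sketch below shows the kind of roll-up an annual transparency report might publish; the log fields and function name are assumptions for illustration.

```python
# Minimal sketch of aggregating moderation actions into transparency-report figures.
from collections import Counter

# Assumed log format: one entry per flagged item.
moderation_log = [
    {"type": "hate_speech", "action": "removed", "detected_by": "algorithm"},
    {"type": "hate_speech", "action": "removed", "detected_by": "user_report"},
    {"type": "hate_speech", "action": "no_action", "detected_by": "user_report"},
    {"type": "misinformation", "action": "removed", "detected_by": "algorithm"},
]

def summarise(log):
    """Volume of content flagged and removed, plus the share detected proactively
    (by automated systems rather than user reports)."""
    flagged = Counter(entry["type"] for entry in log)
    removed = Counter(entry["type"] for entry in log if entry["action"] == "removed")
    proactive = sum(entry["detected_by"] == "algorithm" for entry in log) / len(log)
    return {
        "flagged": dict(flagged),
        "removed": dict(removed),
        "proactive_detection_rate": proactive,
    }

report = summarise(moderation_log)
```

Separating “flagged” from “removed”, and reporting the proactive-detection share, mirrors the distinctions the Act’s transparency duties draw between detection, review and action.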

By addressing hate speech robustly, regulated services providers can align legal requirements with fostering a safer environment for users.

The process of bringing the Online Safety Act into law has been winding and subject to lengthy delays. Many of the provisions of the OSA came into force on 10 January 2024 (including the new duty of care) for all regulated online services, along with many of the powers needed by Ofcom as the regulator responsible for enforcing the OSA. However, implementation has required Ofcom consultation and the issuance of Codes and guidance.

Major Deadline: All providers have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.

Additional key protections in respect of Category 1 providers (like X) are unlikely to be in force until 2026 or 2027. Further delay in regulating major platforms now looks very dangerous (see ‘Democratic Threats’ below).

The Secretary of State (Schedule 10) will determine regulations specifying Category 1, 2A, and 2B threshold conditions for different types of services. Commencement dates for remaining provisions of the Act will be set by future regulations under Section 240.

Phased roll-out: Regulated service providers must take steps to comply with new duties following Ofcom guidance, which is to be published in phases:

Phase 1: Illegal Harms (December 2024–March 2025)

  • December 2024: Ofcom will release the Illegal Harms Statement, including:

    • Illegal Harms Codes of Practice.

    • Guidance on illegal content risk assessments.

  • March 2025: Service providers must complete risk assessments and comply with the Codes or equivalent measures. Enforcement begins once Codes pass through Parliament.

Phase 2: Child Safety, Pornography, and Protection of Women and Girls (January–July 2025)

  • January 2025:

    • Final guidance on age assurance for publishers of pornography and children’s access assessments.

    • Services likely accessed by children must begin children’s risk assessments.

  • February 2025: Draft guidance on protecting women and girls will address specific harms affecting them.

  • April 2025: Protection of Children Codes and risk assessment guidance published.

  • July 2025: Child protection duties become enforceable.

Phase 3: Categorisation and Additional Duties (2024–2026)

  • End of 2024: Government to confirm thresholds for service categorisation (Category 1, 2A, or 2B).

  • Summer 2025: Categorised services register published; draft transparency notices follow shortly.

  • Early 2026: Proposals for additional duties on categorised services are expected to be released.

  • 2027: Implementation of the proposals for categorised provider obligations.

Ofcom Roadmap:

Ofcom Roadmap to Regulation

Ofcom Important Dates

Ofcom state, in the 16 December 2024 Overview, that in early 2025, they will seek to enforce compliance with the rules by a combination of means, including:

  1. Supervisory engagement with the largest and riskiest providers to ensure they understand Ofcom’s expectations and come into compliance quickly, pushing for improvements where needed;

  2. Gathering and analysing the risk assessments of the largest and riskiest providers, so that Ofcom can consider whether those providers are identifying and mitigating illegal harms risks effectively;

  3. Monitoring compliance and taking enforcement action across the sector if providers fail to complete their illegal harms risk assessment by 16 March 2025;

  4. Focused engagement with certain high-risk providers to ensure they are complying with the CSAM hash-matching measure, followed by enforcement action where needed; and

  5. Further targeted enforcement action for breaches of the safety duties where they identify serious ongoing issues that represent significant risks to users, to push for improved user outcomes and deter poor compliance.

“We will also use our transparency powers to shine a light on safety matters, share good practice, and highlight where improvements can be made.”

http://www.ofcom.org.uk/siteassets/resources/documents/online-safety/information-for-industry/roadmap/ofcoms-approach-to-implementing-the-online-safety-act/?v=330308

Compliance Monitoring

Ofcom, the UK’s communications regulator, will closely monitor regulated internet services’ adherence to the Act. Breaches could result in substantial penalties, including fines of up to £18 million or 10% of global annual turnover, whichever is greater (Sch. 13).

Balancing Act

Regulated providers face significant operational challenges:

  • Maintaining Free Speech: Striking the right balance between protecting free expression and removing harmful content is critical. Over-moderation risks alienating users, while under-moderation could attract regulatory action.

  • Transparency Burden: Producing detailed reports and disclosing algorithmic processes requires resources and technical clarity.

  • Algorithm Design: Algorithms must meet the dual demands of protecting children and fostering open debate. Regulated internet services may need to invest in redesigning their systems to comply with these requirements.

Despite concerns about his notable interference in UK politics and his role in stirring up anti-Islamic and anti-immigrant sentiment in the UK, Elon Musk is maintaining his aggression against the UK government (and the EU, whose Digital Services Act is in many respects similar to the OSA).

In the summer of 2024, Musk, personally and via his X platform, helped to spread anti-immigrant, anti-Government and anti-Islamic misinformation circulated by right-wing extremists about the tragic stabbings of a number of children and adults in Southport. This culminated in a number of riots across the UK fed by far-right extremists. The young man responsible for the tragic events in Southport was neither a Muslim nor an immigrant.

Read: How Elon Musk Helped Fuel the U.K.’s Far-Right Riots

In respect of the EU DSA, the Commission has already found X in breach over its misuse of verification checkmarks, its blocking of access for researchers, and its lack of transparency in advertising. X remains under investigation for failing to curb (i) the spread of illegal content, such as hate speech or incitement to terrorism, and (ii) information manipulation.

Despite the continuing and accelerating attacks by Musk against the EU and the UK as they try to rein in hate crimes, unlawful content and misinformation on social media platforms, Peter Kyle (the UK’s technology secretary) recently suggested that governments need to show a “sense of humility” with big tech companies and treat them more like nation states.

Marietje Schaake, a former Dutch member of the European parliament and now the international policy director at Stanford University Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), commented on this statement as follows:

“I think it’s a baffling misunderstanding of the role of a democratically elected and accountable leader. Yes, these companies have become incredibly powerful, and as such I understand the comparison to the role of states, because increasingly these companies take decisions that used to be the exclusive domain of the state. But the answer, particularly from a government that is progressively leaning, should be to strengthen the primacy of democratic governance and oversight, and not to show humility. What is needed is self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system, and are not overtaking it.”

 

Hopefully the UK Government will be more aggressive in seeking to bring powerful unelected billionaires (like Elon Musk) and organisations to account. 

It is essential that Ofcom helps platforms get the balance right, as in many cases the freedom to express views that others may find offensive is, within legal limits, a cornerstone of a democratic society.

“If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all.”

(Noam Chomsky)

 

Difference between freedom of speech and abuse of freedom of speech

Clearly there is a big difference between a man or woman on the street taking to a social media platform (or the streets) to express their concerns about policies and politics, and the misuse of platforms (or platform data) or political processes by powerful vested interests to skew public opinion and spread misinformation or even racial or religious prejudice.

With great wealth and power should come great transparency and responsibility (though in our current political landscape it appears the opposite is true). See Democratic Threats above for more analysis on this.

Protecting us from Government Monopolies on Permitted Opinions

In addition to the risk of misinformation, bias and skewed freedom of speech and opinion by operators of social media and AI platforms and apps we must also bear in mind the significant risk of Governments seeking to have a monopoly on which opinions are permitted. This risk is always extremely high, as witnessed, for example, by the anti-scientific approach to any debate in the UK (and US) during COVID. When science meets politics, science invariably suffers.

Civil liberties should not be easily swept away simply by asserting public health grounds or national security grounds.  

Transparency and protection of democracy and free speech must also extend to the impact of indirect political and governmental influence over regulated service providers (i.e. outside of the normal permitted legal channels) and what views about ‘reality’ are permitted. Transparency reports by in-scope providers must include the impact of direct and indirect political pressure and influence.

FAQs

FAQ: Frequently Asked Questions about the UK Online Safety Act

Impact on Major Services

What is the impact for social media platforms like X, Facebook, Instagram and Bluesky?
The UK Online Safety Act 2023 will have significant implications for services like X (formerly Twitter), Instagram, Facebook and Bluesky, particularly concerning freedom of expression, the use of algorithms, transparency, and democratic protections.
 
For example, as a large platform with a substantial UK user base, X will be classified as a Category 1 service, subjecting it to the most stringent requirements of the Act.
 

Freedom of Expression:

The Act strives to balance online safety with the protection of free speech. While requiring platforms to address harmful content, it emphasises upholding freedom of expression and ensuring that legitimate content and diverse viewpoints are not unduly restricted. However, critics have expressed concerns about the potential for overzealous content removal and a chilling effect on free speech, especially given the Act’s broad definition of “content that is harmful to children”. 
 
There are concerns that the robust safety duties might outweigh the “balancing measures” intended to protect freedom of expression.
 
The Act’s impact on freedom of expression for services like X will depend on how Ofcom interprets and enforces its provisions. Striking a balance between user safety and free speech remains a complex challenge.
 

Use of Algorithms:

While the Act doesn’t explicitly mandate transparency about how algorithms are used to manage the risk of misinformation, the emphasis on transparency suggests that algorithms used for content moderation will likely face scrutiny. Ofcom has also highlighted the potential for algorithms to repeatedly expose users, particularly children, to harmful content, emphasising the need for providers to mitigate these risks. 
 
The Act mandates that platforms consider the risks their algorithms pose in relation to illegal content and content that is harmful to children, potentially requiring them to adjust algorithms or platform design to minimise potential harms.
 
X will need to provide information about its algorithms in transparency reports, risk assessments, and terms of service, disclosing how they identify and mitigate harmful content like hate speech and misinformation. X will also need to ensure its algorithms comply with the Act’s requirements for protecting children.
 

Transparency:

Transparency is a key theme in the Act, especially for Category 1 services like X. The Act requires X to be transparent about its content moderation practices, especially those related to content of democratic importance. This includes:
  • Publishing annual transparency reports detailing its content moderation practices, the volume of harmful content removed, the use of algorithms, and their impact on users.
  • Providing clear information in its terms of service explaining its policies on content moderation, user safety, and reporting mechanisms.
  • Disclosing the use of “proactive technology,” such as automated tools or algorithms, used to detect and remove harmful content.
These transparency requirements aim to hold platforms accountable and empower users by providing clarity about how their data is used and content is moderated.
 

Democratic Protections:

The Act includes provisions to protect content of democratic importance, such as news publisher content, journalistic content, and user-generated content that contributes to political debate. Category 1 services like X must implement systems to ensure that decisions regarding content moderation consider the importance of free expression and provide equal treatment to diverse political opinions.
 
However, the Act does not specifically address whether algorithmic requirements apply to content of democratic importance. It remains to be seen how Ofcom will address this in future guidance.
 

Conclusion:

The UK Online Safety Act 2023 will have a significant impact on platforms like X. The Act’s focus on user safety, transparency, and accountability will require X to make substantial changes to its content moderation practices, algorithmic transparency, and approach to democratic content. X’s compliance with the Act will be closely monitored by Ofcom, with potential penalties for breaches.

 
It is crucial for platforms like X to proactively engage with the Act’s requirements and Ofcom’s guidance to ensure compliance and navigate the challenges of balancing online safety with freedom of expression. The Act’s effectiveness will ultimately depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.
What is the impact for AI services like ChatGPT, Gemini, Perplexity and Claude?
The Act will have a significant impact on services like ChatGPT, Gemini, Perplexity, Claude and others, especially given the recent concerns about Generative AI and the potential for misuse.
 
These will usually fall within the definition of user-to-user services, potentially impacting their functionalities, transparency requirements, and approach to user safety.
 
Ofcom published an open letter on 8 November 2024, specifically addressing Generative AI and chatbots in the context of the Act. This letter emphasised the Act’s application to:
  • Services that allow users to interact with and share content generated by AI chatbots. For example, if ChatGPT allows users to share AI-generated text, images, or videos with other users, it would be considered a regulated user-to-user service.
  • Services where users can create and share AI chatbots, known as ‘user chatbots’. This means that any AI-generated content created and shared by these ‘user chatbots’ would also be regulated by the Act.

Ofcom has expressed concerns about the potential for Generative AI chatbots to be used to create harmful content, such as chatbots that mimic real people, including deceased children. These concerns highlight the Act’s focus on protecting users from harmful content generated by AI, even if it is technically ‘user-generated’ through the chatbot interface. The impact on services like ChatGPT is set out below.

Content Moderation:

The Act will require services like ChatGPT to implement robust content moderation mechanisms to prevent the creation and dissemination of illegal content through their platforms. This could include:

    • Monitoring user prompts and chatbot responses to identify and prevent the generation of harmful content, such as hate speech, child sexual abuse material (CSAM), or content promoting terrorism.
    • Developing safeguards to prevent the creation of ‘user chatbots’ that mimic real people or deceased individuals, particularly children.
    • Implementing reporting mechanisms and processes for users to flag potentially harmful chatbot interactions or content.
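The first of the measures above, screening both the user’s prompt and the model’s draft output before release, can be sketched as a two-stage gate. In the hedged example below, the keyword set stands in for a trained harmful-content classifier, and all function names are assumptions for illustration only.

```python
# Hedged sketch of prompt/response screening for a generative AI chatbot.
# A real system would use trained classifiers, not keyword matching.
BLOCKED_TOPICS = {"csam", "terrorism_instructions"}  # stand-ins for classifier labels

def classify(text: str) -> set:
    """Placeholder for a harmful-content classifier returning topic labels found."""
    return {topic for topic in BLOCKED_TOPICS if topic in text.lower()}

def moderate_exchange(prompt: str, draft_response: str):
    """Screen the user prompt first, then the model's draft, before release."""
    if classify(prompt):
        # Refuse harmful requests outright, before generation is attempted.
        return ("refused", "This request cannot be processed.")
    if classify(draft_response):
        # Suppress harmful output the model produced despite a benign prompt.
        return ("suppressed", "Response withheld by safety filters.")
    return ("ok", draft_response)

status, text = moderate_exchange("tell me a story", "Once upon a time...")
```

Screening at both stages matters because, under the Act’s framing, harm can enter either through what users ask for or through what the model generates unprompted.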

Transparency:

The Act’s emphasis on transparency will require services like ChatGPT to provide more information about their content moderation practices and the use of AI in their services. This could include:

  • Publishing transparency reports detailing the volume and nature of harmful content identified and removed, including AI-generated content.
  • Disclosing the use of algorithms and proactive technology to detect and mitigate harmful content.
  • Providing clear information in its terms of service about its approach to user safety and AI-generated content.
Risk Assessments:

Services like ChatGPT will need to conduct thorough risk assessments, evaluating the specific risks associated with Generative AI and chatbots, considering factors like:

  • The likelihood of its functionalities facilitating the presence or dissemination of harmful content, identifying functionalities more likely to do so.

  • How its design and operation, including its business model and use of proactive technology, may reduce or increase the likelihood of users encountering harmful content.

  • The risk of its proactive technology breaching statutory provisions or rules concerning privacy, particularly those relating to personal data processing.

Challenges and Considerations:

Defining Harmful Content: Applying the Act’s broad definitions of harmful content to the context of Generative AI will be complex. Determining what constitutes “harmful” chatbot interactions, considering factors like context, intent, and potential for harm, will require careful consideration.
 

Balancing Safety and Innovation:

Finding a balance between protecting users from harm and fostering innovation in Generative AI will be crucial. Overly restrictive measures could stifle the development and beneficial uses of AI chatbots.

Technical Feasibility:

Implementing effective content moderation and safety measures for a service like ChatGPT, which relies on complex AI models, poses technical challenges. Developing robust and adaptable solutions to mitigate risks associated with Generative AI will require ongoing research and innovation.
 

Conclusion:

The Online Safety Act 2023 represents a significant shift in the regulation of online services, including AI-powered platforms like ChatGPT. The Act’s focus on user safety and transparency will require ChatGPT to adapt its approach, implement robust content moderation, and provide greater transparency about its operations. While the Act presents challenges, it also offers an opportunity for ChatGPT to demonstrate its commitment to responsible AI development and user safety. The evolving nature of Generative AI and the Act’s implementation will require ongoing dialogue between Ofcom, service providers like ChatGPT, and stakeholders to ensure a balanced and effective approach to online safety.
What is the impact for search services like Google Search?
The Act will significantly impact services like Google Search, particularly due to their classification as Category 2A services – high-reach search services.
 
The Act’s focus on user safety, transparency, and accountability will require Google Search to make considerable changes to its content moderation practices, algorithmic transparency, and approach to content of democratic importance. Here’s a breakdown of the likely impacts:
 

Freedom of Expression:

The Act seeks to balance online safety with the protection of free speech. It requires platforms to tackle harmful content while upholding freedom of expression and ensuring legitimate content is not unduly restricted. However, concerns remain regarding the Act’s potential for overzealous content removal and its impact on free speech, mirroring similar concerns raised for services like X.
 

Use of Algorithms:

A key area of impact concerns the use of algorithms, especially those influencing the display, promotion, restriction, or recommendation of content.
 
Google Search will be required to consider the risks its algorithms pose in relation to illegal content and content harmful to children, potentially necessitating adjustments to its algorithms or platform design to minimise harm. The Act’s emphasis on transparency suggests Google’s algorithms will likely face scrutiny regarding content moderation, particularly how they identify and mitigate harmful content, including hate speech, misinformation, CSAM, and content encouraging suicide or self-harm.
 
Google Search will likely need to provide information about its algorithms in transparency reports, risk assessments, and terms of service, disclosing how they identify and mitigate harmful content. They will also need to ensure their algorithms comply with the Act’s requirements for protecting children.
 

Transparency:

The Act mandates transparency for all regulated services, particularly for Category 2A services like Google Search. This includes:
  • Publishing annual transparency reports detailing content moderation practices, including the volume of harmful content removed, the use of algorithms, and their impact on users.
  • Providing clear information in its terms of service explaining its content moderation policies, user safety, and reporting mechanisms.
  • Disclosing the use of “proactive technology,” such as automated tools or algorithms, used to detect and remove harmful content.
These transparency requirements are intended to hold platforms accountable and empower users by providing clarity about how their data is used and content is moderated.
 

Democratic Protections:

The Act includes provisions to safeguard content of democratic importance, such as news publisher content, journalistic content, and user-generated content that contributes to political debate. While the Act emphasises the need to protect democratic content, it doesn’t explicitly address whether algorithmic requirements apply to such content. It remains unclear how Ofcom will address this in its guidance and how Google Search will ensure that decisions regarding content moderation on politically relevant content consider the importance of free expression and provide equal treatment to diverse political opinions.
 

Additional Considerations for Google Search:

  • Specific Risk Factors: Google Search’s risk assessment must consider specific risk factors identified in the Act, including its service type as a general search service, functionalities such as predictive search suggestions, and the presence of child users.
  • Prevalence of Illegal Content: The Act requires Google Search to assess the prevalence of illegal content and content that is harmful to children on its platform. This involves analysing the extent of such content’s dissemination and the severity of the potential harm it poses.
  • Mitigation Measures: Google Search will need to implement proportionate measures to mitigate and manage the risks identified in its risk assessment. This could include measures like age assurance, content moderation, and user reporting mechanisms.

Enforcement and Penalties:

Ofcom will closely monitor Google Search’s compliance with the Act. Penalties for breaches could include fines, service restriction orders, and even criminal sanctions for senior managers.

Conclusion:

The UK Online Safety Act 2023 poses significant challenges and obligations for Google Search. The Act’s focus on user safety, transparency, and accountability will require substantial changes to content moderation practices, algorithmic transparency, and the approach to content of democratic importance. Google’s compliance with the Act will be closely scrutinised, emphasising the need for a proactive and comprehensive approach to meeting its requirements.

The Act’s effectiveness will depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.

What is the impact for AI image generation platforms like Midjourney?
The specific application of the Act to image generation platforms will depend on factors like how they are structured, their user base, and the content they host.
 

Potential impacts:

  • Illegal Content Generation: A primary concern would be the potential for Midjourney to be used to generate illegal content, such as child sexual abuse material (CSAM). The Act requires platforms to take steps to mitigate and effectively manage risks associated with illegal content, which could involve:
    • Implementing safeguards to prevent the generation of illegal images, potentially through content filtering or prompt moderation. This might involve restricting certain prompts or keywords known to be associated with illegal content.
    • Collaborating with law enforcement agencies and organisations like the Internet Watch Foundation (IWF) to identify and remove CSAM. This could include using hash-matching technology to detect known CSAM images.
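The hash-matching idea mentioned above reduces to comparing the hash of an uploaded or generated image against an externally supplied blocklist of known-illegal hashes. The sketch below is a deliberate simplification: real deployments use perceptual hashes (resilient to resizing and re-encoding) supplied by bodies like the IWF, not the cryptographic hash and invented byte strings used here for illustration.

```python
# Simplified sketch of hash-matching against a blocklist of known hashes.
import hashlib

# Assumed stand-in for an externally supplied blocklist (e.g. from the IWF).
known_illegal_hashes = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def matches_known_hash(image_bytes: bytes) -> bool:
    """True if the image's hash appears on the blocklist of known illegal material."""
    return hashlib.sha256(image_bytes).hexdigest() in known_illegal_hashes

blocked = matches_known_hash(b"known-bad-image-bytes")  # exact byte match -> blocked
allowed = matches_known_hash(b"ordinary-image-bytes")   # no match -> allowed
```

The design choice worth noting is that the platform never needs to hold the illegal images themselves, only their hashes, which is what makes the measure workable for providers.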

Harmful Content and Algorithms:

While Midjourney’s primary function is image generation, its algorithms could still be subject to scrutiny under the Act, especially if they influence content recommendations or user exposure to potentially harmful imagery. The Act mandates that platforms consider how algorithms impact user exposure to illegal content and content harmful to children. Midjourney might need to assess how its algorithms could contribute to the spread of harmful content and implement safeguards to minimize risks. For example:

  • Analyzing user prompts and generated images to identify patterns or trends that could indicate harmful content generation.
  • Adjusting algorithms to limit the visibility or recommendation of images that are likely to be harmful.
Note: Midjourney already takes steps to prevent the automatic generation of potentially explicit or defamatory images.
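The algorithmic adjustment described above is often implemented as a re-ranking step: items carrying a moderation risk score are dropped or demoted before recommendations are served. The sketch below is illustrative only; the risk scores, threshold, and penalty are assumptions, not anything a platform has published, and the upstream classifier producing the scores is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    relevance: float  # base recommendation score from the ranking model
    risk: float       # moderation risk score in [0, 1] from an upstream classifier

def rerank(items: list[Item], risk_threshold: float = 0.8,
           penalty: float = 0.5) -> list[Item]:
    """Drop items at or above the risk threshold and demote moderately risky ones."""
    visible = [i for i in items if i.risk < risk_threshold]
    # Penalise the remaining score in proportion to risk, then sort descending.
    return sorted(visible,
                  key=lambda i: i.relevance * (1 - penalty * i.risk),
                  reverse=True)
```

The design choice worth noting is that demotion and removal are separate levers: hard removal above a threshold limits worst-case exposure, while proportional demotion reduces amplification of borderline content without fully suppressing it.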
 

Transparency and Accountability:

The Act’s emphasis on transparency could require Midjourney to:
  • Disclosures: Provide information about its algorithms and content moderation practices, particularly how they address illegal and harmful content. This could involve publishing transparency reports or updating its terms of service.
  • User Controls: Provide users with more control over the content they encounter, potentially through filtering options or reporting mechanisms.
  • Risk Assessments: Midjourney would likely need to conduct thorough risk assessments, specifically evaluating the risks associated with image generation and its potential to facilitate illegal or harmful content. This would involve:
    • Identifying risk factors specific to image generation, such as the ease of creating realistic imagery or the potential for deepfakes.
    • Considering how its design and operation, including its user interface, algorithms, and content moderation processes, could contribute to or mitigate risks.

Additional Considerations:

Categorization: The Act categorizes platforms based on their size, reach, and functionality. Depending on Midjourney’s user base and functionalities, it could fall under different categories, potentially influencing the specific requirements it needs to meet.
 
Emerging Technology: Image generation is a rapidly evolving field. The Act’s focus on being technology-neutral suggests it’s intended to adapt to new technologies, but the specific application to image generation platforms like Midjourney may require further clarification from Ofcom.
 
International Applicability: If Midjourney has a significant UK user base or targets UK users, the Act could apply even if the platform is based outside the UK.
 
In conclusion, Midjourney and similar platforms will need to monitor Ofcom’s guidance and adapt their practices to ensure compliance and mitigate potential risks. The evolving nature of image generation technology and the Act’s implementation will require ongoing dialogue and collaboration between Ofcom, service providers, and stakeholders to ensure a balanced and effective approach to online safety.

Video Conferencing Tools:

Key Considerations:

  • Categorization: The Act’s applicability and specific requirements hinge on how video conferencing tools are categorized. This depends on factors like user base, functionality, and whether they primarily facilitate private or public communication. If categorized as user-to-user services due to features like group chats or content sharing capabilities, they might be subject to more stringent requirements, similar to social media platforms.
 
  • Private vs. Public Communication: A core principle of the Act is the distinction between private and public communication. The Act generally avoids imposing obligations related to private communications, recognizing the importance of privacy. Video conferencing tools primarily used for private one-to-one or small group conversations might fall under this exemption. However, features enabling broader content sharing, recording, or public broadcasting could trigger additional scrutiny.
  • Illegal Content and CSAM: A significant concern is the potential misuse of video conferencing tools for illegal activities, including the creation and distribution of Child Sexual Abuse Material (CSAM). The Act requires platforms to take steps to mitigate and manage risks associated with illegal content. This could impact video conferencing tools in the following ways:
    • Proactive Measures: Platforms are required to implement proactive measures to detect and prevent CSAM, potentially through partnerships with organizations like the Internet Watch Foundation (IWF) or using hash-matching technology to identify known CSAM content.
    • Content Moderation: Depending on categorization and functionalities, video conferencing tools could face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.

Transparency and Reporting:

The Act mandates transparency for regulated services, potentially requiring video conferencing tools to disclose information about their content moderation practices, including the volume of illegal content removed and the use of proactive technologies. This could involve:
  • Publishing transparency reports outlining their approach to content moderation.
  • Updating terms of service to clearly explain user safety measures and reporting mechanisms.

Potential Impacts on Specific Features:

  • Recording and Sharing: Features enabling recording and sharing of video conferencing sessions could be subject to additional scrutiny due to the potential for misuse. Platforms might be required to implement safeguards, such as requiring user consent for recording or limiting sharing capabilities to prevent the spread of illegal content.
  • Livestreaming: If video conferencing tools offer livestreaming functionality, they might face similar obligations as video-sharing platforms, potentially requiring content moderation during livestreams and measures to prevent the broadcast of illegal or harmful content.
  • Messaging and Chat: Integrated messaging and chat functionalities could be treated similarly to other messaging services. Depending on the level of encryption and the platform’s categorization, there could be requirements related to illegal content detection and removal or transparency regarding data access for law enforcement purposes.

Additional Considerations:

Risk Assessments: Video conferencing tool providers will need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platform and features.
 
Age Assurance: If platforms have a significant number of child users or offer features attractive to children, they might need to implement age assurance mechanisms to protect children from harmful content or interactions.
 
Emerging Technologies: The Act is designed to be technology-neutral and adapt to evolving technologies. This could impact video conferencing tools as new features or functionalities emerge, requiring ongoing assessment and adaptation to comply with the Act’s principles.
 

Conclusion:

The specific requirements will depend on how video conferencing tools are categorized and the functionalities they offer.
Gaming Environments:

The UK Online Safety Act 2023 will have a significant impact on popular gaming environments like Roblox and Fortnite, particularly in areas related to content moderation, child safety, and transparency. In addition, as large service providers whose services pose a higher risk to children, they will have additional obligations to take measures to protect children from illegal content and from harm.
 
Gaming environments like Roblox and Fortnite that allow chat fall under the definition of “user-to-user services” under the Act. This categorization means these platforms will have legal responsibilities for keeping users safe online, particularly children, and will need to take steps to mitigate and manage risks associated with illegal and harmful content.
 

Illegal Content:

The Act requires platforms to proactively address illegal content, including Child Sexual Abuse Material (CSAM) and terrorism-related content.
 

Platforms like Roblox and Fortnite will need to implement:

  • Content moderation: Robust content moderation systems to detect and remove illegal content. This might involve using a combination of automated tools (like hash-matching for known CSAM) and human moderators.
  • Collaboration channels: Channels to collaborate with law enforcement agencies and organizations like the IWF to proactively identify and report illegal content.

Child Safety and Grooming:

The Act is particularly focused on protecting children from online harms, including grooming and sexual exploitation. Roblox and Fortnite, given their large child user bases, will need to implement robust safeguards to prevent and detect grooming behaviours. This could include:

  • Enhanced age verification mechanisms to accurately identify child users.
  • Restricting direct messaging or chat functionalities for children or requiring parental consent.
  • Providing educational resources and safety tips to children and parents.
  • Proactively monitoring chat and interactions for suspicious behaviour and patterns that indicate grooming.
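By way of illustration only, the proactive-monitoring point above is often built as a first-pass rule layer that routes risky conversations to human review. The phrases and threshold below are placeholders, not a real detection model; production systems combine behavioural signals with trained classifiers, and anything flagged goes to trained human reviewers rather than triggering automatic action.

```python
import re

# Placeholder risk patterns; real systems use behavioural signals and ML
# classifiers, with human review of every flagged conversation.
RISK_PATTERNS = [
    re.compile(r"\bhow old are you\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (your )?(mum|dad|parents)\b", re.IGNORECASE),
    re.compile(r"\bmove to (another|a different) app\b", re.IGNORECASE),
]

def risk_score(messages: list[str]) -> int:
    """Count pattern hits across a conversation."""
    return sum(1 for msg in messages for pat in RISK_PATTERNS if pat.search(msg))

def needs_review(messages: list[str], threshold: int = 1) -> bool:
    """Route a conversation to human review once the score reaches the threshold."""
    return risk_score(messages) >= threshold
```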

Harmful Content:

While not strictly illegal, certain types of content can be harmful, particularly to children. This includes content promoting self-harm, suicide, eating disorders, violence, and hate speech.
 
Gaming environments and platforms like Roblox and Fortnite will need to develop and implement clear policies on harmful content and establish effective moderation processes to address it.
 
They will also need to consider the impact of their algorithms and recommender systems on user exposure to harmful content. This could involve:
  • Analyzing user-generated content and in-game interactions to identify and address potential risks.
  • Adjusting algorithms to limit the visibility of or recommendations for harmful content.
  • Providing users with tools to manage their online experience and filter out unwanted content.

Transparency and Reporting:

The Act emphasizes transparency and accountability, requiring platforms to be open about their content moderation practices and the risks they face.

Roblox and Fortnite will need to publish regular transparency reports detailing their efforts to combat illegal and harmful content. These reports could include:

  • Data on the volume of content removed or action taken.
  • Information about their content moderation processes and the use of automated tools.
  • Insights into emerging risks and challenges.
 
They might also need to be more transparent about their algorithms and how they impact content visibility and user experience.
 

Risk Assessments:

Providers of services like Roblox and Fortnite will need to conduct thorough risk assessments to identify and evaluate the specific risks of illegal and harmful content on their platforms.
 
These assessments should consider factors like their user base demographics, functionalities (like chat, user groups, and in-game interactions), and business models.
 
The risk assessments should inform their safety strategies and content moderation policies.
 
Overall, the UK Online Safety Act 2023 signifies a significant shift in how online platforms, including popular gaming environments, are expected to approach user safety. Roblox and Fortnite will need to invest in robust safety measures, enhance their content moderation capabilities, and be more transparent about their practices to ensure compliance and protect their users, particularly children, from online harms.

Pornography Platforms:

The UK Online Safety Act 2023 will have a significant impact on major pornography platforms like OnlyFans and Pornhub.
 
These platforms are categorized under the Act as services that publish provider pornographic content, making them subject to specific duties to ensure that such content is not accessible to children.
 
Age Verification:
The Act requires these platforms to implement robust age-verification systems to prevent children from accessing pornographic content. This could involve:
  • Using third-party age verification providers.
  • Implementing stricter identity verification procedures.
  • Using age estimation technology.

 

Illegal Content:

Like other platforms, providers of pornography will need to take steps to mitigate and manage risks associated with illegal content, such as CSAM and extreme pornography. This includes:

  • Proactive Measures: Platforms will need to implement proactive measures to detect and prevent illegal content, including using technology to scan for known illegal content.
  • Content Moderation: Platforms might face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.
  • Risk Assessments: Sites like OnlyFans and Pornhub will need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platforms. Given that research indicates user-to-user pornography services are at a higher risk of hosting intimate image abuse and extreme pornography, these platforms will need to pay particular attention to:
    • User-generated Content: Both platforms host user-generated content, meaning they will need to assess the risks associated with this content, such as the potential for intimate image abuse, extreme pornography, and the exploitation of adults.
    • Functionalities: The risk assessment should consider the role of platform functionalities in facilitating illegal content. For example, the ability to post content anonymously, use direct messaging, and search for content can increase the risk of illegal content being shared.

 

Business Models:

The revenue models of these platforms could also be a risk factor. For example, platforms that rely heavily on advertising revenue may be incentivized to allow content to be uploaded in the most “friction-free” manner, potentially leading to less effective content moderation and an increased risk of illegal content.

Transparency and Reporting:

Sites like OnlyFans and Pornhub will be required to be transparent about their content moderation practices and the volume of illegal content removed, potentially through transparency reports. This could involve:

  • Publishing statistics on the amount of content removed or action taken in response to illegal content reports.
  • Providing information on the processes and technologies used for content moderation, and the use of human moderators.

Additional Considerations:

  • Impact of User Restrictions: The Act emphasizes user safety, but platforms need to be mindful of the potential negative impacts of overly restrictive measures on sex workers, particularly those who rely on these platforms for income. Restrictions that push sex work further underground could increase risks of exploitation and harm.
  • Collaboration with Law Enforcement: Pornography platforms will need to cooperate with law enforcement agencies in investigations related to illegal content. This could involve providing data or assisting with content takedowns.
 

Evolving Nature of the Act:

The Act is designed to be adaptable to technological advancements and evolving online harms. This means platforms will need to stay informed about Ofcom’s guidance and adapt their practices accordingly.
 
In conclusion, the UK Online Safety Act 2023 will necessitate major changes for pornography platforms like OnlyFans and Pornhub. They will need to implement robust age verification systems, strengthen their content moderation practices, be more transparent about their operations, and conduct thorough risk assessments. Striking a balance between user safety and the rights of adult content creators will be a key challenge for these platforms as they adapt to the new regulatory landscape.

Services in/out of scope 

What is the territorial scope?
The OSA applies to services with “links with the UK,” even if they are based overseas (e.g. US-based social networks, AI services, or search services).
 
This includes services with a significant number of UK users, services that target the UK market, and services accessible in the UK where there is a risk of harm to UK individuals.
The OSA has a broad scope, but there are specific types of online services and content that are exempt from its regulatory duties. These exemptions are narrowly defined and online services should exercise caution before assuming they can take advantage of them.
 
Exempt User-to-user Services:
  • Email, SMS, and MMS Services: User-to-user services that solely enable email, SMS messages, or MMS messages as user-generated content (excluding identifying content) are exempt from the Act.
  • Limited Functionality Services: Services that only allow users to post comments or reviews on content generated by the service provider itself are exempt. For example, this would exempt services where users can only post “below the line” comments or reviews on provider-generated articles or products.
  • One-to-one Live Aural Communications: Services that only facilitate one-to-one live audio communications are exempt.
General Exemptions for User-to-user and Search Services:
 
  • Internal Business Services: Services that function as internal resources or tools for businesses and are only accessible to a closed group of individuals connected to the business, such as employees or authorized personnel, are exempt. Examples include business intranets, productivity and collaboration tools, and content management systems.
  • Public Body Services: Services provided by public bodies or educational institutions for the purpose of carrying out their public or educational functions are exempt. This includes services provided by UK public authorities and non-UK entities that exercise public functions.
  • Education and Childcare Provider Services: Certain educational and childcare providers already subject to safeguarding duties that require them to protect children online are exempt to prevent overlapping oversight.
Exempt Content:
  • Paid-for Advertising: Paid-for advertising content is generally excluded from the scope of the Act. However, larger providers are still subject to a duty to protect users from fraudulent advertising.
  • Comments and Reviews on Provider Content: User-generated comments and reviews specifically relating to content published by the service provider are exempt. This exemption applies to comments and reviews on news publisher sites and many sites selling goods and services.
  • Text-only Pornographic Content: The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs.

Combined Services:

Services that combine features of both user-to-user and search services, such as a social media platform with a built-in search engine, are not exempt from either regime; they are subject to the duties applicable to both types of services.

Ofcom has released a tool to check if your services are in scope:

Ofcom – check if you are in scope
