EU Digital Services Act


The European Union’s Digital Services Act (DSA), adopted in 2022, represents a significant update to the legal framework governing online intermediaries, particularly concerning illegal content, transparent advertising, the use of algorithms and the spread of disinformation or misinformation.

This regulation modernises the principles established in the Electronic Commerce Directive 2000, aiming to create a more accountable digital ecosystem.

A key feature of the DSA is its tiered approach, which imposes increasing obligations on different categories of online services, with the most stringent requirements applying to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) – like Google, Apple, X, and (more indirectly) services like ChatGPT and others – due to their substantial reach and social impact.

A guide to the EU Digital Services Act

Introduction

Very Large Online Platforms (VLOPs) – like “X” (formerly Twitter) – and Very Large Online Search Engines (VLOSEs) – like “Google” – are designated by the European Commission based on having more than 45 million monthly active users in the EU. They face the most stringent obligations under the DSA: a comprehensive set of requirements designed to address the unique risks associated with platforms of their scale and influence. These obligations span various aspects of platform operation, including the handling of illegal content, transparency of content moderation, user redress, risk assessment, data access, and crisis response.

The DSA also applies to smaller online operators, although they face fewer obligations compared to VLOPs and VLOSEs. The DSA has a tiered system, meaning the requirements are proportionate to the size and nature of the services provided.

VLOPs and VLOSEs face the most stringent obligations under the DSA, including:

  • Systemic Risk Assessment and Mitigation: They must identify, analyse, and mitigate systemic risks related to illegal content, fundamental rights (including freedom of expression, media freedom, non-discrimination, and children’s rights), public security, electoral processes, and mental and physical well-being. This includes risks associated with the design and functioning of their services, including algorithmic systems and AI tools.
  • Content Moderation:
    • Implement effective and transparent mechanisms for users to report illegal content.
    • Cooperate with “trusted flaggers” for priority handling of reported illegal content.
    • Provide clear and specific reasons to users when their content is removed or access is restricted, with options for appeal.
    • Publish transparency reports on their content moderation decisions.
  • Transparency Obligations:
    • Provide greater transparency about advertising, including why a user is seeing a particular ad and who paid for it. VLOPs/VLOSEs must maintain a repository of advertisements.
    • Offer users more control over recommender systems, including the option to opt out of personalised content recommendations.
    • Disclose the parameters used in their recommender systems.
  • Protection of Minors: Implement measures to ensure a high level of privacy, safety, and security for minors, including preventing targeted advertising based on their personal data.
  • Crisis Response Mechanism: In times of crisis (e.g., pandemics, wars), they may be required to take specific measures to address the spread of harmful content.
  • Independent Audits: Undergo independent audits at least annually to assess compliance with DSA obligations.
  • Data Access for Researchers: Provide access to their data to vetted researchers for the purpose of studying systemic risks.
  • Internal Compliance Function: Establish an internal function to ensure compliance with the DSA.
  • Point of Contact: Designate a single point of contact for authorities and users.
  • Reporting Criminal Offences: Report suspicions of serious criminal offences to law enforcement.
  • Terms and Conditions: Ensure user-friendly and transparent terms and conditions.

The DSA’s impact on AI tools is more indirect, chiefly affecting their use within social media platforms and search engines. These platforms must ensure that their AI systems:

  • Function Transparently: Disclose when AI is being used in content moderation or advertising and provide information about its accuracy, error rates, and the role of human review.   
  • Are Accountable: Be subject to risk assessments to identify and mitigate potential harms, including bias and the spread of disinformation.   
  • Comply with the EU AI Act: AI systems used by VLOPs/VLOSEs must also adhere to the requirements of the EU AI Act, ensuring transparency, fairness, and accountability.  
  • Address Risks of Generative AI: Platforms whose services can be used to create or disseminate generative AI content need to assess and mitigate specific risks, such as clearly labelling AI-generated content (e.g., deepfakes).   

One of the fundamental obligations for VLOPs under the DSA concerns the management of illegal content.

Articles 14, 16, and 22 of the DSA mandate that VLOPs implement effective notice and action mechanisms that allow users to easily report the presence of illegal content on their platform. Furthermore, VLOPs are required to act promptly upon receiving orders from EU authorities to take action against illegal content or to provide relevant information.

To ensure clarity and accountability, VLOPs must establish and maintain clear and publicly accessible terms and conditions that outline their policies regarding illegal content.

The DSA also emphasises the importance of cooperation with “trusted flaggers,” specialised entities recognised for their expertise in identifying and reporting illegal content, requiring VLOPs to prioritise and act upon notifications received from these sources.

The definition of illegal content under the DSA is broad, encompassing any information that does not comply with the law of the EU or a member state. This includes categories such as child sexual abuse material, terrorist content, and certain forms of hate speech, such as the trivialisation and denial of genocide.

The DSA’s focus on these measures underscores the EU’s commitment to ensuring that VLOPs take proactive steps to combat the dissemination of illegal content and create a safer online environment.

Transparency is a cornerstone of the DSA, with Articles 15, 24, and 42 imposing significant reporting obligations on VLOPs regarding their content moderation activities.

VLOPs are required to publish transparency reports at least every six months, providing detailed information about their content moderation teams, including the qualifications and linguistic expertise of the personnel involved. These reports must also include comprehensive data on the number of orders received from EU authorities to act against illegal content, the volume of illegal content notices submitted by EU users (including those from trusted flaggers), the actions taken by the platform in response to these notices, and the time taken to implement such actions.

VLOPs also need to provide meaningful and comprehensible information about their own-initiative content moderation efforts, including the number and types of measures taken against content, and specific details on the use and accuracy of any automated content moderation tools.

In cases where content is removed or access is restricted, VLOPs, as online platforms, are obligated to provide a clear and specific statement of reasons to the affected users, explaining the grounds for the moderation decision. These statements of reasons must also be submitted in a pseudonymised format to the DSA Transparency Database, a central repository maintained by the European Commission. Additionally, VLOPs are required to report on the number of out-of-court dispute settlements initiated by users regarding content moderation decisions, the median time needed for resolution, and the proportion of disputes where the platform has implemented the decision rendered by the settlement body.

VLOPs must also disclose the number of service suspensions imposed on users who frequently provide illegal content. These extensive transparency requirements aim to ensure that VLOPs are open and accountable regarding their content moderation practices, allowing for greater scrutiny by users, regulators, and the public.

The DSA also grants users important rights through user redress mechanisms, as outlined in Articles 17 and 20. Online platforms, including VLOPs, are required to establish an internal complaint-handling system that enables users whose content has been affected by content moderation decisions to lodge electronic complaints within a reasonable timeframe. Users are also empowered to challenge content removal or restriction decisions through independent out-of-court dispute settlement mechanisms. VLOPs are obligated to report on the number of complaints received through their internal systems and the median time taken to resolve these complaints.

Given the significant potential for VLOPs to disseminate illegal content and cause societal harms, the DSA imposes stringent obligations related to risk assessment and mitigation, as detailed in Articles 34 and 35. VLOPs are required to proactively identify, analyse, and assess systemic risks associated with the design, functioning, and use of their services. These risks include the dissemination of illegal content, negative effects on fundamental rights (such as freedom of expression and media pluralism), negative impacts on civic discourse, electoral processes, public security, and public health, as well as negative effects related to gender-based violence, the protection of minors, and mental and physical well-being.

Based on these risk assessments, VLOPs are obligated to implement specific, effective, and proportionate measures to mitigate the identified risks. These measures can include adapting the design or functioning of their services, modifying their content moderation systems, or altering their recommender algorithms. To ensure ongoing compliance, VLOPs must establish an internal compliance function that oversees the implementation and effectiveness of these risk mitigation measures. They are also required to undergo an independent audit at least once a year to assess their compliance with the DSA and to adopt measures that respond to the auditor’s recommendations.

Furthermore, VLOPs have an obligation to report to the relevant authorities any criminal offences that they become aware of through their platform. These comprehensive risk assessment and mitigation requirements underscore the responsibility of VLOPs to actively manage the potential harms that can arise from their operations.

Article 40 of the DSA further emphasises transparency and accountability by requiring VLOPs to provide access to certain data for regulatory oversight and independent research. VLOPs are obligated to share their data with the European Commission and relevant national authorities to enable them to monitor and assess compliance with the DSA. VLOPs must allow vetted researchers, who meet specific criteria, to access platform data when the research contributes to the detection, identification, and understanding of systemic risks within the EU, as well as to the assessment of the adequacy, efficiency, and impacts of risk mitigation measures taken by the platform. 

The DSA also includes a crisis response mechanism in Article 36, which allows the European Commission to take swift action in extraordinary circumstances. Where a crisis occurs, defined as extraordinary circumstances leading to a serious threat to public security or public health in the EU or significant parts of it, the Commission, acting upon a recommendation of the European Board for Digital Services, may adopt a decision requiring one or more VLOPs or VLOSEs to take specific actions. 

These actions can include assessing the extent to which their services contribute to the serious threat, identifying and applying specific, effective, and proportionate measures to prevent, eliminate, or limit any such contribution, and reporting to the Commission on their assessments and the measures taken. 

When implementing such measures, VLOPs must duly consider the gravity of the threat, the urgency of the measures, and the potential implications for the rights and legitimate interests of all parties concerned. The actions required by the Commission must be strictly necessary, justified, proportionate, and limited to a period not exceeding three months. This crisis response mechanism highlights the critical role that VLOPs play in the dissemination of information during times of crisis and provides a framework for coordinated action to mitigate potential harms.

An analysis of X’s content moderation policies and practices reveals several concerning characteristics that have prompted multiple European Commission investigations.

X has publicly stated its adoption of a “Freedom of Speech, not Freedom of Reach” approach, which allegedly involves restricting the visibility of posts that violate its policies rather than removing them from the platform outright.

X has also reinstated the accounts of several high-profile controversial individuals who were previously suspended for violating the platform’s rules. To address misinformation and provide context to potentially misleading posts, X has utilised a Community Notes model, which allows users to collaboratively add notes to tweets. Reports suggest that X has reduced the overall scope of its content moderation efforts, prioritising the removal of illegal content but potentially allowing a greater volume of “lawful but awful” content, such as hate speech and harassment, to remain on the platform. Elon Musk’s support for far-right and hard-right political parties in Europe, such as the AfD, and the platforming of these parties on X, have forced the EU to widen its investigation into how X amplifies certain content and minimises other content.

In September 2024, X released its first transparency report since 2021, covering the period from January to June 2024. This report indicated a significant increase in account suspensions compared to the first half of 2022, with over 5.3 million accounts suspended, including a substantial number related to child sexual exploitation material. However, the report also showed a notable decrease in account suspensions for hateful conduct compared to figures reported by Twitter in 2022.

X employs a combination of automated systems, including machine learning, and human review to enforce its content moderation policies. The platform has faced scrutiny within Europe regarding its approach to content moderation, particularly in relation to the spread of misinformation and harmful content.

The European Commission has initiated formal proceedings against X to investigate potential breaches of the DSA across several areas, including risk management, content moderation practices, the use of dark patterns in its user interface, advertising transparency, and the level of data access provided to researchers. Preliminary findings from this investigation suggest that X may be in breach of the DSA due to the design and operation of its “verified” accounts, which are considered misleading (a dark pattern), the ineffectiveness of its advertising repository, and the restrictions it places on data access for researchers. X has also faced legal challenges in the EU in the form of GDPR complaints concerning its artificial intelligence training practices, with allegations that the platform is unlawfully using European users’ personal data without obtaining their consent. These various aspects of X’s content moderation policies and the regulatory responses they have elicited highlight potential areas of non-compliance with the EU DSA.

Based on the analysis of the DSA obligations for VLOPs and X’s current practices, several potential areas of non-compliance can be identified. The platform’s reduced scope of content moderation and its reliance on “Community Notes” as a primary mechanism for addressing harmful content may not constitute the sufficiently robust action against illegal content required by Articles 14, 16, and 22 of the DSA. This approach could potentially fall short of the obligation to take effective measures to prevent the dissemination of illegal content.

While X has released a transparency report, the ongoing EU investigation and its preliminary findings suggest potential shortcomings in advertising transparency and the accessibility of data for researchers, which may indicate non-compliance with Articles 15, 24, and 42. Additionally, the transparency of algorithm changes and the functioning of the “Community Notes” system warrant closer examination to ensure they meet the DSA’s standards.

The effectiveness and accessibility of X’s internal complaint-handling system, and its engagement with out-of-court dispute settlement bodies, are also being assessed against the requirements of Articles 17 and 20. The EU Commission’s investigation into X’s risk assessment report suggests potential inadequacies in identifying and mitigating systemic risks, particularly concerning the spread of disinformation and its impact on civic discourse, which could indicate a breach of Articles 34 and 35. The preliminary findings of the EU investigation explicitly cite X’s restrictions on data access for researchers as a potential violation of Article 40.

The EU Commission’s preliminary view that the design and operation of X’s “verified” accounts constitute dark patterns that mislead users indicates a potential breach of Article 25, which prohibits deceptive user interfaces.

Finally, the behaviour of Elon Musk during this investigation has shown contempt for EU legislation and its political and legislative bodies. Musk’s relationship with the Trump administration and his ownership of strategic services like Starlink make settling these issues highly political. The US administration has already waded into the debate in an attempt to head off EU regulation of US technology platforms.

Article 52 of the DSA stipulates that breaches of its obligations can result in fines of up to 6% of the VLOP’s global annual turnover.

For the provision of incorrect, incomplete, or misleading information to regulators, fines can reach up to 1% of the worldwide annual turnover.

Additionally, the European Commission can impose periodic penalty payments of up to 5% of the average daily worldwide turnover for each day of delay in complying with remedies, interim measures, and commitments.

In cases of persistent infringement that causes serious harm and involves criminal offences threatening life or safety, the Commission has the authority to request the temporary suspension of the service as a last resort.
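As a rough illustration of how these percentage caps scale, the sketch below computes the ceilings for a hypothetical annual worldwide turnover. The 6%, 1% and 5% figures come from the Act as described above; the turnover figure, the function itself, and the division of annual turnover by 365 to approximate "average daily worldwide turnover" are illustrative assumptions, not the Commission's actual fining methodology.

```python
# Hypothetical illustration of the DSA penalty ceilings described above.
# The percentage caps come from the text; the turnover figure and the
# annual/365 approximation of "average daily turnover" are assumptions.

def max_dsa_penalties(annual_worldwide_turnover_eur: float) -> dict:
    """Upper bounds on DSA penalties for a given annual worldwide turnover."""
    return {
        # Art. 52: up to 6% of global annual turnover for breaches
        "infringement_cap": 0.06 * annual_worldwide_turnover_eur,
        # Up to 1% for incorrect, incomplete or misleading information
        "misleading_info_cap": 0.01 * annual_worldwide_turnover_eur,
        # Up to 5% of average daily worldwide turnover, per day of delay
        "daily_penalty_cap": 0.05 * (annual_worldwide_turnover_eur / 365),
    }

caps = max_dsa_penalties(10_000_000_000)  # hypothetical EUR 10bn turnover
```

For a hypothetical €10 billion turnover, this gives ceilings of €600 million for an infringement, €100 million for misleading information, and roughly €1.37 million per day of delayed compliance.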

Notably, there are reports indicating that the EU is considering a fine against X in the vicinity of $1 billion for violations of the DSA. Because X was owned by Elon Musk personally during the period in question, the turnover of his other private companies (such as Starlink and SpaceX) may be relevant to the calculation of a turnover-based fine.

Under the DSA’s tiered system, requirements are proportionate to the size and nature of the services provided. Smaller operators, including SMEs, generally need to:

  • Establish a Point of Contact: Designate a single point of contact for authorities and users within the EU.
  • Implement Notice and Action Mechanisms: Put in place user-friendly mechanisms for users to report illegal content and for the platform to act on these notices.
  • Provide Transparency: Be transparent about their content moderation policies and provide reasons for content removal.
  • Comply with Traceability Requirements (for online marketplaces): Ensure that traders on their platforms are traceable.
  • Protect Minors: Implement appropriate measures to protect minors from harmful content and targeted advertising.  
  • Ban on Certain Targeted Advertising: Not target advertising at minors or based on sensitive personal data.   
  • Ban on Dark Patterns: Avoid using misleading interface designs that manipulate users’ choices.

The same percentage-based fines apply to smaller operators, although, because their turnover is typically much lower, the absolute scale of any fine is correspondingly smaller.


UK Online Safety Act 2023 (OSA)


The UK Online Safety Act 2023 (OSA or Act) is a landmark law reshaping how platforms like X (formerly Twitter), Instagram, Threads and Bluesky operate in the UK. It also impacts search applications, pornographic content, gaming platforms and AI generated content platforms.

With a strong focus on protecting democratic content, increasing transparency, and curbing hate speech, the Act imposes strict obligations on platforms to balance user safety with free expression. One of the primary focuses of the OSA is to protect children from harmful content; however, it also aims to protect against hate speech and threats to democracy. This know-how page explores how the changes required by the OSA will affect the policies, algorithms, and content management systems of social media platforms and AI content generation applications, with a focus on its impact on adults, social media and social networks.

Major Compliance Deadline: Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.

Online Safety Act

“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.” – Dame Melanie Dawes, Ofcom Chief Executive

Introduction

The UK Online Safety Act 2023 (OSA or Act) imposes significant obligations on social media platforms, interactive websites and applications (including group video) and AI apps.

The OSA represents a significant regulatory shift, especially for platforms like regulated internet services that play a pivotal role in public discourse.

By enhancing transparency, refining algorithms, and protecting democratic content, regulated internet services have the opportunity to demonstrate leadership in compliance and user safety. However, navigating these new requirements will require substantial effort, resources, and innovation.

As Ofcom enforces the Act, the response of regulated internet services providers will set a precedent for how social media platforms can adapt to an evolving regulatory landscape. However, there are significant concerns that Ofcom (and by extension the UK Government) will be slow and timid in its roll-out of key aspects of the OSA and in its guidance and enforcement action. Humility is clearly not going to work with persons who wish to undermine democratic institutions and freedom of expression. Any further delays in rolling out the core obligations (particularly for Category 1 services) are going to be deeply damaging to UK democracy.
Read: Time for tech firms to act: UK online safety regulation comes into force
Read: Is Ofcom about to delay action on fake and anonymous accounts until 2027?
The OSA affects how these user-to-user and search service providers manage:
  • Their new ‘Duty of Care’
  • Illegal content (including hate crimes)
  • Protecting children from ‘harmful content’ including grooming, bullying and harassment
  • Ensuring transparency
  • Protecting journalistic content and democratic discourse; and
  • Managing algorithmic impacts.
Other than the duty of care and the requirement to protect all UK persons from illegal content and children from online harm, the full scope of the duties depends on the categorisation of the services provider.

The OSA requires providers to take extra measures to protect children even if the content is not illegal content.

  • The OSA requires services to proactively prevent children from encountering primary priority content, which includes pornography, content promoting self-harm, eating disorders, or suicide. This is a central focus of the legislation for protecting children from the most harmful content.
  • The OSA mandates services to protect children from priority content, such as bullying, hate speech, and violent content. Services are expected to tailor these protections to specific age groups based on the risks identified.
  • The OSA also obligates services to assess risks associated with non-designated harmful content through children’s risk assessments and implement measures to mitigate these risks.
  • Regulated Services: Encompasses internet services, user-to-user services, and search services that meet specific thresholds defined in the Act.
    • User-to-User Services: These are internet services where users can generate, upload, or share content accessible to others on the same platform.
      • Examples include social media platforms, forums, and collaborative applications.
      • This includes AI image and AI content generation platforms (like Grok, ChatGPT, Gemini, etc.).
      • The definition is comprehensive, capturing services even if user interaction is not a primary feature. Exemptions apply to limited functions like private business communications and one-to-one messaging services such as email and SMS/MMS.
    • Search Services: These include any internet service functioning as a search engine, allowing users to search across websites or databases.
      • This category extends beyond traditional search engines (like Google) to any platform offering a search or filtering capability, such as websites with tag-based filtering.
      • Services operating as both user-to-user and search services are classified as “combined services” and must comply with the obligations of both categories
    • Internet Services: An internet service, other than a regulated user-to-user service or a regulated search service, that falls within section 80(2) or Schedule 2 (primarily relating to pornographic content).
See the Frequently Asked Questions (FAQs) further below for out-of-scope services.

All online regulated services within scope of the OSA must protect UK users from illegal content and, where applicable, protect children from online harm. However, additional more detailed obligations apply to specified categories of service provider.

The OSA, and additional regulations to be published pursuant to it, are expected to categorise services providers as follows:
  • Category 1: Services with a significant number of UK users and functionalities that pose higher risks of harm. Ofcom has advised that this category should capture services that meet one of the following criteria:
    • Use content recommender systems and have more than 34 million UK users (approximately 50% of the UK population).
    • Allow users to forward or reshare user-generated content, use content recommender systems, and have more than 7 million UK users (approximately 10% of the UK population).
  • Category 2A: Services with a moderate reach and risk profile, likely to be the highest reach search services. Ofcom recommends that this category include search services (excluding vertical search engines) with over 7 million UK users.
  • Category 2B: Services with a moderate reach and risk profile, likely to be other user-to-user services with potentially risky functionalities or characteristics. Ofcom recommends that this category target services allowing direct messaging, with over 3 million UK users.
Once the thresholds are set, Ofcom will publish a register of categorised services in the summer of 2025. Ofcom anticipates that the final thresholds will result in 35 to 60 services being categorised. Most in-scope service providers will not be categorised (as they will not be sufficiently large) and so will not be subject to the additional category duties (summarised below).
Read: Ofcom Guide to Categories and Requirements
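Ofcom's advised thresholds above can be sketched as a simple decision function. This is an illustrative assumption, not Ofcom's methodology: the function name, parameter names, and "first match wins" ordering are ours, real designation is a formal decision by Ofcom, and combined services must meet the duties of every category they fall into, which this sketch does not capture.

```python
# Sketch of Ofcom's advised OSA categorisation thresholds as described above.
# The 34m / 7m / 3m UK-user thresholds come from the text; everything else
# (names, ordering, return values) is an illustrative assumption.

def osa_category(uk_users: int,
                 is_search: bool = False,
                 is_vertical_search: bool = False,
                 has_recommender: bool = False,
                 allows_resharing: bool = False,
                 allows_direct_messaging: bool = False):
    if is_search:
        # Category 2A: highest-reach search services, excluding vertical search
        if not is_vertical_search and uk_users > 7_000_000:
            return "Category 2A"
        return None
    # Category 1: content recommender system and >34m UK users (~50% of population)
    if has_recommender and uk_users > 34_000_000:
        return "Category 1"
    # Category 1: resharing + recommender system and >7m UK users (~10% of population)
    if has_recommender and allows_resharing and uk_users > 7_000_000:
        return "Category 1"
    # Category 2B: direct messaging and >3m UK users
    if allows_direct_messaging and uk_users > 3_000_000:
        return "Category 2B"
    return None  # baseline duties only; no additional category duties
```

On these thresholds, a recommender-driven platform with 40 million UK users would fall into Category 1, a general search service with 8 million UK users into Category 2A, and a 4-million-user messaging service into Category 2B.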
Ofcom Summary
Ofcom have recently published a summary of their decisions in their Illegal Harms statement (the “Statement”), which outlines the measures they recommend and the services to which they apply. It sets out:
  • The detailed measures they are recommending for user-to-user (U2U) services;
  • The detailed measures they are recommending for search services;
  • Their guidance for risk assessment duties, applicable to all U2U and search services; and
  • Their guidance for record-keeping and review duties, applicable to all U2U and search services.
The guidance sets out more than 40 safety measures that must be introduced by March 2025.
A snapshot of some of the measures is set out below; see the Statement for the full table covering all service providers.
Governance and Accountability
  • Regular reviews of risk management activities and internal monitoring.
  • Clear designation of an individual responsible for illegal content safety and reporting.
  • Documented responsibilities and codes of conduct for staff.
  • Tracking of emerging illegal harms.
Content Moderation
  • Implementation of content moderation systems (both human and automated).
  • Establishment of internal content policies and performance targets.
  • Prioritisation and resourcing of content review efforts.
  • Training for content moderation staff and provision of materials for volunteers.
Reporting and Complaints
  • Mechanisms for user complaints and reporting of illegal content.
  • Clear communication and timelines for handling complaints.
  • Processes for handling appeals and specific types of complaints.
User Controls and Support
  • Safety features for child users (e.g., default settings).
  • Terms of service that are clear and accessible.
  • Support services for child users.
  • Tools for user blocking and muting.
Additional Measures
  • Specific measures for recommender systems (e.g., safety metrics collection).
  • Removal of accounts associated with proscribed organisations.
  • Labelling schemes for notable users and monetised content.
  • Dedicated reporting channels for trusted flaggers.
Search-Specific Measures
  • Moderation of search results and predictive search suggestions.
  • Provision of content warnings and crisis prevention information.
  • Publicly available statements about content safety measures.
Ofcom Statement: Protecting people from illegal harms online
  • Free Speech: Regulated category 1 service providers must safeguard diverse political opinions, journalistic content, and democratic discourse while complying with moderation obligations.
  • Algorithm Transparency: All categorised service providers must provide detailed disclosures about how their algorithms identify harmful content, moderate misinformation, and serve recommendations.
  • Protect Children from Harm: All providers must take extra measures to protect children, even where the content concerned is not illegal.
  • Harmful and Criminal Content Management: All providers must implement robust systems to detect and remove illegal criminal content and provide clear reporting tools for users. Category 1 providers must also take extra measures to enable adult users to reduce their exposure to legal but potentially harmful content.
  • User Control & Identity Verification: Category 1 providers must empower users with tools to manage their online experience such as use of personalised filters and ID verification.
  • Codes of Practice: Managing and understanding the practical compliance obligations for providers and users will be with reference to Ofcom guidance and codes of practice which assist to interpret the law. Under the OSA, Ofcom is required to prepare and issue the following separate Codes of Practice:
    • Codes of Practice for terrorism content
    • Codes of Practice for child sexual exploitation and abuse (CSEA) content
    • Codes of Practice for the purpose of compliance with the relevant duties relating to illegal content and harms.

Illegal content is defined broadly to encompass a wide range of what are known as priority offences. These include:

  • Terrorism: Content that promotes, glorifies, or incites terrorism
  • Child Sexual Exploitation and Abuse (CSEA): Material depicting or promoting child abuse
  • Sexual Exploitation of Adults
  • Threats, Abuse & Harassment including Hate Crimes: Content that incites violence or hatred based on protected characteristics.
  • Unlawful Pornographic Content: image-based sexual offences.
  • Fraud: Deceptive or misleading content intended to defraud users.
  • Suicide: Assisting or encouraging suicide.
  • Buying/Selling unlawful items: e.g. buying or selling drugs or weapons.
See the Ofcom Background Guidance (‘Protecting people from illegal harms online’) for more information.
Illegal Content Judgments
  1. Providers must conduct “suitable and sufficient” Illegal Content Risk Assessments (ICRAs) that consider the risks of users encountering illegal content, including “priority illegal content”.
  2. Providers must make illegal content judgments based on “reasonable grounds to infer,” a lower threshold than the criminal standard of “beyond reasonable doubt.” This means that there must be reasonable grounds to infer that:
    • The conduct element of a relevant offence is present or satisfied.
    • The state of mind element of that same offence is present or satisfied.
    • There are no reasonable grounds to infer that a relevant defence is present or satisfied.
    Freedom of expression and privacy must be considered when making these judgments.
  3. When service providers are alerted to the presence of illegal content or are aware of its presence in any other way, they have a duty to operate using proportionate systems and processes designed to “swiftly take down” any such content. This is referred to as the “takedown duty”.
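The three-limb “reasonable grounds to infer” test above can be sketched in code. This is a hypothetical illustration of the decision logic only; in practice these are contextual, often human judgments, as the Illegal Content Judgements Guidance makes clear.

```python
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    """Evidence gathered about a piece of flagged content (hypothetical model)."""
    conduct_element_inferred: bool   # reasonable grounds: conduct element satisfied
    state_of_mind_inferred: bool     # reasonable grounds: mental element satisfied
    defence_inferred: bool           # reasonable grounds: a relevant defence applies

def is_illegal_content(a: ContentAssessment) -> bool:
    """Apply the three-limb test: content is treated as illegal only if the
    conduct and state-of-mind elements are both inferred AND no relevant
    defence is inferred."""
    return (
        a.conduct_element_inferred
        and a.state_of_mind_inferred
        and not a.defence_inferred
    )
```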
Ofcom has issued the Illegal Content Judgements Guidance (ICJG) to support providers in understanding their regulatory obligations when making judgments about whether content is illegal under the OSA. It provides guidance on how to identify and handle illegal content, while considering freedom of expression and privacy. The ICJG outlines the legal framework for various offences, the importance of context, jurisdictional issues, and the handling of reports and flags. It also offers specific guidance on various offence categories, including the conduct and mental elements, as well as relevant defences.

User-to-user services: For brevity, given the scope of the OSA, I will focus on the Codes and Guidance for user-to-user services (the most relevant category for social network platforms like X). The Illegal Content Codes of Practice for search services are available here.

The draft Illegal content Codes of Practice for user-to-user services has been published with measures recommended for providers to comply with the following duties:
  • The illegal content safety duties set out in section 10(2) to (9) of the Act;
  • The duty for content reporting set out in section 20 of the Act, relating to illegal content; and
  • The duties about complaints procedures set out in section 21 of the Act, relating to the complaints requirements in section 21(4).
Section 3 of the document provides an index of recommended measures, including the application, relevant codes, and relevant duties for each measure. The recommended measures cover a range of areas, including governance and accountability, content moderation, reporting and complaints, recommender systems, settings, functionalities and user support, terms of service, user access, and user controls.
The recommended measures are set out in Section 4 of the document and are divided by thematic area:
  • Governance and Accountability
    • Large services should conduct an annual review of risk management activities related to illegal harm in the UK.
    • All services should designate an individual accountable for compliance with illegal content safety and reporting/complaints duties.
    • Large or multi-risk services should:
      • have written statements of responsibilities for senior managers involved in risk management.
      • have an internal monitoring and assurance function to assess the effectiveness of harm mitigation measures.
      • track and report evidence of new or increasing illegal content.
      • have a code of conduct setting standards for protecting users from illegal harm.
      • provide compliance training to individuals involved in service design and operation.
    Content Moderation
    • All services should have a content moderation function to review and assess suspected illegal content and take it down swiftly.
    • Large or multi-risk services should:
      • set and record internal content policies, performance targets, and prioritize content for review.
      • provide training and materials for content moderators (including volunteers) and use hash-matching to detect CSAM.
    Reporting and Complaints
    • All services should have accessible and user-friendly systems for reporting and complaints, and take appropriate action on complaints.
    • Larger services and those at risk of illegal harm should provide information about complaint outcomes and allow users to opt out of communications.
    • Specific requirements apply to handling complaints that are appeals or relate to proactive technology.
    Recommender Systems
    • Services conducting on-platform testing of recommender systems and at risk of multiple harms should collect and analyse safety metrics.
    Settings, Functionalities and User Support
    • Services with age-determination capabilities and at risk of grooming should implement safety defaults for child users and provide support.
    • All services should have terms of service that address illegal content and complaints, and these terms should be clear and accessible.
    User Access
    • All services should remove accounts of proscribed organisations.
    User Controls
    • Large services at risk of specific harms should offer blocking, muting, and comment-disabling features.
    Notable User and Monetised Labelling Schemes
    • Large services with labelling schemes for notable or monetised users should have policies to reduce the risk of harm associated with these schemes.
  • Implementing the recommended measures will involve the processing of personal data, and service providers are expected to comply fully with data protection law when taking measures for the purpose of complying with their online safety duties.
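To illustrate the hash-matching measure recommended above for detecting CSAM, here is a deliberately simplified sketch. Real deployments use perceptual hashing against curated hash databases (so matches survive resizing and re-encoding); this example uses exact SHA-256 digests purely to show the lookup pattern, and all names and values are hypothetical.

```python
import hashlib

# Hypothetical hash list of known prohibited images. Production systems use
# curated perceptual-hash databases, not raw cryptographic digests.
KNOWN_HASHES = {
    hashlib.sha256(b"known-prohibited-image-bytes").hexdigest(),
}

def matches_known_content(image_bytes: bytes, hash_list: set = KNOWN_HASHES) -> bool:
    """Return True if the upload's digest appears in the hash list.

    Exact hashing only detects byte-identical copies; perceptual hashing is
    needed to catch modified versions of the same image.
    """
    return hashlib.sha256(image_bytes).hexdigest() in hash_list
```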

The purpose of ICRAs is to help service providers understand how different kinds of illegal harm could arise on their service and what safety measures need to be put in place to protect users. ICRAs must be ‘suitable and sufficient’ for a provider to meet its OSA obligations.

The Risk Assessment Guidance and Risk Profiles recommends that service providers consider two main types of evidence when conducting a risk assessment:
  1. Core inputs: This type of evidence should be considered by all service providers and includes risk factors identified through the relevant Risk Profile, user complaints and reports, user data (such as age, language, and groups at risk), retrospective analysis of incidents of harm, relevant sections of Ofcom’s Register of Risks, evidence drawn from existing controls, and other relevant information (including other characteristics of the service that may increase or decrease the risk of harm).
  2. Enhanced inputs: This type of evidence should be considered by large service providers and those who have identified multiple specific risk factors for a kind of illegal content. Examples of enhanced inputs include results of product testing, results of content moderation systems, consultation with internal experts on risks and technical mitigations, views of independent experts, internal and external commissioned research, outcomes of external audit or other risk assurance processes, consultation with users, and results of engagement with relevant representative groups.
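The two tiers of evidence above could be modelled as follows. This is a sketch only: the input lists paraphrase the guidance, and the trigger of “multiple specific risk factors” is interpreted here as more than one.

```python
# Paraphrased from the Risk Assessment Guidance: evidence all providers consider.
CORE_INPUTS = [
    "risk factors from the relevant Risk Profile",
    "user complaints and reports",
    "user data (age, language, at-risk groups)",
    "retrospective analysis of incidents of harm",
    "relevant sections of Ofcom's Register of Risks",
    "evidence from existing controls",
    "other relevant service characteristics",
]

# Additional evidence for large providers or those with multiple risk factors.
ENHANCED_INPUTS = [
    "product testing results",
    "content moderation system results",
    "internal expert consultation",
    "independent expert views",
    "commissioned research",
    "external audit or risk assurance outcomes",
    "user consultation",
    "engagement with representative groups",
]

def evidence_inputs(is_large: bool, specific_risk_factors: int) -> list:
    """Return the evidence types a provider should consider (sketch)."""
    inputs = list(CORE_INPUTS)
    if is_large or specific_risk_factors > 1:
        inputs += ENHANCED_INPUTS
    return inputs
```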
The different types of illegal content that must be assessed are:
  • The 17 kinds of priority illegal content: terrorism; child sexual exploitation and abuse (CSEA), including grooming and child sexual abuse material (CSAM); hate; harassment, stalking, threats and abuse; controlling or coercive behaviour; intimate image abuse; extreme pornography; sexual exploitation of adults; human trafficking; unlawful immigration; fraud and financial offences; proceeds of crime; drugs and psychoactive substances; firearms, knives and other weapons; encouraging or assisting suicide; foreign interference; and animal cruelty.
  • Other illegal content: This includes non-priority illegal content as described in the Register of Risks and potentially other offences depending on the specific service and evidence available.
Additional factors that service providers should consider when carrying out an illegal content risk assessment:
  • Service characteristics: The characteristics of the service, such as its user base (e.g., age, language, vulnerable groups), functionalities (e.g., live streaming, anonymous posting), and business model, can affect the level of risk.
  • Risk factors: The Risk Profiles published by Ofcom identify specific risk factors associated with each type of illegal content. Service providers should consider these risk factors and any additional factors specific to their service.
  • Likelihood and impact of harm: The assessment should consider the likelihood of each type of illegal content occurring on the service and the potential impact of that content on users and others.
  • Existing controls: The effectiveness of any existing measures to mitigate or control illegal content should be considered.
  • Evidence: Service providers should use a variety of evidence to inform their risk assessment, including user complaints, data analysis, and external research.
Categorised service providers also have the following additional duties regarding their illegal content risk assessments:
  • Publication of Summary: They must publish a summary of their most recent illegal content risk assessment. Category 1 services must include this summary in their terms of service, while Category 2A services must include it in a publicly available statement. The summary should include the findings of the assessment, including the levels of risk and the nature and severity of potential harm to individuals.
  • Provision of Assessment to Ofcom: They must provide Ofcom with a copy of their risk assessment record as soon as reasonably practicable after completing or revising it.

The Online Safety Act (s.61) defines content that is harmful to children as:

  • ‘Primary priority content’ being:
    • Pornographic content
    • Content which encourages, promotes or provides instructions for suicide.
    • Content which encourages, promotes or provides instructions for an act of deliberate self-injury.
    • Content which encourages, promotes or provides instructions for an eating disorder or behaviours associated with an eating disorder.
  • Section 62 defines other priority content that can be harmful to children and must be managed appropriately. It includes:
    • Bullying and cyberbullying
    • Abusive or hateful content
    • Content depicting or encouraging serious violence
    • Content promoting dangerous stunts or challenges
    • Content encouraging the ingestion or exposure to harmful substances
Platforms must ensure that access to these types of content is age-appropriate and that protections are in place for children.

The OSA prioritises protecting UK users from online harms.

(1) This Act provides for a new regulatory framework which has the general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom. (2) To achieve that purpose, this Act (among other things)— (a) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from— (i) illegal content and activity, and (ii) content and activity that is harmful to children, and (b) confers new functions and powers on the regulator, OFCOM.
The Act outlines specific age and identity verification requirements, particularly for platforms categorised as Category 1 services, which are likely to have a significant number of users and offer a wide range of functionalities. In addition, platforms that are clearly aimed at pornography consumption must carry out age assurance checks.

Age Assurance

  • “Highly Effective” Age Verification or Estimation Required: The Act mandates that services likely to be accessed by children use age verification or age estimation methods that are “highly effective” at correctly determining whether a user is a child. This applies across all areas of the service, including design, operation, and content.
  • Self-Declaration Not Sufficient: Simple self-declaration of age is not considered a valid form of age verification or age estimation.
  • Ofcom Guidance on Effectiveness: Ofcom, the designated regulator, is responsible for providing guidance on what constitutes “highly effective” age assurance. This guidance will include examples of effective and ineffective methods, and principles to be considered.
  • Factors for Effective Age Assurance: Ofcom’s guidance suggests that effective age assurance methods should be technically accurate, robust, reliable, and fair. They should be easy to use and work effectively for all users, regardless of their characteristics.
  • Recommended Methods: Ofcom has recommended methods like credit card checks, open banking, and photo ID matching as potentially highly effective.
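As a sketch of how a provider might encode Ofcom's position that self-declaration is insufficient while certain methods may be highly effective. The method labels here are illustrative, not an official taxonomy, and Ofcom's final guidance governs what actually qualifies.

```python
# Methods Ofcom has indicated may be "highly effective" (illustrative labels).
HIGHLY_EFFECTIVE_METHODS = {"credit_card_check", "open_banking", "photo_id_matching"}

# Self-declaration of age is explicitly not a valid form of age assurance.
INSUFFICIENT_METHODS = {"self_declaration"}

def is_acceptable_age_assurance(method: str) -> bool:
    """Return True only for methods on the highly-effective list; reject
    self-declaration and anything unrecognised."""
    if method in INSUFFICIENT_METHODS:
        return False
    return method in HIGHLY_EFFECTIVE_METHODS
```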
Transparency and Reporting Requirements: Platforms using age assurance must clearly explain their methods in their terms of service and provide detailed information in a publicly available statement. They must also keep written records of their age assurance practices and how they considered user privacy.

Ofcom Reports on Age Assurance Use:

Ofcom will assess how providers use age assurance and its effectiveness, reporting on any factors hindering its implementation.

Identity Verification

  • Category 1 Services Must Offer Identity Verification: The Act requires Category 1 services (like major social media platforms) to offer all adult users in the UK the option to verify their identity, unless identity verification is already necessary to access the service.
  • No Specific Method Mandated: The Act does not specify a particular method of identity verification. Platforms can choose a method that works for their service, but it must be clearly explained in their terms of service.
  • Documentation Not Required: The identity verification process does not necessarily need to involve providing documentation.

User Empowerment Features:

Identity verification is linked to user empowerment features, as platforms must offer adult users the ability to:
  • Control their exposure to harmful content.
  • Choose whether to interact with content from verified or non-verified users.
  • Filter out non-verified users.
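The “filter out non-verified users” empowerment tool above could, in its simplest form, look like the following. This is a hypothetical sketch of the feature, not any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_verified: bool
    text: str

def filter_feed(posts: list, hide_non_verified: bool) -> list:
    """Apply the user's preference to hide content from non-verified users."""
    if not hide_non_verified:
        return posts
    return [p for p in posts if p.author_verified]
```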

Ofcom Guidance for Category 1 Services:

Ofcom is expected to provide guidance for Category 1 services on implementing identity verification, with a focus on ensuring availability for vulnerable adult users.

General Considerations
  • The Act aims to strike a balance between online safety and freedom of expression, and this balance influences the implementation of age and identity verification requirements.
  • Specific details regarding the application of these requirements are still under development, and Ofcom is working on codes of practice and guidance to provide further clarification.
The age and identity verification requirements under the UK Online Safety Act 2023 aim to enhance online safety, particularly for children and vulnerable adults. The Act focuses on the effectiveness of these measures, transparency from platforms, and user empowerment to control their online experiences.
The UK Online Safety Act’s requirements regarding pornography vary between specialised pornography platforms and other internet services like search engines.

Specialised pornography platforms:

For specialised pornography platforms, which are classified as “services that feature provider pornographic content”, the Act imposes a duty to ensure children are not normally able to encounter regulated provider pornographic content. This means these platforms will have to implement robust age verification or age estimation systems, and the Act emphasises that these measures must be “highly effective” at determining whether a user is a child.

The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs. However, pornographic content in image, video, or audio form falls under this definition and triggers the age assurance obligations.

The Act also mandates that these platforms, along with other user-to-user services and search services, conduct risk assessments. These assessments should identify and mitigate potential harms related to illegal content, including child sexual abuse material (CSAM) and extreme pornography. Research indicates that user-to-user pornography services are particularly vulnerable to these types of illegal content: for example, one study found that a user-to-user pornography website hosted nearly 60,000 videos under phrases associated with intimate image abuse. Additionally, evidence suggests that some services hosting pornographic content prioritise user growth over content moderation, leading to less effective detection and removal of extreme content.

For other regulated internet services:

For other internet services, like search engines, the Act’s impact is more indirect. While search engines are not directly obligated to implement age verification, they are still subject to the requirement to mitigate and manage the risks of harm from illegal content and content harmful to children. This includes content that may be accessed through search results, even if the search engine itself does not host the content. For example, evidence suggests that search engines can be used to access websites offering illegal items like drugs and firearms. The Act acknowledges that search engines are often the starting point for many users’ online journeys and that they play a crucial role in making content accessible.

Search engines are also subject to risk assessments. Given the potential for users to find illegal content through search, they are expected to consider how their functionalities, like image/video search and reverse image search, might increase risks.

Furthermore, even if pornography is not their core function or purpose, platforms like X (formerly Twitter) and Reddit, which allow users to share user-generated content, including pornographic material, are classified as user-to-user services and subject to the relevant duties under the Act. This means they too must conduct risk assessments, consider the risks associated with user-generated pornographic content, and implement measures to mitigate those risks.

In conclusion, the Online Safety Act has significant implications for both specialised pornography platforms and other internet services that may have links to pornography. The Act aims to protect children from accessing pornographic content through robust age verification measures, and seeks to reduce the prevalence of illegal content on these platforms through risk assessments and content moderation practices.
The Act’s wide scope means that even platforms where pornography is not the main focus are still obligated to address the risks associated with such content.

The OSA imposes specific obligations on Category 1 services due to their reach and influence. These rules aim to safeguard the diversity of opinions and the integrity of democratic debate whilst minimising harmful speech. See also ‘Democratic Threats‘ below.

Key Requirements
  • Content of Democratic Importance: Providers must ensure moderation processes do not disproportionately suppress political opinions or stifle democratic discussion. This includes protecting content from verified news publishers, journalistic pieces, and user-generated contributions to political debates.
  • Equal Treatment of Opinions: Decisions about content moderation must respect free expression and avoid bias against particular political viewpoints. This includes avoiding overzealous removals under policies aimed at combating misinformation or hate speech.
  • Protection of Journalistic Content: Articles and posts deemed to have journalistic value must not be unjustly removed or suppressed, ensuring the platform remains a space for investigative reporting and public interest stories. Platforms must protect:
    • Verified news publishers’ content.
    • Journalistic content, even if shared by individual users.
    • User-generated contributions to political debates.
While regulated internet services are required to remove illegal or harmful content, the OSA emphasises the need to uphold free speech. Providers must develop policies and systems that balance protecting users from harm and allowing diverse viewpoints to flourish. The requirement for transparency reports that include moderation policies and actions will be crucial here:
  • Detailed Reporting: all categorised regulated internet services must publish annual transparency reports explaining their algorithms’ role in content moderation and misinformation detection. These reports should detail the volume of flagged and removed content, alongside the impact of moderation algorithms on users.
  • Proactive Technology Disclosure: providers must disclose any automated systems, such as machine learning tools, used to detect harmful or illegal content.
  • Terms of Service Clarity: providers must clearly explain their policies on algorithmic decision-making, especially regarding content of democratic importance and misinformation.

User Empowerment Tools

The Act promotes user choice and control by requiring platforms to provide tools that help users manage their online experience. For example:
  • Users can gain more insight into how recommendation systems work.
  • Platforms could be required to offer non-personalised feeds that reduce reliance on algorithm-driven content.
  • Category 1 services must provide adult users with control features that effectively:
    • Reduce the likelihood of encountering specific types of legal but potentially harmful content, such as content promoting suicide, self-harm, or eating disorders.
    • Offer features to filter out interactions with non-verified users.
    • Clearly explain the available control features and their usage in the terms of service.
See ‘Clean up the internet’ recommendations to Ofcom

Algorithm Transparency 

Algorithms are central to how regulated internet services moderate content, serve recommendations, and filter harmful material. The Act introduces transparency and accountability measures to ensure these algorithms and systems are safe and fair:
  • Providers must be transparent about how their algorithms function and the potential impact on users’ exposure to illegal content.
    • They must include provisions in their terms of service or publicly available statements that specify how individuals are protected from illegal content, including details about the design and operation of algorithms used.
    • Additionally, they must provide information about any proactive technology used for compliance, including how it works, and ensure this information is clear and accessible.
  • Category 1 providers have an additional duty to summarise the findings of their most recent ICRAs in their terms of service, including the level of risk associated with illegal content. Factors like the speed and reach of content distribution facilitated by algorithms must be considered. These assessments must be updated regularly to reflect changes in Ofcom’s Codes of Practice (COPs), risk profiles, and the provider’s business practices.
  • Safer Algorithms for Children: If regulated internet services are accessed by children, their algorithms must minimise exposure to harmful content. This includes age-appropriate design measures and risk assessments targeting features that could harm younger users.

AI Chatbots: It is very likely that services such as ChatGPT, Gemini and Perplexity will be categorised as user-to-user services, as they allow users to interact with a generative AI chatbot and share chatbot-generated text, images and other user-generated content with other users.

Art: Services such as Midjourney are also in scope.

Generative AI Tools and Pornographic Content: Services featuring AI tools capable of generating pornographic material are additionally regulated and must implement highly effective age assurance measures to prevent children from accessing such content.

Generative AI and Search Services: AI tools enabling searches across multiple websites or databases are considered search services under the OSA. This includes tools that modify or augment search results on existing search engines or offer live internet results on standalone platforms. Consequently, these AI-powered search services will need to comply with the relevant duties outlined in the Act.

Ofcom Guidance regarding generative AI and AI chatbots

Combating hate speech is a cornerstone of the OSA. Regulated providers must take decisive measures to reduce the prevalence of illegal hate speech and implement systems for detection, reporting, and removal of hate speech.

Key Duties for Platforms
  • Illegal Content Detection: Hate speech is classified as priority illegal content, requiring regulated internet services to identify and remove such material promptly.
  • Risk Assessments: Regulated providers must evaluate the risks of hate speech on their platform and develop proportionate systems to manage and mitigate these risks.
  • Clear Reporting Mechanisms: The platform must provide users with accessible tools to flag hate speech. Reports must be acted upon swiftly, with outcomes communicated transparently.
Transparency in Moderation

To meet the Act’s transparency standards, regulated service providers must:
  • Publish data on the volume and nature of hate speech flagged, reviewed, and removed.
  • Explain their systems for detecting and moderating hate speech in their transparency reports.
By addressing hate speech robustly, regulated service providers can meet their legal obligations while fostering a safer environment for users.

The process of bringing the Online Safety Act into law has been winding and subject to lengthy delays. Many of the provisions of the OSA came into force on 10 January 2024 (including the new duty of care) for all regulated online services, as did many of the powers needed by Ofcom as the regulator responsible for enforcing the OSA. However, implementation has required Ofcom consultation and the issuance of Codes and guidance.

Major Deadline: All providers have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes, or use other effective measures, to protect users from illegal content and activity. Additional key protections in respect of Category 1 providers (like X) are unlikely to be in force until 2026 or 2027. Further delay on major platforms now looks very dangerous (see ‘Democratic Threats’ below). The Secretary of State (Schedule 10) will determine regulations specifying Category 1, 2A, and 2B threshold conditions for different types of services. Commencement dates for remaining provisions of the Act will be set by future regulations under Section 240.

Phased roll-out: Regulated service providers must take steps to comply with new duties following Ofcom guidance, which is to be published in phases:

Phase 1: Illegal Harms (December 2024–March 2025)
  • December 2024: Ofcom will release the Illegal Harms Statement, including:
    • Illegal Harms Codes of Practice.
    • Guidance on illegal content risk assessments.
  • March 2025: Service providers must complete risk assessments and comply with the Codes or equivalent measures. Enforcement begins once Codes pass through Parliament.
Phase 2: Child Safety, Pornography, and Protection of Women and Girls (January–July 2025)
  • January 2025:
    • Final guidance on age assurance for publishers of pornography and children’s access assessments.
    • Services likely accessed by children must begin children’s risk assessments.
  • April 2025: Protection of Children Codes and risk assessment guidance published.
  • July 2025: Child protection duties become enforceable.
  • February 2025: Draft guidance on protecting women and girls will address specific harms affecting them.
Phase 3: Categorisation and Additional Duties (2024–2026)
  • End of 2024: Government to confirm thresholds for service categorisation (Category 1, 2A, or 2B).
  • Summer 2025: Categorised services register published; draft transparency notices follow shortly.
  • Early 2026: Proposals for additional duties on categorised services are expected to be released.
  • 2027: Implementation of the proposals for categorised provider obligations.
Ofcom Roadmap: see Ofcom’s Roadmap to Regulation and its list of Important Dates.

Ofcom state, in their 16 December 2024 Overview, that in early 2025 they will seek to enforce compliance with the rules by a combination of means, including:

  1. Supervisory engagement with the largest and riskiest providers to ensure they understand Ofcom’s expectations and come into compliance quickly, pushing for improvements where needed;
  2. Gathering and analysing the risk assessments of the largest and riskiest providers so they can consider whether they are identifying and mitigating illegal harms risks effectively;
  3. Monitoring compliance and taking enforcement action across the sector if providers fail to complete their illegal harms risk assessment by 16 March 2025;
  4. Focused engagement with certain high-risk providers to ensure they are complying with the CSAM hash-matching measure, followed by enforcement action where needed; and
  5. Further targeted enforcement action for breaches of the safety duties where they identify serious ongoing issues that represent significant risks to users, to push for improved user outcomes and deter poor compliance.
“We will also use our transparency powers to shine a light on safety matters, share good practice, and highlight where improvements can be made.”
http://www.ofcom.org.uk/siteassets/resources/documents/online-safety/information-for-industry/roadmap/ofcoms-approach-to-implementing-the-online-safety-act/?v=330308

Compliance Monitoring

Ofcom, the UK’s communications regulator, will closely monitor regulated internet services’ adherence to the Act. Breaches could result in substantial penalties, including fines of the greater of £18 million or 10% of global annual turnover (Sch. 13).

Balancing Act

Regulated providers face significant operational challenges:
  • Maintaining Free Speech: Striking the right balance between protecting free expression and removing harmful content is critical. Over-moderation risks alienating users, while under-moderation could attract regulatory action.
  • Transparency Burden: Producing detailed reports and disclosing algorithmic processes requires resources and technical clarity.
  • Algorithm Design: Algorithms must meet the dual demands of protecting children and fostering open debate. Regulated internet services may need to invest in redesigning their systems to comply with these requirements.
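The penalty ceiling described under Compliance Monitoring (the greater of £18 million or 10% of global annual turnover) reduces to a one-line rule. A minimal sketch, with hypothetical turnover figures (the function name is my own):

```python
def max_osa_fine(global_annual_turnover_gbp: float) -> float:
    """Upper bound on an OSA fine (Sch. 13): the greater of
    GBP 18 million or 10% of global annual turnover."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# Hypothetical figures: for a £100m-turnover provider the £18m floor
# applies; for a £5bn-turnover provider the 10% limb dominates.
small_cap = max_osa_fine(100_000_000)      # £18m
large_cap = max_osa_fine(5_000_000_000)    # £500m
```

In practice the actual fine would be set below this ceiling according to the seriousness of the breach; the sketch only captures the statutory maximum.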

Despite concerns about his notable interference in UK politics and his stirring up of anti-Islamic and anti-immigrant sentiment in the UK, Elon Musk is maintaining his aggression against the UK government (and against the EU, whose Digital Services Act is in many respects similar to the OSA).

In the summer of 2024, Musk personally, and via his X platform, helped right-wing extremists to spread anti-immigrant, anti-government and anti-Islamic misinformation about the tragic stabbings of a number of children and adults in Southport. This culminated in a number of riots across the UK fed by far-right extremists. The young man responsible for the tragic events in Southport was neither a Muslim nor an immigrant. Read: How Elon Musk Helped Fuel the U.K.’s Far-Right Riots

In respect of the EU DSA, the Commission has already found X in breach over its misuse of verification checkmarks, its blocking of access for researchers and its lack of transparency in advertising. X remains under investigation for failing to curb (i) the spread of illegal content — hate speech or incitement of terrorism — and (ii) information manipulation.

Despite the continuing and accelerating attacks by Musk against the EU and the UK as they try to rein in hate crimes, unlawful content and misinformation on social media platforms, Peter Kyle (the UK’s technology secretary) recently suggested that governments need to show a “sense of humility” with big tech companies and treat them more like nation states. Marietje Schaake, a former Dutch member of the European Parliament and now international policy director at the Stanford University Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centred Artificial Intelligence (HAI), commented on this statement as follows:
“I think it’s a baffling misunderstanding of the role of a democratically elected and accountable leader. Yes, these companies have become incredibly powerful, and as such I understand the comparison to the role of states, because increasingly these companies take decisions that used to be the exclusive domain of the state. But the answer, particularly from a government that is progressively leaning, should be to strengthen the primacy of democratic governance and oversight, and not to show humility. What is needed is self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system, and are not overtaking it.”
Hopefully the UK Government will be more aggressive in seeking to bring powerful unelected billionaires (like Elon Musk) and organisations to account.

It is essential that Ofcom helps platforms get the balance right: in many ways, the right to be offended by someone else’s views is a cornerstone of a democratic society.

“If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all.” (Noam Chomsky)
Difference between freedom of speech and abuse of freedom of speech

Clearly there is a big difference between a man or woman on the street taking to a social media platform (or the streets) to express their concerns about policies and politics, and the misuse of platforms (or platform data) or political processes by powerful vested interests to skew public opinion and spread misinformation or even racial or religious prejudice. With great wealth and power should come great transparency and responsibility (though in our current political landscape it appears the opposite is true). See Democratic Threats above for more analysis.

Protecting us from Government Monopolies on Permitted Opinions

In addition to the risk of misinformation, bias and skewed freedom of speech and opinion by operators of social media and AI platforms and apps, we must also bear in mind the significant risk of governments seeking a monopoly on which opinions are permitted. This risk is always extremely high, as witnessed, for example, by the anti-scientific approach to any debate in the UK (and US) during COVID. When science meets politics, science invariably suffers. Civil liberties should not be easily swept away simply by asserting public health or national security grounds. Transparency and protection of democracy and free speech must also extend to the impact of indirect political and governmental influence over regulated service providers (i.e. outside the normal permitted legal channels) and over what views about ‘reality’ are permitted. Transparency reports by in-scope providers must include the impact of direct and indirect political pressure and influence.
 The UK Online Safety Act 2023 (OSA or Act) imposes significant obligations on social media platforms, interactive websites and applications (including group video) and AI apps.

The OSA represents a significant regulatory shift, especially for regulated internet services that play a pivotal role in public discourse.

By enhancing transparency, refining algorithms, and protecting democratic content, regulated internet services have the opportunity to demonstrate leadership in compliance and user safety. However, navigating these new requirements will require substantial effort, resources, and innovation.

As Ofcom enforces the Act, the response of regulated internet service providers will set a precedent for how social media platforms can adapt to an evolving regulatory landscape. However, there are significant concerns that Ofcom (and by extension the UK Government) will be slow and timid in its roll-out of key aspects of the OSA and in its guidance and enforcement action. Humility is clearly not going to work with persons who wish to undermine democratic institutions and freedom of expression. Any further delays in rolling out the core obligations (particularly for Category 1 services) would be deeply damaging to UK democracy. Read: Time for tech firms to act: UK online safety regulation comes into force; Is Ofcom about to delay action on fake and anonymous accounts until 2027?

The OSA affects how user-to-user and search service providers manage:
  • Their new ‘Duty of Care’
  • Illegal content (including hate crimes)
  • Protecting children from ‘harmful content’ including grooming, bullying and harassment
  • Ensuring transparency
  • Protecting journalistic content and democratic discourse; and
  • Managing algorithmic impacts.
Other than the duty of care and the requirement to protect all UK persons from illegal content and children from online harm, the full scope of the duties depends on the categorisation of the services provider.

The OSA requires providers to take extra measures to protect children even if the content is not illegal content.

  • The OSA requires services to proactively prevent children from encountering primary priority content, which includes pornography, content promoting self-harm, eating disorders, or suicide. This is a central focus of the legislation for protecting children from the most harmful content.
  • The OSA mandates services to protect children from priority content, such as bullying, hate speech, and violent content. Services are expected to tailor these protections to specific age groups based on the risks identified.
  • The OSA also obligates services to assess risks associated with non-designated harmful content through children’s risk assessments and implement measures to mitigate these risks.
  • Regulated Services: Encompasses internet services, user-to-user services, and search services that meet specific thresholds defined in the Act.
    • User-to-User Services: These are internet services where users can generate, upload, or share content accessible to others on the same platform.
      • Examples include social media platforms, forums, and collaborative applications.
      • This includes AI image and AI content generation platforms (like Grok, ChatGPT, Gemini etc.).
      • The definition is comprehensive, capturing services even if user interaction is not a primary feature. Exemptions apply to limited functions like private business communications and one-to-one messaging services such as email and SMS/MMS.
    • Search Services: These include any internet service functioning as a search engine, allowing users to search across websites or databases.
      • This category extends beyond traditional search engines (like Google) to any platform offering a search or filtering capability, such as websites with tag-based filtering.
      • Services operating as both user-to-user and search services are classified as “combined services” and must comply with the obligations of both categories.
    • Internet Services: An internet service, other than a regulated user-to-user service or a regulated search service, that is within section 80(2) or Schedule 2 (primarily relating to pornographic content).
See Frequently Asked Questions (FAQs) further below for out of scope services.

All online regulated services within scope of the OSA must protect UK users from illegal content and, where applicable, protect children from online harm. However, additional more detailed obligations apply to specified categories of service provider.

The OSA, and additional regulations to be published pursuant to it, are expected to categorise services providers as follows:
  • Category 1: Services with a significant number of UK users and functionalities that pose higher risks of harm. Ofcom has advised that this category should capture services that meet one of the following criteria:
    • Use content recommender systems and have more than 34 million UK users (approximately 50% of the UK population).
    • Allow users to forward or reshare user-generated content, use content recommender systems, and have more than 7 million UK users (approximately 10% of the UK population).
  • Category 2A: Services with a moderate reach and risk profile, likely to be the highest reach search services. Ofcom recommends that this category include search services (excluding vertical search engines) with over 7 million UK users.
  • Category 2B: Services with a moderate reach and risk profile, likely to be other user-to-user services with potentially risky functionalities or characteristics. Ofcom recommends that this category target services allowing direct messaging, with over 3 million UK users.
Once the thresholds are set, Ofcom will publish a register of categorised services in the summer of 2025. Ofcom anticipates that the final thresholds will result in 35 to 60 services being categorised. Most in-scope service providers will not be categorised (as they will not be sufficiently large) and so will not be subject to the additional category duties (summarised below). See Ofcom’s Guide to Categories and Requirements.
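The advised thresholds above lend themselves to a simple rule sketch. The following Python function is illustrative only: the function name and return labels are my own, the final thresholds will be set by secondary legislation, and a combined service would need to be run through both the user-to-user and search limbs:

```python
def categorise(uk_users: int, has_recommender: bool, allows_resharing: bool,
               is_search: bool, allows_direct_messaging: bool,
               is_vertical_search: bool = False) -> str:
    """Sketch of Ofcom's advised category thresholds (illustrative only).
    Assumes a Category 1 designation takes precedence over Category 2B."""
    if is_search:
        # Category 2A: high-reach search services, excluding vertical search.
        if not is_vertical_search and uk_users > 7_000_000:
            return "Category 2A"
        return "uncategorised"
    # Category 1: recommender system plus >34m users, or recommender
    # system plus resharing plus >7m users.
    if has_recommender and (uk_users > 34_000_000 or
                            (allows_resharing and uk_users > 7_000_000)):
        return "Category 1"
    # Category 2B: direct messaging plus >3m users.
    if allows_direct_messaging and uk_users > 3_000_000:
        return "Category 2B"
    return "uncategorised"
```

For example, a social network with a recommender system, resharing and 40 million UK users falls in Category 1, while a 5-million-user messaging-led service falls in Category 2B.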
Ofcom Summary
Ofcom have recently published a summary of their decisions in their Illegal Harms statement (the “Statement”), which outlines the recommended measures and the services to which they apply. It sets out:
  • The detailed measures they are recommending for user-to-user (U2U) services;
  • The detailed measures they are recommending for search services;
  • Their guidance for risk assessment duties, applicable to all U2U and search services; and
  • Their guidance for record-keeping and review duties, applicable to all U2U and search services.
The guidance sets out more than 40 safety measures that must be introduced by March 2025.
A snapshot of some of the measures follows; see the Statement for the full table covering all service providers.
Governance and Accountability
  • Regular reviews of risk management activities and internal monitoring.
  • Clear designation of an individual responsible for illegal content safety and reporting.
  • Documented responsibilities and codes of conduct for staff.
  • Tracking of emerging illegal harms.
Content Moderation
  • Implementation of content moderation systems (both human and automated).
  • Establishment of internal content policies and performance targets.
  • Prioritisation and resourcing of content review efforts.
  • Training for content moderation staff and provision of materials for volunteers.
Reporting and Complaints
  • Mechanisms for user complaints and reporting of illegal content.
  • Clear communication and timelines for handling complaints.
  • Processes for handling appeals and specific types of complaints.
User Controls and Support
  • Safety features for child users (e.g., default settings).
  • Terms of service that are clear and accessible.
  • Support services for child users.
  • Tools for user blocking and muting.
Additional Measures
  • Specific measures for recommender systems (e.g., safety metrics collection).
  • Removal of accounts associated with proscribed organisations.
  • Labelling schemes for notable users and monetised content.
  • Dedicated reporting channels for trusted flaggers.
Search-Specific Measures
  • Moderation of search results and predictive search suggestions.
  • Provision of content warnings and crisis prevention information.
  • Publicly available statements about content safety measures.
Ofcom Statement: Protecting people from illegal harms online
  • Free Speech: Regulated category 1 service providers must safeguard diverse political opinions, journalistic content, and democratic discourse while complying with moderation obligations.
  • Algorithm Transparency: All categorised service providers must provide detailed disclosures about how their algorithms identify harmful content, moderate misinformation, and serve recommendations.
  • Protect Children from Harm: All providers must take extra measures to protect children even if the content is not illegal content.
  • Harmful and Criminal Content Management: All providers must implement robust systems to detect and remove illegal content and provide clear reporting tools for users. Category 1 providers must also take extra measures to enable adult users to reduce their exposure to legal but potentially harmful content.
  • User Control & Identity Verification: Category 1 providers must empower users with tools to manage their online experience such as use of personalised filters and ID verification.
  • Codes of Practice: Managing and understanding the practical compliance obligations for providers and users will be with reference to Ofcom guidance and codes of practice which assist to interpret the law. Under the OSA, Ofcom is required to prepare and issue the following separate Codes of Practice:
    • Codes of Practice for terrorism content
    • Codes of Practice for child sexual exploitation and abuse (CSEA) content
    • Codes of Practice for the purpose of compliance with the relevant duties relating to illegal content and harms.

Illegal content is defined broadly to encompass a wide range of what are known as priority offences. These include:

  • Terrorism: Content that promotes, glorifies, or incites terrorism
  • Child Sexual Exploitation and Abuse (CSEA): Material depicting or promoting child abuse
  • Sexual Exploitation of Adults
  • Threats, Abuse & Harassment including Hate Crimes: Content that incites violence or hatred based on protected characteristics.
  • Unlawful Pornographic Content: image-based sexual offences.
  • Fraud: Deceptive or misleading content intended to defraud users.
  • Suicide: Assisting or encouraging suicide.
  • Buying/Selling unlawful items: e.g. buying or selling drugs or weapons.
See the Ofcom Background Guidance (‘Protecting people from illegal harms online’) for more information. Illegal Content Judgments
  1. Providers must conduct “suitable and sufficient” Illegal Content Risk Assessments (ICRAs) that consider the risks of users encountering illegal content, including “priority illegal content”.
  2. Providers must make illegal content judgments based on “reasonable grounds to infer”, a lower threshold than the criminal standard of “beyond reasonable doubt”. This means that there must be reasonable grounds to infer that:
    • The conduct element of a relevant offence is present or satisfied.
    • The state of mind element of that same offence is present or satisfied.
    • There are no reasonable grounds to infer that a relevant defence is present or satisfied.
    Freedom of expression and privacy must be considered when making these judgments.
  3. When service providers are alerted to the presence of illegal content or are aware of its presence in any other way, they have a duty to operate using proportionate systems and processes designed to “swiftly take down” any such content. This is referred to as the “takedown duty”.
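The three-limb “reasonable grounds to infer” test above is, structurally, a conjunction. A minimal sketch (names are my own; a real judgment also weighs freedom of expression, privacy and the contextual factors the ICJG describes, none of which reduce to booleans):

```python
from dataclasses import dataclass

@dataclass
class Grounds:
    conduct: bool        # reasonable grounds to infer the conduct element
    state_of_mind: bool  # reasonable grounds to infer the mental element
    defence: bool        # reasonable grounds to infer a relevant defence

def reasonable_grounds_to_infer_illegal(g: Grounds) -> bool:
    """All three limbs must point the same way: conduct and state of
    mind inferable, and no relevant defence inferable."""
    return g.conduct and g.state_of_mind and not g.defence
```

Note how the defence limb is inverted: content is not judged illegal if there are reasonable grounds to infer a defence, even where conduct and state of mind are both made out.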
Ofcom has issued the Illegal Content Judgements Guidance (ICJG) to support providers in understanding their regulatory obligations when making judgments about whether content is illegal under the OSA. It provides guidance on how to identify and handle illegal content, while considering freedom of expression and privacy. The ICJG outlines the legal framework for various offences, the importance of context, jurisdictional issues, and the handling of reports and flags. It also offers specific guidance on various offence categories, including the conduct and mental elements, as well as relevant defences.

User-to-user services: For brevity, given the scope of the OSA, I will focus on the user-to-user services Codes and Guidance (as the most relevant category for social network platforms like X). The Illegal Content Codes of Practice for search services are available here.

The draft Illegal Content Codes of Practice for user-to-user services have been published, with measures recommended for providers to comply with the following duties:
  • The illegal content safety duties set out in section 10(2) to (9) of the Act;
  • The duty for content reporting set out in section 20 of the Act, relating to illegal content; and
  • The duties about complaints procedures set out in section 21 of the Act, relating to the complaints requirements in section 21(4).
Section 3 of the document provides an index of recommended measures, including the application, relevant codes, and relevant duties for each measure. The recommended measures cover a range of areas, including governance and accountability, content moderation, reporting and complaints, recommender systems, settings, functionalities and user support, terms of service, user access, and user controls.
The recommended measures are set out in Section 4 of the document and are divided by thematic area:
  • Governance and Accountability
    • Large services should conduct an annual review of risk management activities related to illegal harm in the UK.
    • All services should designate an individual accountable for compliance with illegal content safety and reporting/complaints duties.
    • Large or multi-risk services should:
      • have written statements of responsibilities for senior managers involved in risk management.
      • have an internal monitoring and assurance function to assess the effectiveness of harm mitigation measures.
      • track and report evidence of new or increasing illegal content.
      • have a code of conduct setting standards for protecting users from illegal harm.
      • provide compliance training to individuals involved in service design and operation.
  • Content Moderation
    • All services should have a content moderation function to review and assess suspected illegal content and take it down swiftly.
    • Large or multi-risk services should:
      • set and record internal content policies, performance targets, and prioritise content for review.
      • provide training and materials for content moderators (including volunteers) and use hash-matching to detect CSAM.
  • Reporting and Complaints
    • All services should have accessible and user-friendly systems for reporting and complaints, and take appropriate action on complaints.
    • Larger services and those at risk of illegal harm should provide information about complaint outcomes and allow users to opt out of communications.
    • Specific requirements apply to handling complaints that are appeals or relate to proactive technology.
  • Recommender Systems
    • Services conducting on-platform testing of recommender systems and at risk of multiple harms should collect and analyse safety metrics.
  • Settings, Functionalities and User Support
    • Services with age-determination capabilities and at risk of grooming should implement safety defaults for child users and provide support.
    • All services should have terms of service that address illegal content and complaints, and these terms should be clear and accessible.
  • User Access
    • All services should remove accounts of proscribed organisations.
  • User Controls
    • Large services at risk of specific harms should offer blocking, muting, and comment-disabling features.
  • Notable User and Monetised Labelling Schemes
    • Large services with labelling schemes for notable or monetised users should have policies to reduce the risk of harm associated with these schemes.
  • Implementing the recommended measures will involve the processing of personal data, and service providers are expected to comply fully with data protection law when taking measures for the purpose of complying with their online safety duties.
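The hash-matching measure recommended under Content Moderation works by comparing digests of uploaded media against a database of hashes of known illegal images supplied by a trusted body. A minimal sketch using exact (cryptographic) hashing; real deployments typically use perceptual hashes such as PhotoDNA so that resized or slightly altered copies also match, and the hash list itself is never published:

```python
import hashlib

# Hypothetical store of digests of known illegal images, populated
# from a curated industry hash list (placeholder here).
known_illegal_hashes: set[str] = set()

def matches_known_content(media_bytes: bytes) -> bool:
    """Exact hash match against the known-content database.
    A cryptographic hash only catches byte-identical copies, which is
    why production systems prefer perceptual hashing."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in known_illegal_hashes
```

On a match, the Codes expect the content to be taken down swiftly and the incident routed into the provider’s reporting processes; the sketch covers only the detection step.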

The purpose of ICRAs is to help service providers understand how different kinds of illegal harm could arise on their service and what safety measures need to be put in place to protect users. ICRAs must be ‘suitable and sufficient’ for a provider to meet its OSA obligations.

The Risk Assessment Guidance and Risk Profiles recommends that service providers consider two main types of evidence when conducting a risk assessment:
  1. Core inputs: This type of evidence should be considered by all service providers and includes risk factors identified through the relevant Risk Profile, user complaints and reports, user data (such as age, language, and groups at risk), retrospective analysis of incidents of harm, relevant sections of Ofcom’s Register of Risks, evidence drawn from existing controls, and other relevant information (including other characteristics of the service that may increase or decrease the risk of harm).
  2. Enhanced inputs: This type of evidence should be considered by large service providers and those who have identified multiple specific risk factors for a kind of illegal content. Examples of enhanced inputs include results of product testing, results of content moderation systems, consultation with internal experts on risks and technical mitigations, views of independent experts, internal and external commissioned research, outcomes of external audit or other risk assurance processes, consultation with users, and results of engagement with relevant representative groups.
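The core/enhanced split above is a simple conditional: every provider considers the core inputs, and large providers (or those with multiple specific risk factors for a kind of illegal content) consider the enhanced inputs as well. A sketch with abbreviated, illustrative input names:

```python
# Abbreviated labels for the evidence types listed above (illustrative).
CORE_INPUTS = [
    "risk profile factors", "user complaints and reports", "user data",
    "retrospective incident analysis", "Register of Risks extracts",
    "evidence from existing controls",
]
ENHANCED_INPUTS = [
    "product testing results", "content moderation system results",
    "internal and external expert views", "commissioned research",
    "external audit outcomes", "user and representative-group consultation",
]

def icra_inputs(is_large: bool, multiple_specific_risk_factors: bool) -> list[str]:
    """Core inputs always apply; enhanced inputs are added for large
    providers or those with multiple specific risk factors."""
    if is_large or multiple_specific_risk_factors:
        return CORE_INPUTS + ENHANCED_INPUTS
    return list(CORE_INPUTS)
```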
The different types of illegal content that must be assessed are:
  • The 17 kinds of priority illegal content: terrorism; Child Sexual Exploitation and Abuse (CSEA), including grooming and Child Sexual Abuse Material (CSAM); hate; harassment, stalking, threats and abuse; controlling or coercive behaviour; intimate image abuse; extreme pornography; sexual exploitation of adults; human trafficking; unlawful immigration; fraud and financial offences; proceeds of crime; drugs and psychoactive substances; firearms, knives and other weapons; encouraging or assisting suicide; foreign interference; and animal cruelty.
  • Other illegal content: This includes non-priority illegal content as described in the Register of Risks and potentially other offences depending on the specific service and evidence available.
Additional factors that service providers should consider when carrying out an illegal content risk assessment:
  • Service characteristics: The characteristics of the service, such as its user base (e.g., age, language, vulnerable groups), functionalities (e.g., live streaming, anonymous posting), and business model, can affect the level of risk.
  • Risk factors: The Risk Profiles published by Ofcom identify specific risk factors associated with each type of illegal content. Service providers should consider these risk factors and any additional factors specific to their service.
  • Likelihood and impact of harm: The assessment should consider the likelihood of each type of illegal content occurring on the service and the potential impact of that content on users and others.
  • Existing controls: The effectiveness of any existing measures to mitigate or control illegal content should be considered.
  • Evidence: Service providers should use a variety of evidence to inform their risk assessment, including user complaints, data analysis, and external research.
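One common way to combine the likelihood and impact factors above is a simple risk matrix. The OSA and Ofcom’s guidance do not prescribe any scoring formula; providers need only a ‘suitable and sufficient’ methodology, so the levels and cut-offs below are entirely hypothetical:

```python
# Hypothetical ordinal scale for likelihood and impact.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Illustrative likelihood x impact matrix: multiply the two
    ordinal scores and bucket the product into low/medium/high."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A provider would run this per kind of illegal content, then use the resulting level to decide which mitigations (existing controls plus any additional measures) are proportionate.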
Categorised service providers also have the following additional duties regarding their illegal content risk assessments:
  • Publication of Summary: They must publish a summary of their most recent illegal content risk assessment. Category 1 services must include this summary in their terms of service, while Category 2A services must include it in a publicly available statement. The summary should include the findings of the assessment, including the levels of risk and the nature and severity of potential harm to individuals.
  • Provision of Assessment to Ofcom: They must provide Ofcom with a copy of their risk assessment record as soon as reasonably practicable after completing or revising it.

The Online Safety Act (s.61) defines content that is harmful to children as:

  • ‘Primary priority content’ being:
    • Pornographic content
    • Content which encourages, promotes or provides instructions for suicide.
    • Content which encourages, promotes or provides instructions for an act of deliberate self-injury.
    • Content which encourages, promotes or provides instructions for an eating disorder or behaviours associated with an eating disorder.
  • Section 62 defines other priority content that can be harmful to children and must be managed appropriately. It includes:
    • Bullying and cyberbullying
    • Abusive or hateful content
    • Content depicting or encouraging serious violence
    • Content promoting dangerous stunts or challenges
    • Content encouraging the ingestion or exposure to harmful substances
    Platforms must ensure that access to this type of content is age-appropriate and that protections are in place for children.

The OSA prioritises protecting UK users from online harms.

(1) This Act provides for a new regulatory framework which has the general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom. (2) To achieve that purpose, this Act (among other things)— (a) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from— (i) illegal content and activity, and (ii) content and activity that is harmful to children, and (b) confers new functions and powers on the regulator, OFCOM.

The Act outlines specific age and identity verification requirements, particularly for platforms categorised as Category 1 services, which are likely to have a significant number of users and offer a wide range of functionalities. In addition, platforms that are clearly aimed at pornography consumption must carry out age assurance checks.

Age Assurance

  • “Highly Effective” Age Verification or Estimation Required: The Act mandates that services likely to be accessed by children use age verification or age estimation methods that are “highly effective” at correctly determining whether a user is a child. This applies across all areas of the service, including design, operation, and content.
  • Self-Declaration Not Sufficient: Simple self-declaration of age is not considered a valid form of age verification or age estimation.
  • Ofcom Guidance on Effectiveness: Ofcom, the designated regulator, is responsible for providing guidance on what constitutes “highly effective” age assurance. This guidance will include examples of effective and ineffective methods, and principles to be considered.
  • Factors for Effective Age Assurance: Ofcom’s guidance suggests that effective age assurance methods should be technically accurate, robust, reliable, and fair. They should be easy to use and work effectively for all users, regardless of their characteristics.
  • Recommended Methods: Ofcom has recommended methods like credit card checks, open banking, and photo ID matching as potentially highly effective.
Transparency and Reporting Requirements: Platforms using age assurance must clearly explain their methods in their terms of service and provide detailed information in a publicly available statement. They must also keep written records of their age assurance practices and how they considered user privacy.

Ofcom Reports on Age Assurance Use:

Ofcom will assess how providers use age assurance and its effectiveness, reporting on any factors hindering its implementation.

Identity Verification

  • Category 1 Services Must Offer Identity Verification: The Act requires Category 1 services (like major social media platforms) to offer all adult users in the UK the option to verify their identity, unless identity verification is already necessary to access the service.
  • No Specific Method Mandated: The Act does not specify a particular method of identity verification. Platforms can choose a method that works for their service, but it must be clearly explained in their terms of service.
  • Documentation Not Required: The identity verification process does not necessarily need to involve providing documentation.

User Empowerment Features:

Identity verification is linked to user empowerment features, as platforms must offer adult users the ability to:
  • Control their exposure to harmful content.
  • Choose whether to interact with content from verified or non-verified users.
  • Filter out non-verified users.

Ofcom Guidance for Category 1 Services:

Ofcom is expected to provide guidance for Category 1 services on implementing identity verification, with a focus on ensuring availability for vulnerable adult users.

General Considerations
  • The Act aims to strike a balance between online safety and freedom of expression, and this balance influences the implementation of age and identity verification requirements.
  • Specific details regarding the application of these requirements are still under development, and Ofcom is working on codes of practice and guidance to provide further clarification.
The age and identity verification requirements under the UK Online Safety Act 2023 aim to enhance online safety, particularly for children and vulnerable adults. The Act focuses on the effectiveness of these measures, transparency from platforms, and user empowerment to control their online experiences.

The UK Online Safety Act's requirements regarding pornography vary between specialised pornography platforms and other internet services, such as search engines.

Specialised pornography platforms:

For specialised pornography platforms, which are classified as “services that feature provider pornographic content”, the Act imposes a duty to ensure children are not normally able to encounter regulated provider pornographic content. This means these platforms will have to implement robust age verification or age estimation systems. The Act emphasises that these measures must be “highly effective” at determining whether a user is a child.

The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs. However, content in image, video, or audio form that is considered pornographic would fall under this definition and trigger the age assurance obligations.

The Act also mandates that these platforms, along with other user-to-user services and search services, conduct risk assessments. These assessments should identify and mitigate potential harms related to illegal content, including child sexual abuse material (CSAM) and extreme pornography. Research indicates that user-to-user pornography services are particularly vulnerable to these types of illegal content. For example, a study found that a user-to-user pornography website hosted nearly 60,000 videos under phrases associated with intimate image abuse. Additionally, evidence suggests that some services that host pornographic content prioritise user growth over content moderation, leading to less effective detection and removal of extreme content.

For other regulated internet services:

For other internet services, like search engines, the Act's impact is more indirect. While search engines are not directly obligated to implement age verification, they are still subject to the requirement to mitigate and manage the risks of harm from illegal content and content harmful to children. This includes content that may be accessed through search results, even if the search engine itself does not host the content. For example, evidence suggests that search engines can be used to access websites offering illegal items like drugs and firearms. The Act acknowledges that search engines are often the starting point for many users' online journeys and that they play a crucial role in making content accessible.

Search engines are also subject to risk assessments. Given the potential for users to find illegal content through search, they are expected to consider how their functionalities, like image/video search and reverse image search, might increase risks.

Furthermore, even if pornography is not their core function or purpose, platforms like X (formerly Twitter) and Reddit, which allow users to share user-generated content, including pornographic material, would be classified as user-to-user services and be subject to the relevant duties under the Act. This means they would also need to conduct risk assessments, consider the risks associated with user-generated pornographic content, and implement measures to mitigate those risks.

In conclusion, the Online Safety Act has significant implications for both specialised pornography platforms and other internet services that may have links to pornography. The Act aims to protect children from accessing pornographic content through robust age verification measures and seeks to reduce the prevalence of illegal content on these platforms through risk assessments and content moderation practices.
The Act’s wide scope means that even platforms where pornography is not the main focus are still obligated to address the risks associated with such content.

The OSA imposes specific obligations on Category 1 services due to their reach and influence. These rules aim to safeguard the diversity of opinions and the integrity of democratic debate whilst minimising harmful speech. See also ‘Democratic Threats’ below.

Key Requirements
  • Content of Democratic Importance: Providers must ensure moderation processes do not disproportionately suppress political opinions or stifle democratic discussion. This includes protecting content from verified news publishers, journalistic pieces, and user-generated contributions to political debates.
  • Equal Treatment of Opinions: Decisions about content moderation must respect free expression and avoid bias against particular political viewpoints. This includes avoiding overzealous removals under policies aimed at combating misinformation or hate speech.
  • Protection of Journalistic Content: Articles and posts deemed to have journalistic value must not be unjustly removed or suppressed, ensuring the platform remains a space for investigative reporting and public interest stories. Platforms must protect:
    • Verified news publishers’ content.
    • Journalistic content, even if shared by individual users.
    • User-generated contributions to political debates.
While regulated internet services are required to remove illegal or harmful content, the OSA emphasises the need to uphold free speech. Providers must develop policies and systems that balance protecting users from harm and allowing diverse viewpoints to flourish. The requirement for transparency reports that include moderation policies and actions will be crucial here:
  • Detailed Reporting: all categorised regulated internet services must publish annual transparency reports explaining their algorithms’ role in content moderation and misinformation detection. These reports should detail the volume of flagged and removed content, alongside the impact of moderation algorithms on users.
  • Proactive Technology Disclosure: providers must disclose any automated systems, such as machine learning tools, used to detect harmful or illegal content.
  • Terms of Service Clarity: providers must clearly explain their policies on algorithmic decision-making, especially regarding content of democratic importance and misinformation.
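To make the reporting duties above concrete, a provider might aggregate raw moderation actions into the headline counts that an annual transparency report publishes. The record structure and field names below are illustrative assumptions, not a format prescribed by the Act or Ofcom.

```python
from collections import Counter


def build_transparency_summary(actions):
    """Aggregate moderation actions into the kind of counts an annual
    transparency report might publish. Each action is a dict with
    'category', 'removed' and 'automated' keys (an assumed schema)."""
    summary = {
        "flagged": 0,                       # total items flagged for review
        "removed": 0,                       # items taken down
        "removed_by_automated_system": 0,   # takedowns by proactive technology
        "by_category": Counter(),           # e.g. hate speech vs misinformation
    }
    for action in actions:
        summary["flagged"] += 1
        summary["by_category"][action["category"]] += 1
        if action["removed"]:
            summary["removed"] += 1
            if action["automated"]:
                summary["removed_by_automated_system"] += 1
    return summary
```

Splitting out automated takedowns matters because the Act separately requires disclosure of proactive technology, not just overall removal volumes.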

User Empowerment Tools

The Act promotes user choice and control by requiring platforms to provide tools that help users manage their online experience. For example:
  • Users can gain more insight into how recommendation systems work.
  • Platforms could be required to offer non-personalised feeds that reduce reliance on algorithm-driven content.
  • Category 1 services must provide adult users with control features that effectively:
    • Reduce the likelihood of encountering specific types of legal but potentially harmful content, such as content promoting suicide, self-harm, or eating disorders.
    • Offer features to filter out interactions with non-verified users.
    • Clearly explain the available control features and their usage in the terms of service.
See ‘Clean up the internet’ recommendations to Ofcom
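A minimal sketch of the control features described above, assuming the platform labels content and tracks author verification (the data model here is hypothetical, not taken from any platform's API):

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    author_verified: bool
    labels: frozenset  # provider-applied content labels, e.g. {"self_harm"}


@dataclass
class UserControls:
    hide_non_verified: bool = False
    suppressed_labels: frozenset = frozenset()  # e.g. {"self_harm", "eating_disorder"}


def apply_controls(feed, controls):
    """Drop posts the user has opted not to see; default settings leave
    the feed unchanged, reflecting that these are opt-in adult controls."""
    visible = []
    for post in feed:
        if controls.hide_non_verified and not post.author_verified:
            continue  # user chose to filter out non-verified accounts
        if post.labels & controls.suppressed_labels:
            continue  # user chose to reduce exposure to this content type
        visible.append(post)
    return visible
```

Note the defaults: the Act requires that these features be offered and explained, but an adult user decides whether to switch them on.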

Algorithm Transparency 

Algorithms are central to how regulated internet services moderate content, serve recommendations, and filter harmful material. The Act introduces transparency and accountability measures to ensure these algorithms and systems are safe and fair:
  • Providers must be transparent about how their algorithms function and the potential impact on users’ exposure to illegal content.
    • They must include provisions in their terms of service or publicly available statements that specify how individuals are protected from illegal content, including details about the design and operation of algorithms used.
    • Additionally, they must provide information about any proactive technology used for compliance, including how it works, and ensure this information is clear and accessible.
  • Category 1 providers have an additional duty to summarise the findings of their most recent illegal content risk assessments (ICRAs) in their terms of service, including the level of risk associated with illegal content. Factors like the speed and reach of content distribution facilitated by algorithms must be considered. These assessments must be updated regularly to reflect changes in Ofcom’s Codes of Practice (COPs), risk profiles, and the provider’s business practices.
  • Safer Algorithms for Children: If regulated internet services are accessed by children, their algorithms must minimise exposure to harmful content. This includes age-appropriate design measures and risk assessments targeting features that could harm younger users.

AI Chatbots: It is very likely that services such as ChatGPT, Gemini, Perplexity and others will be categorised as user-to-user services, as they allow users to interact with a generative AI chatbot and share chatbot-generated text, images and other user-generated content with other users.

Art: Services such as Midjourney are also in scope.

Generative AI Tools and Pornographic Content: Services featuring AI tools capable of generating pornographic material are additionally regulated and must implement highly effective age assurance measures to prevent children from accessing such content.

Generative AI and Search Services: AI tools enabling searches across multiple websites or databases are considered search services under the OSA. This includes tools that modify or augment search results on existing search engines or offer live internet results on standalone platforms. Consequently, these AI-powered search services will need to comply with the relevant duties outlined in the Act.

See also: Ofcom guidance regarding generative AI and AI chatbots.

Combating hate speech is a cornerstone of the OSA. Regulated providers must take decisive measures to reduce the prevalence of illegal hate speech and implement systems for detection, reporting, and removal of hate speech.

Key Duties for Platforms
  • Illegal Content Detection: Hate speech is classified as priority illegal content, requiring regulated internet services to identify and remove such material promptly.
  • Risk Assessments: Regulated providers must evaluate the risks of hate speech on their platform and develop proportionate systems to manage and mitigate these risks.
  • Clear Reporting Mechanisms: The platform must provide users with accessible tools to flag hate speech. Reports must be acted upon swiftly, with outcomes communicated transparently.
Transparency in Moderation

To meet the Act’s transparency standards, regulated service providers must:
  • Publish data on the volume and nature of hate speech flagged, reviewed, and removed.
  • Explain their systems for detecting and moderating hate speech in their transparency reports.
By addressing hate speech robustly, regulated service providers can align legal requirements with fostering a safer environment for users.

The process of bringing the Online Safety Act into law has been winding and subject to lengthy delays. Many of the provisions of the OSA came into force on 10 January 2024 (including the new duty of care for all regulated online services), along with many of the powers needed by Ofcom as the regulator responsible for enforcing the OSA. However, it has been subject to an implementation process which required Ofcom consultation and the issuance of Codes and guidance.

Major Deadline: All providers have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity. Additional key protections in respect of Category 1 providers (like X) are unlikely to be in force until 2026 or 2027. Further delay on major platforms now looks very dangerous (see ‘Democratic Threats’ below).

The Secretary of State will determine regulations specifying Category 1, 2A, and 2B threshold conditions for different types of services (Schedule 11). Commencement dates for the remaining provisions of the Act will be set by future regulations under Section 240.

Phased roll-out: Regulated service providers must take steps to comply with the new duties following Ofcom guidance, which is to be published in phases:

Phase 1: Illegal Harms (December 2024–March 2025)
  • December 2024: Ofcom will release the Illegal Harms Statement, including:
    • Illegal Harms Codes of Practice.
    • Guidance on illegal content risk assessments.
  • March 2025: Service providers must complete risk assessments and comply with the Codes or equivalent measures. Enforcement begins once Codes pass through Parliament.
Phase 2: Child Safety, Pornography, and Protection of Women and Girls (January–July 2025)
  • January 2025:
    • Final guidance on age assurance for publishers of pornography and children’s access assessments.
    • Services likely to be accessed by children must begin children’s risk assessments.
  • February 2025: Draft guidance on protecting women and girls will address specific harms affecting them.
  • April 2025: Protection of Children Codes and risk assessment guidance published.
  • July 2025: Child protection duties become enforceable.
Phase 3: Categorisation and Additional Duties (2024–2026)
  • End of 2024: Government to confirm thresholds for service categorisation (Category 1, 2A, or 2B).
  • Summer 2025: Categorised services register published; draft transparency notices follow shortly.
  • Early 2026: Proposals for additional duties on categorised services are expected to be released.
  • 2027: Implementation of the proposals for categorised provider obligations.
Ofcom Roadmap: see the Ofcom Roadmap to Regulation and Ofcom Important Dates.

Ofcom states, in the 16 December 2024 Overview, that in early 2025 they will seek to enforce compliance with the rules by a combination of means, including:

  1. Supervisory engagement with the largest and riskiest providers to ensure they understand Ofcom’s expectations and come into compliance quickly, pushing for improvements where needed;
  2. Gathering and analysing the risk assessments of the largest and riskiest providers so they can consider whether they are identifying and mitigating illegal harms risks effectively;
  3. Monitoring compliance and taking enforcement action across the sector if providers fail to complete their illegal harms risk assessment by 16 March 2025;
  4. Focused engagement with certain high-risk providers to ensure they are complying with the CSAM hash-matching measure, followed by enforcement action where needed; and
  5. Further targeted enforcement action for breaches of the safety duties where they identify serious ongoing issues that represent significant risks to users, to push for improved user outcomes and deter poor compliance.
“We will also use our transparency powers to shine a light on safety matters, share good practice, and highlight where improvements can be made.”
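The CSAM hash-matching measure Ofcom refers to can be illustrated in miniature: hash each upload and compare it against a list of known-bad hashes. Real deployments use perceptual hashes (such as PhotoDNA) that survive re-encoding and cropping; the exact-match SHA-256 version below only sketches the compare-against-a-known-list mechanism and is not a production design.

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Hex digest of the upload's raw bytes."""
    return hashlib.sha256(data).hexdigest()


def matches_known_hashes(upload: bytes, known_hashes: set) -> bool:
    """True if the upload's hash appears in the shared list of known-bad
    hashes. In practice the list is maintained by bodies such as the IWF
    or NCMEC, and matching is perceptual rather than exact, so that
    trivially modified copies of known material are still caught."""
    return sha256_hex(upload) in known_hashes
```

The key property is that the service never needs to hold the illegal material itself: it only compares digests against a list distributed by a trusted body.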
http://www.ofcom.org.uk/siteassets/resources/documents/online-safety/information-for-industry/roadmap/ofcoms-approach-to-implementing-the-online-safety-act/?v=330308

Compliance Monitoring

Ofcom, the UK’s communications regulator, will closely monitor regulated internet services’ adherence to the Act. Breaches could result in substantial penalties, including fines of the greater of £18 million or up to 10% of global annual turnover (Sch. 13).

Balancing Act

Regulated providers face significant operational challenges:
  • Maintaining Free Speech: Striking the right balance between protecting free expression and removing harmful content is critical. Over-moderation risks alienating users, while under-moderation could attract regulatory action.
  • Transparency Burden: Producing detailed reports and disclosing algorithmic processes requires resources and technical clarity.
  • Algorithm Design: Algorithms must meet the dual demands of protecting children and fostering open debate. Regulated internet services may need to invest in redesigning their systems to comply with these requirements.

Despite concerns about notable interference in UK politics and the stirring up of anti-Islamic and anti-immigrant sentiment in the UK, Elon Musk is maintaining his aggression against the UK government (and against the EU, whose Digital Services Act is in many respects similar to the OSA).

In the summer of 2024, Musk personally and via his X platform helped to spread anti-immigrant, anti-Government and anti-Islamic misinformation by right-wing extremists about the tragic stabbings of a number of adults and children in Southport. This culminated in a number of riots across the UK fed by far-right extremists. The young man responsible for the tragic events in Southport was neither a Muslim nor an immigrant. Read: How Elon Musk Helped Fuel the U.K.’s Far-Right Riots

In respect of the EU DSA, the Commission has already found X to be in breach over its misuse of verification checkmarks, blocking of access for researchers, and lack of transparency in advertising. X remains under investigation for not curbing (i) the spread of illegal content — hate speech or incitement of terrorism — and (ii) information manipulation.

Despite the continuing and accelerating attacks by Musk against the EU and the UK as they try to rein in hate crimes, unlawful content, and misinformation on social media platforms, Peter Kyle (the UK’s technology secretary) recently suggested that governments need to show a “sense of humility” with big tech companies and treat them more like nation states. Marietje Schaake, a former Dutch member of the European Parliament and now the international policy director at the Stanford University Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centred Artificial Intelligence (HAI), commented on this statement as follows:
“I think it’s a baffling misunderstanding of the role of a democratically elected and accountable leader. Yes, these companies have become incredibly powerful, and as such I understand the comparison to the role of states, because increasingly these companies take decisions that used to be the exclusive domain of the state. But the answer, particularly from a government that is progressively leaning, should be to strengthen the primacy of democratic governance and oversight, and not to show humility. What is needed is self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system, and are not overtaking it.”
Hopefully the UK Government will be more aggressive in seeking to bring powerful unelected billionaires (like Elon Musk) and organisations to account.

It is essential that Ofcom help platforms get the balance right as in many cases the right to be offended by someone else’s views is a cornerstone of a democratic society.

“If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all.” (Noam Chomsky)
Difference between freedom of speech and abuse of freedom of speech

Clearly there is a big difference between the right of a man or woman on the street to take to a social media platform (or the streets) to express their concerns about policies and politics, and the misuse of platforms (or platform data) or political processes by powerful vested interests to skew public opinion and spread misinformation or even racial or religious prejudice. With great wealth and power should come great transparency and responsibility (though in our current political landscape it appears the opposite is true). See ‘Democratic Threats’ above for more analysis on this.

Protecting us from Government Monopolies on Permitted Opinions

In addition to the risk of misinformation, bias and skewed freedom of speech and opinion by operators of social media and AI platforms and apps, we must also bear in mind the significant risk of Governments seeking to have a monopoly on which opinions are permitted. This risk is always extremely high, as witnessed, for example, by the anti-scientific approach to any debate in the UK (and US) during COVID. When science meets politics, science invariably suffers. Civil liberties should not be easily swept away simply by asserting public health grounds or national security grounds.

Transparency and protection of democracy and free speech must also extend to the impact of indirect political and governmental influence over regulated service providers (i.e. outside of the normal permitted legal channels) and what views about ‘reality’ are permitted. Transparency reports by in-scope providers must include the impact of direct and indirect political pressure and influence.

FAQs

FAQ: Frequently Asked Questions about the UK Online Safety Act

Impact on Major Services

What is the impact for social media platforms like X, Facebook, Instagram and Bluesky?

The UK Online Safety Act 2023 will have significant implications for services like X (formerly Twitter), Instagram, Facebook and Bluesky, particularly concerning freedom of expression, the use of algorithms, transparency, and democratic protections. For example, as a large platform with a substantial UK user base, X will be classified as a Category 1 service, subjecting it to the most stringent requirements of the Act.

Freedom of Expression:

The Act strives to balance online safety with the protection of free speech. While requiring platforms to address harmful content, it emphasises upholding freedom of expression and ensuring that legitimate content and diverse viewpoints are not unduly restricted. However, critics have expressed concerns about the potential for overzealous content removal and a chilling effect on free speech, especially given the Act’s broad definition of “content that is harmful to children”.  There are concerns that the robust safety duties might outweigh the “balancing measures” intended to protect freedom of expression. The Act’s impact on freedom of expression for services like X will depend on how Ofcom interprets and enforces its provisions. Striking a balance between user safety and free speech remains a complex challenge.

Use of Algorithms:

While the Act doesn’t explicitly mandate transparency about how algorithms are used to manage the risk of misinformation, the emphasis on transparency suggests that algorithms used for content moderation will likely face scrutiny. Ofcom has also highlighted the potential for algorithms to repeatedly expose users, particularly children, to harmful content, emphasising the need for providers to mitigate these risks. The Act mandates that platforms consider the risks their algorithms pose in relation to illegal content and content that is harmful to children, potentially requiring them to adjust algorithms or platform design to minimise potential harms. X will need to provide information about its algorithms in transparency reports, risk assessments, and terms of service, disclosing how they identify and mitigate harmful content like hate speech and misinformation. X will also need to ensure its algorithms comply with the Act’s requirements for protecting children.

Transparency:

Transparency is a key theme in the Act, especially for Category 1 services like X. The Act requires X to be transparent about its content moderation practices, especially those related to content of democratic importance. This includes:
  • Publishing annual transparency reports detailing its content moderation practices, the volume of harmful content removed, the use of algorithms, and their impact on users.
  • Providing clear information in its terms of service explaining its policies on content moderation, user safety, and reporting mechanisms.
  • Disclosing the use of “proactive technology,” such as automated tools or algorithms, used to detect and remove harmful content.
These transparency requirements aim to hold platforms accountable and empower users by providing clarity about how their data is used and content is moderated.

Democratic Protections:

The Act includes provisions to protect content of democratic importance, such as news publisher content, journalistic content, and user-generated content that contributes to political debate. Category 1 services like X must implement systems to ensure that decisions regarding content moderation consider the importance of free expression and provide equal treatment to diverse political opinions. However, the Act does not specifically address whether algorithmic requirements apply to content of democratic importance. It remains to be seen how Ofcom will address this in future guidance.

Conclusion:

The UK Online Safety Act 2023 will have a significant impact on platforms like X. The Act’s focus on user safety, transparency, and accountability will require X to make substantial changes to its content moderation practices, algorithmic transparency, and approach to democratic content. X’s compliance with the Act will be closely monitored by Ofcom, with potential penalties for breaches. It is crucial for platforms like X to proactively engage with the Act’s requirements and Ofcom’s guidance to ensure compliance and navigate the challenges of balancing online safety with freedom of expression. The Act’s effectiveness will ultimately depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.
The Act will have a significant impact on services like ChatGPT, Gemini, Perplexity, Claude and others, especially given recent concerns about generative AI and the potential for misuse. These will usually fall within the definition of user-to-user services, potentially impacting their functionalities, transparency requirements, and approach to user safety. Ofcom published an open letter on 8 November 2024, specifically addressing generative AI and chatbots in the context of the Act. This letter emphasised the Act’s application to:
  • Services that allow users to interact with and share content generated by AI chatbots. For example, if ChatGPT allows users to share AI-generated text, images, or videos with other users, it would be considered a regulated user-to-user service.
  • Services where users can create and share AI chatbots, known as ‘user chatbots’. This means that any AI-generated content created and shared by these ‘user chatbots’ would also be regulated by the Act.
Ofcom has expressed concerns about the potential for generative AI chatbots to be used to create harmful content, such as chatbots that mimic real people, including deceased children. These concerns highlight the Act’s focus on protecting users from harmful content generated by AI, even if it is technically ‘user-generated’ through the chatbot interface. The impacts on services like ChatGPT are set out below.

Content Moderation:

The Act will require services like ChatGPT to implement robust content moderation mechanisms to prevent the creation and dissemination of illegal content through its platform. This could include:
  • Monitoring user prompts and chatbot responses to identify and prevent the generation of harmful content, such as hate speech, child sexual abuse material (CSAM), or content promoting terrorism.
  • Developing safeguards to prevent the creation of ‘user chatbots’ that mimic real people or deceased individuals, particularly children.
  • Implementing reporting mechanisms and processes for users to flag potentially harmful chatbot interactions or content.
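The prompt-and-response monitoring described above can be sketched as a gate around the generation step. Here `generate` and `classify` stand in for a provider's own model and harmful-content classifier; both are assumptions for illustration, not a reference to any real API.

```python
def moderation_gate(prompt, generate, classify):
    """Wrap a generative model with pre- and post-generation checks.

    `generate(prompt) -> str` is the model; `classify(text) -> bool`
    returns True when text is judged harmful. Both are injected so the
    sketch stays independent of any particular vendor's tooling."""
    if classify(prompt):
        # Block harmful requests before any generation happens.
        return {"blocked": True, "stage": "prompt", "text": None}
    response = generate(prompt)
    if classify(response):
        # Screen the model's own output too: chatbot-generated content
        # is still regulated even though no human user authored it.
        return {"blocked": True, "stage": "response", "text": None}
    return {"blocked": False, "stage": None, "text": response}
```

Checking both stages matters under the Act's framing: a benign prompt can still elicit harmful output, so screening prompts alone would not discharge the duty.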

Transparency:

The Act’s emphasis on transparency will require services like ChatGPT to provide more information about its content moderation practices and the use of AI in its service. This could include:
  • Publishing transparency reports detailing the volume and nature of harmful content identified and removed, including AI-generated content.
  • Disclosing the use of algorithms and proactive technology to detect and mitigate harmful content.
  • Providing clear information in its terms of service about its approach to user safety and AI-generated content.

Risk Assessments:

Services like ChatGPT will need to conduct thorough risk assessments, evaluating the specific risks associated with generative AI and chatbots, considering factors like:
  • The likelihood of its functionalities facilitating the presence or dissemination of harmful content, identifying functionalities more likely to do so.
  • How its design and operation, including its business model and use of proactive technology, may reduce or increase the likelihood of users encountering harmful content.
  • The risk of its proactive technology breaching statutory provisions or rules concerning privacy, particularly those relating to personal data processing.

Challenges and Considerations:

Defining Harmful Content: Applying the Act’s broad definitions of harmful content to the context of Generative AI will be complex. Determining what constitutes “harmful” chatbot interactions, considering factors like context, intent, and potential for harm, will require careful consideration.

Balancing Safety and Innovation:

Finding a balance between protecting users from harm and fostering innovation in Generative AI will be crucial. Overly restrictive measures could stifle the development and beneficial uses of AI chatbots.

Technical Feasibility:

Implementing effective content moderation and safety measures for a service like ChatGPT, which relies on complex AI models, poses technical challenges. Developing robust and adaptable solutions to mitigate risks associated with Generative AI will require ongoing research and innovation.

Conclusion:

The Online Safety Act 2023 represents a significant shift in the regulation of online services, including AI-powered platforms like ChatGPT. The Act’s focus on user safety and transparency will require ChatGPT to adapt its approach, implement robust content moderation, and provide greater transparency about its operations. While the Act presents challenges, it also offers an opportunity for ChatGPT to demonstrate its commitment to responsible AI development and user safety. The evolving nature of Generative AI and the Act’s implementation will require ongoing dialogue between Ofcom, service providers like ChatGPT, and stakeholders to ensure a balanced and effective approach to online safety.
The Act will significantly impact services like Google Search, particularly due to their classification as Category 2A services – high-reach search services.  The Act’s focus on user safety, transparency, and accountability will require Google Search to make considerable changes to its content moderation practices, algorithmic transparency, and approach to content of democratic importance. Here’s a breakdown of the likely impacts:

Freedom of Expression:

The Act seeks to balance online safety with the protection of free speech. It requires platforms to tackle harmful content while upholding freedom of expression and ensuring legitimate content is not unduly restricted. However, concerns remain regarding the Act’s potential for overzealous content removal and its impact on free speech, mirroring similar concerns raised for services like X.

Use of Algorithms:

A key area of impact concerns the use of algorithms, especially those influencing the display, promotion, restriction, or recommendation of content.  Google Search will be required to consider the risks its algorithms pose in relation to illegal content and content harmful to children, potentially necessitating adjustments to its algorithms or platform design to minimize harm. The Act’s emphasis on transparency suggests Google’s algorithms will likely face scrutiny regarding content moderation, particularly how they identify and mitigate harmful content, including hate speech, misinformation, CSAM, and content encouraging suicide or self-harm. Google Search will likely need to provide information about its algorithms in transparency reports, risk assessments, and terms of service, disclosing how they identify and mitigate harmful content. They will also need to ensure their algorithms comply with the Act’s requirements for protecting children.

Transparency:

The Act mandates transparency for all regulated services, particularly for Category 2A services like Google Search. This includes:
  • Publishing annual transparency reports detailing content moderation practices, including the volume of harmful content removed, the use of algorithms, and their impact on users.
  • Providing clear information in its terms of service explaining its content moderation policies, user safety, and reporting mechanisms.
  • Disclosing the use of “proactive technology,” such as automated tools or algorithms, used to detect and remove harmful content.
These transparency requirements are intended to hold platforms accountable and empower users by providing clarity about how their data is used and content is moderated.
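The Act does not prescribe a format for these reports. Purely as an illustration of the kind of aggregation a transparency report involves, the sketch below reduces a hypothetical moderation log to headline figures; every field name and category label here is invented for the example, not drawn from the Act or any platform's real schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical moderation-log record; field names and category labels
# are illustrative, not taken from the Act or a real platform.
@dataclass
class ModerationAction:
    content_category: str   # e.g. "hate_speech", "csam", "self_harm"
    detection_method: str   # "automated" or "human_review"
    action_taken: str       # e.g. "removed", "restricted", "no_action"

def summarise_for_transparency_report(log: list[ModerationAction]) -> dict:
    """Aggregate raw moderation actions into the headline figures a
    transparency report typically discloses: removal volumes by
    category and the share detected by automated ("proactive") tools."""
    removed = [a for a in log if a.action_taken == "removed"]
    by_category = Counter(a.content_category for a in removed)
    automated = sum(1 for a in removed if a.detection_method == "automated")
    return {
        "total_removed": len(removed),
        "removed_by_category": dict(by_category),
        "share_detected_proactively": automated / len(removed) if removed else 0.0,
    }
```

A real report would cover far more (appeals, turnaround times, error rates), but the underlying exercise is the same: logging moderation decisions consistently enough that they can be aggregated and published.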

Democratic Protections:

The Act includes provisions to safeguard content of democratic importance, such as news publisher content, journalistic content, and user-generated content that contributes to political debate. While the Act emphasizes the need to protect democratic content, it doesn’t explicitly address whether algorithmic requirements apply to such content. It remains unclear how Ofcom will address this in its guidance and how Google Search will ensure that decisions regarding content moderation on politically relevant content consider the importance of free expression and provide equal treatment to diverse political opinions.

Additional Considerations for Google Search:

  • Specific Risk Factors: Google Search’s risk assessment must consider specific risk factors identified in the Act, including its service type as a general search service, functionalities such as predictive search suggestions, and the presence of child users.
  • Prevalence of Illegal Content: The Act requires Google Search to assess the prevalence of illegal content and content that is harmful to children on its platform. This involves analyzing the extent of such content’s dissemination and the severity of the potential harm it poses.
  • Mitigation Measures: Google Search will need to implement proportionate measures to mitigate and manage the risks identified in its risk assessment. This could include measures like age assurance, content moderation, and user reporting mechanisms.

Enforcement and Penalties:

Ofcom will closely monitor Google Search’s compliance with the Act. Penalties for breaches could include fines, service restriction orders, and even criminal sanctions for senior managers.

Conclusion:

The UK Online Safety Act 2023 poses significant challenges and obligations for Google Search. The Act’s focus on user safety, transparency, and accountability will require substantial changes to content moderation practices, algorithmic transparency, and the approach to content of democratic importance. Google’s compliance with the Act will be closely scrutinized, emphasizing the need for a proactive and comprehensive approach to meeting its requirements.

The Act’s effectiveness will depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.

The specific application of the Act to image generation platforms like Midjourney will depend on factors like how they are structured, their user base, and the content they host.

Potential impacts:

  • Illegal Content Generation: A primary concern would be the potential for Midjourney to be used to generate illegal content, such as child sexual abuse material (CSAM). The Act requires platforms to take steps to mitigate and effectively manage risks associated with illegal content, which could involve:
    • Implementing safeguards to prevent the generation of illegal images, potentially through content filtering or prompt moderation. This might involve restricting certain prompts or keywords known to be associated with illegal content.
    • Collaborating with law enforcement agencies and organizations like the Internet Watch Foundation (IWF) to identify and remove CSAM. This could include using hash-matching technology to detect known CSAM images.
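The two safeguards above (prompt moderation and hash-matching) can be sketched in miniature. This is a toy illustration only: the blocklist term and hash set are placeholders, and real deployments would use vetted lists (such as those maintained by the IWF) and perceptual hashing rather than the cryptographic hash used here, which a single changed pixel defeats.

```python
import hashlib

# Placeholder lists for illustration; real safeguards use licensed,
# vetted keyword and hash databases, not hard-coded values like these.
BLOCKED_TERMS = {"example_blocked_term"}
KNOWN_ILLEGAL_HASHES = {hashlib.sha256(b"placeholder known item").hexdigest()}

def prompt_allowed(prompt: str) -> bool:
    """Stage 1: refuse prompts containing blocklisted terms
    before any image is generated."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

def image_allowed(image_bytes: bytes) -> bool:
    """Stage 2: hash the generated image and compare it against
    hashes of known illegal material before it is shown or stored."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_ILLEGAL_HASHES
```

Running both stages (refusing the prompt, then checking the output) mirrors the layered approach the Act encourages: no single filter is expected to catch everything.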

Harmful Content and Algorithms:

While Midjourney’s primary function is image generation, its algorithms could still be subject to scrutiny under the Act, especially if they influence content recommendations or user exposure to potentially harmful imagery. The Act mandates that platforms consider how algorithms impact user exposure to illegal content and content harmful to children. Midjourney might need to assess how its algorithms could contribute to the spread of harmful content and implement safeguards to minimize risks. For example:
  • Analyzing user prompts and generated images to identify patterns or trends that could indicate harmful content generation.
  • Adjusting algorithms to limit the visibility or recommendation of images that are likely to be harmful.
Note: Midjourney already takes steps to prevent the automatic generation of potentially explicit or defamatory images.

Transparency and Accountability:

The Act’s emphasis on transparency could require Midjourney to:
  • Disclosure: Publish information about its algorithms and content moderation practices, particularly how they address illegal and harmful content. This could involve transparency reports or updates to its terms of service.
  • User Controls: Provide users with more control over the content they encounter, potentially through filtering options or reporting mechanisms.
  • Risk Assessments: Midjourney would likely need to conduct thorough risk assessments, specifically evaluating the risks associated with image generation and its potential to facilitate illegal or harmful content. This would involve:
    • Identifying risk factors specific to image generation, such as the ease of creating realistic imagery or the potential for deepfakes.
    • Considering how its design and operation, including its user interface, algorithms, and content moderation processes, could contribute to or mitigate risks.

Additional Considerations:

  • Categorization: The Act categorizes platforms based on their size, reach, and functionality. Depending on Midjourney’s user base and functionalities, it could fall under different categories, potentially influencing the specific requirements it needs to meet.
  • Emerging Technology: Image generation is a rapidly evolving field. The Act’s focus on being technology-neutral suggests it’s intended to adapt to new technologies, but the specific application to image generation platforms like Midjourney may require further clarification from Ofcom.
  • International Applicability: If Midjourney has a significant UK user base or targets UK users, the Act could apply even if the platform is based outside the UK.

In conclusion, Midjourney and similar platforms will need to monitor Ofcom’s guidance and adapt their practices to ensure compliance and mitigate potential risks. The evolving nature of image generation technology and the Act’s implementation will require ongoing dialogue and collaboration between Ofcom, service providers, and stakeholders to ensure a balanced and effective approach to online safety.

The Act will also affect video conferencing tools, although its precise application depends on how each service is categorized and the functionality it offers.

Key Considerations:
  • Categorization: The Act’s applicability and specific requirements hinge on how video conferencing tools are categorized. This depends on factors like user base, functionality, and whether they primarily facilitate private or public communication. If categorized as user-to-user services due to features like group chats or content sharing capabilities, they might be subject to more stringent requirements, similar to social media platforms.
  • Private vs. Public Communication: A core principle of the Act is the distinction between private and public communication. The Act generally avoids imposing obligations related to private communications, recognizing the importance of privacy. Video conferencing tools primarily used for private one-to-one or small group conversations might fall under this exemption. However, features enabling broader content sharing, recording, or public broadcasting could trigger additional scrutiny.
  • Illegal Content and CSAM: A significant concern is the potential misuse of video conferencing tools for illegal activities, including the creation and distribution of Child Sexual Abuse Material (CSAM). The Act requires platforms to take steps to mitigate and manage risks associated with illegal content. This could potentially impact video conferencing tools in the following ways:
    • Proactive Measures: Platforms are required to implement proactive measures to detect and prevent CSAM, potentially through partnerships with organizations like the Internet Watch Foundation (IWF) or using hash-matching technology to identify known CSAM content.
    • Content Moderation: Depending on categorization and functionalities, video conferencing tools could face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.

Transparency and Reporting:

The Act mandates transparency for regulated services, potentially requiring video conferencing tools to disclose information about their content moderation practices, including the volume of illegal content removed and the use of proactive technologies. This could involve:
  • Publishing transparency reports outlining their approach to content moderation.
  • Updating terms of service to clearly explain user safety measures and reporting mechanisms.

Potential Impacts on Specific Features:

  • Recording and Sharing: Features enabling recording and sharing of video conferencing sessions could be subject to additional scrutiny due to the potential for misuse. Platforms might be required to implement safeguards, such as requiring user consent for recording or limiting sharing capabilities to prevent the spread of illegal content.
  • Livestreaming: If video conferencing tools offer livestreaming functionality, they might face similar obligations as video-sharing platforms, potentially requiring content moderation during livestreams and measures to prevent the broadcast of illegal or harmful content.
  • Messaging and Chat: Integrated messaging and chat functionalities could be treated similarly to other messaging services. Depending on the level of encryption and the platform’s categorization, there could be requirements related to illegal content detection and removal or transparency regarding data access for law enforcement purposes.

Additional Considerations:

  • Risk Assessments: Video conferencing tool providers need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platform and features.
  • Age Assurance: If platforms have a significant number of child users or offer features attractive to children, they might need to implement age assurance mechanisms to protect children from harmful content or interactions.
  • Emerging Technologies: The Act is designed to be technology-neutral and adapt to evolving technologies. This could impact video conferencing tools as new features or functionalities emerge, requiring ongoing assessment and adaptation to comply with the Act’s principles.

Conclusion:

The specific requirements will depend on how video conferencing tools are categorized and the functionalities they offer.

The UK Online Safety Act 2023 will have a significant impact on popular gaming environments like Roblox and Fortnite, particularly in areas related to content moderation, child safety, and transparency. In addition, as large services that present a higher risk to children, they will have additional obligations to take measures to protect children from illegal content and from harm. Gaming environments like Roblox and Fortnite that allow chat fall under the definition of “user-to-user services” under the Act. This categorization means these platforms will have legal responsibilities for keeping users safe online, particularly children, and will need to take steps to mitigate and manage risks associated with illegal and harmful content.

Illegal Content:

The Act requires platforms to proactively address illegal content, including Child Sexual Abuse Material (CSAM) and terrorism-related content. Platforms like Roblox and Fortnite will need to implement:
  • Content moderation: Robust content moderation systems to detect and remove illegal content. This might involve using a combination of automated tools (like hash-matching for known CSAM) and human moderators.
  • Collaboration: Working with law enforcement agencies and organizations like the IWF to proactively identify and report illegal content.

Child Safety and Grooming:

The Act is particularly focused on protecting children from online harms, including grooming and sexual exploitation. Roblox and Fortnite, given their large child user bases, will need to implement robust safeguards to prevent and detect grooming behaviours. This could include:
  • Enhanced age verification mechanisms to accurately identify child users.
  • Restricting direct messaging or chat functionalities for children or requiring parental consent.
  • Providing educational resources and safety tips to children and parents.
  • Proactively monitoring chat and interactions for suspicious behaviour and patterns that indicate grooming.
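Pattern-based monitoring of the kind described above can be sketched as follows, though only as a toy: production grooming detection relies on trained classifiers and behavioural signals over time, not a fixed keyword list, and the patterns below are illustrative placeholders.

```python
import re

# Illustrative placeholder patterns; a real system would use trained
# classifiers and longitudinal behavioural signals, not a static list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bhow old are you\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (your )?(mum|dad|parents)\b", re.IGNORECASE),
    re.compile(r"\bmove (this|our) chat to\b", re.IGNORECASE),
]

def flag_for_review(message: str) -> bool:
    """Return True when a chat message matches any pattern associated
    with grooming behaviour, so it can be escalated to a human
    moderator rather than actioned automatically."""
    return any(p.search(message) for p in SUSPICIOUS_PATTERNS)
```

Escalating flagged messages to human review, rather than removing them automatically, keeps false positives survivable while still surfacing risky interactions.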

Harmful Content:

While not strictly illegal, certain types of content can be harmful, particularly to children. This includes content promoting self-harm, suicide, eating disorders, violence, and hate speech. Gaming environments and platforms  like Roblox and Fortnite will need to develop and implement clear policies on harmful content and establish effective moderation processes to address it. They will also need to consider the impact of their algorithms and recommender systems on user exposure to harmful content. This could involve:
  • Analyzing user-generated content and in-game interactions to identify and address potential risks.
  • Adjusting algorithms to limit the visibility of or recommendations for harmful content.
  • Providing users with tools to manage their online experience and filter out unwanted content.
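Demoting rather than deleting borderline content, as the second bullet above describes, can be sketched as a score penalty in the ranking step. The harm classifier and penalty weight here are hypothetical, not any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    relevance: float    # base recommender score
    harm_score: float   # 0.0-1.0 from a hypothetical harm classifier

def rank_feed(candidates: list[Candidate], penalty: float = 2.0) -> list[str]:
    """Order feed items by relevance minus a penalty proportional to
    the classifier's harm score, so likely-harmful items are demoted
    in recommendations rather than surfaced. The penalty weight is a
    tuning choice that trades reach against safety."""
    ranked = sorted(
        candidates,
        key=lambda c: c.relevance - penalty * c.harm_score,
        reverse=True,
    )
    return [c.item_id for c in ranked]
```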

Transparency and Reporting:

The Act emphasizes transparency and accountability, requiring platforms to be open about their content moderation practices and the risks they face. Roblox and Fortnite will need to publish regular transparency reports detailing their efforts to combat illegal and harmful content. These reports could include:
  • Data on the volume of content removed or action taken.
  • Information about their content moderation processes and the use of automated tools.
  • Insights into emerging risks and challenges.
They might also need to be more transparent about their algorithms and how they impact content visibility and user experience.

Risk Assessments:

Providers of services like Roblox and Fortnite will need to conduct thorough risk assessments to identify and evaluate the specific risks of illegal and harmful content on their platforms. These assessments should consider factors like their user base demographics, functionalities (like chat, user groups, and in-game interactions), and business models. The risk assessments should inform their safety strategies and content moderation policies.

Overall, the UK Online Safety Act 2023 signifies a significant shift in how online platforms, including popular gaming environments, are expected to approach user safety. Roblox and Fortnite will need to invest in robust safety measures, enhance their content moderation capabilities, and be more transparent about their practices to ensure compliance and protect their users, particularly children, from online harms.

The UK Online Safety Act 2023 will have a significant impact on major pornography platforms (like OnlyFans and Pornhub). These platforms are categorized as “services that feature provider pornographic content” under the Act, making them subject to specific duties to ensure they are not accessible to children.

Age Verification:

The Act requires these platforms to implement robust age-verification systems to prevent children from accessing pornographic content. This could involve:
  • Using third-party age verification providers.
  • Implementing stricter identity verification procedures.
  • Using age estimation technology.
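Whatever verification route is chosen, the service-side check ultimately reduces to gating access on a verified date of birth. The sketch below assumes a hypothetical assertion returned by a third-party provider; real providers each define their own APIs and signed response formats, which this omits.

```python
from datetime import date

def access_permitted(assertion: dict, today: date) -> bool:
    """Grant access only when a third-party provider has verified a
    date of birth and the user is 18 or over. The assertion shape is
    hypothetical; the check fails closed if anything is missing."""
    dob = assertion.get("verified_date_of_birth")
    if not assertion.get("verified") or dob is None:
        return False
    # Exact age: subtract a year if this year's birthday hasn't passed.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18
```

In practice the assertion would arrive as a signed token whose signature must be validated before the age check; failing closed on missing data is the important property either way.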

Illegal Content:

Like other platforms, providers of pornography will need to take steps to mitigate and manage risks associated with illegal content, such as CSAM and extreme pornography. This includes:
  • Proactive Measures: Platforms will need to implement proactive measures to detect and prevent illegal content, including using technology to scan for known illegal content.
  • Content Moderation: Platforms might face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.
  • Risk Assessments: Sites like OnlyFans and Pornhub will need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platforms. Given that research indicates that user-to-user pornography services are at a higher risk of hosting intimate image abuse and extreme pornography, these platforms will need to pay particular attention to:
    • User-generated Content: Both platforms host user-generated content, meaning they will need to assess the risks associated with this content, such as the potential for intimate image abuse, extreme pornography, and the exploitation of adults.
    • Functionalities: The risk assessment should consider the role of platform functionalities in facilitating illegal content. For example, the ability to post content anonymously, use direct messaging, and search for content can increase the risk of illegal content being shared.

Business Models:

The revenue models of these platforms could also be a risk factor. For example, platforms that rely heavily on advertising revenue may be incentivized to allow content to be uploaded in the most “friction-free” manner, potentially leading to less effective content moderation and an increased risk of illegal content.

Transparency and Reporting:

Sites like OnlyFans and Pornhub will be required to be transparent about their content moderation practices and the volume of illegal content removed, potentially through transparency reports. This could involve:
  • Publishing statistics on the amount of content removed or action taken in response to illegal content reports.
  • Providing information on the processes and technologies used for content moderation, and the use of human moderators.

Additional Considerations:

  • Impact of User Restrictions: The Act emphasizes user safety, but platforms need to be mindful of the potential negative impacts of overly restrictive measures on sex workers, particularly those who rely on these platforms for income. Restrictions that push sex work further underground could increase risks of exploitation and harm.
  • Collaboration with Law Enforcement: Pornography platforms will need to cooperate with law enforcement agencies in investigations related to illegal content. This could involve providing data or assisting with content takedowns.

Evolving Nature of the Act:

The Act is designed to be adaptable to technological advancements and evolving online harms. This means platforms will need to stay informed about Ofcom’s guidance and adapt their practices accordingly.

In conclusion, the UK Online Safety Act 2023 will necessitate major changes for major pornography platforms like OnlyFans and Pornhub. They will need to implement robust age verification systems, strengthen their content moderation practices, be more transparent about their operations, and conduct thorough risk assessments. Striking a balance between user safety and the rights of adult content creators will be a key challenge for these platforms as they adapt to the new regulatory landscape.

The UK Online Safety Act 2023 will have significant implications for services like X (formerly Twitter), Instagram, Facebook and Bluesky, particularly concerning freedom of expression, the use of algorithms, transparency, and democratic protections. For example, as a large platform with a substantial UK user base, X will be classified as a Category 1 service, subjecting it to the most stringent requirements of the Act.

Freedom of Expression:

The Act strives to balance online safety with the protection of free speech. While requiring platforms to address harmful content, it emphasises upholding freedom of expression and ensuring that legitimate content and diverse viewpoints are not unduly restricted. However, critics have expressed concerns about the potential for overzealous content removal and a chilling effect on free speech, especially given the Act’s broad definition of “content that is harmful to children”.  There are concerns that the robust safety duties might outweigh the “balancing measures” intended to protect freedom of expression. The Act’s impact on freedom of expression for services like X will depend on how Ofcom interprets and enforces its provisions. Striking a balance between user safety and free speech remains a complex challenge.

Use of Algorithms:

While the Act doesn’t explicitly mandate transparency about how algorithms are used to manage the risk of misinformation, the emphasis on transparency suggests that algorithms used for content moderation will likely face scrutiny. Ofcom has also highlighted the potential for algorithms to repeatedly expose users, particularly children, to harmful content, emphasizing the need for providers to mitigate these risks.  The Act mandates that platforms consider the risks their algorithms pose in relation to illegal content and content that is harmful to children, potentially requiring them to adjust algorithms or platform design to minimise potential harms. X will need to provide information about its algorithms in transparency reports, risk assessments, and terms of service, disclosing how they identify and mitigate harmful content like hate speech and misinformation. X will also need to ensure its algorithms comply with the Act’s requirements for protecting children.

Transparency:

Transparency is a key theme in the Act, especially for Category 1 services like X. The Act requires X to be transparent about its content moderation practices, especially those related to content of democratic importance. This includes:
  • Publishing annual transparency reports detailing its content moderation practices, the volume of harmful content removed, the use of algorithms, and their impact on users.
  • Providing clear information in its terms of service explaining its policies on content moderation, user safety, and reporting mechanisms.
  • Disclosing the use of “proactive technology,” such as automated tools or algorithms, used to detect and remove harmful content.
These transparency requirements aim to hold platforms accountable and empower users by providing clarity about how their data is used and content is moderated.

Democratic Protections:

The Act includes provisions to protect content of democratic importance, such as news publisher content, journalistic content, and user-generated content that contributes to political debate. Category 1 services like X must implement systems to ensure that decisions regarding content moderation consider the importance of free expression and provide equal treatment to diverse political opinions. However, the Act does not specifically address whether algorithmic requirements apply to content of democratic importance. It remains to be seen how Ofcom will address this in future guidance.

Conclusion:

The UK Online Safety Act 2023 will have a significant impact on platforms like X. The Act’s focus on user safety, transparency, and accountability will require X to make substantial changes to its content moderation practices, algorithmic transparency, and approach to democratic content. X’s compliance with the Act will be closely monitored by Ofcom, with potential penalties for breaches. It is crucial for platforms like X to proactively engage with the Act’s requirements and Ofcom’s guidance to ensure compliance and navigate the challenges of balancing online safety with freedom of expression. The Act’s effectiveness will ultimately depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.
  • Mitigation Measures: Google Search will need to implement proportionate measures to mitigate and manage the risks identified in its risk assessment. This could include measures like age assurance, content moderation, and user reporting mechanisms.

Enforcement and Penalties:

Ofcom will closely monitor Google Search’s compliance with the Act. Penalties for breaches can include fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater), service restriction orders, and even criminal sanctions for senior managers.

Conclusion:

The UK Online Safety Act 2023 poses significant challenges and obligations for Google Search. The Act’s focus on user safety, transparency, and accountability will require substantial changes to content moderation practices, algorithmic transparency, and the approach to content of democratic importance. Google’s compliance with the Act will be closely scrutinized, emphasizing the need for a proactive and comprehensive approach to meeting its requirements.

The Act’s effectiveness will depend on Ofcom’s ability to enforce its provisions and adapt to the evolving online landscape.

Image Generation Platforms

The specific application of the Act to image generation platforms, such as Midjourney, will depend on factors like how they are structured, their user base, and the content they host.

Potential impacts:

  • Illegal Content Generation: A primary concern would be the potential for Midjourney to be used to generate illegal content, such as child sexual abuse material (CSAM). The Act requires platforms to take steps to mitigate and effectively manage risks associated with illegal content, which could involve:
    • Implementing safeguards to prevent the generation of illegal images, potentially through content filtering or prompt moderation. This might involve restricting certain prompts or keywords known to be associated with illegal content.
    • Collaborating with law enforcement agencies and organizations like the Internet Watch Foundation (IWF) to identify and remove CSAM. This could include using hash-matching technology to detect known CSAM images.
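Hash-matching of the kind mentioned above can be illustrated with a short sketch: compute a digest of an uploaded file and compare it against a list of known-bad hashes. Note that this is a simplified, illustrative example only — real deployments use curated hash lists (for example, from the IWF) and perceptual hashing such as PhotoDNA, which also matches re-encoded or slightly altered copies; the hash value and function names below are hypothetical.

```python
import hashlib

# Illustrative only: a set of digests standing in for a curated list of
# known illegal images. Real systems use perceptual hashes supplied by
# organizations like the IWF, not exact SHA-256 matches.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(data: bytes) -> str:
    """Exact-match digest of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_illegal(data: bytes) -> bool:
    """Return True if the file's digest appears on the block list."""
    return file_digest(data) in KNOWN_BAD_HASHES

# b"test" hashes to the digest above, so it is flagged for review.
assert is_known_illegal(b"test")
assert not is_known_illegal(b"harmless holiday photo")
```

The trade-off behind perceptual rather than cryptographic hashing is that a single changed pixel defeats an exact-match digest, whereas perceptual hashes tolerate resizing and re-compression.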

Harmful Content and Algorithms:

While Midjourney’s primary function is image generation, its algorithms could still be subject to scrutiny under the Act, especially if they influence content recommendations or user exposure to potentially harmful imagery. The Act mandates that platforms consider how algorithms impact user exposure to illegal content and content harmful to children. Midjourney might need to assess how its algorithms could contribute to the spread of harmful content and implement safeguards to minimize risks. For example:
  • Analyzing user prompts and generated images to identify patterns or trends that could indicate harmful content generation.
  • Adjusting algorithms to limit the visibility or recommendation of images that are likely to be harmful.
Note: Midjourney already takes steps to prevent the automatic generation of potentially explicit or defamatory images.
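The kind of prompt screening described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example — the blocked-term list and function name are invented, and production systems rely on trained classifiers and human review rather than simple keyword matching.

```python
import re

# Hypothetical blocked-term list; real platforms combine curated lists
# with machine-learned classifiers rather than bare keyword matching.
BLOCKED_TERMS = {"gore", "beheading"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to image generation."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return not (words & BLOCKED_TERMS)

assert screen_prompt("a watercolour of a lighthouse at dawn")
assert not screen_prompt("photorealistic gore scene")
```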

Transparency and Accountability:

The Act’s emphasis on transparency could require Midjourney to:
  • Transparency: Disclose information about its algorithms and content moderation practices, particularly how they address illegal and harmful content. This could involve publishing transparency reports or updating its terms of service.
  • User Controls: Provide users with more control over the content they encounter, potentially through filtering options or reporting mechanisms.
  • Risk Assessments: Midjourney would likely need to conduct thorough risk assessments, specifically evaluating the risks associated with image generation and its potential to facilitate illegal or harmful content. This would involve:
    • Identifying risk factors specific to image generation, such as the ease of creating realistic imagery or the potential for deepfakes.
    • Considering how its design and operation, including its user interface, algorithms, and content moderation processes, could contribute to or mitigate risks.

Additional Considerations:

  • Categorization: The Act categorizes platforms based on their size, reach, and functionality. Depending on Midjourney’s user base and functionalities, it could fall under different categories, influencing the specific requirements it needs to meet.
  • Emerging Technology: Image generation is a rapidly evolving field. The Act is intended to be technology-neutral and to adapt to new technologies, but its specific application to image generation platforms like Midjourney may require further clarification from Ofcom.
  • International Applicability: If Midjourney has a significant UK user base or targets UK users, the Act could apply even if the platform is based outside the UK.
In conclusion, Midjourney and similar platforms will need to monitor Ofcom’s guidance and adapt their practices to ensure compliance and mitigate potential risks. The evolving nature of image generation technology and the Act’s implementation will require ongoing dialogue and collaboration between Ofcom, service providers, and stakeholders.

Video Conferencing Tools

The Act will also affect video conferencing tools. Key considerations:
  • Categorization: The Act’s applicability and specific requirements hinge on how video conferencing tools are categorized. This depends on factors like user base, functionality, and whether they primarily facilitate private or public communication. If categorized as user-to-user services due to features like group chats or content sharing capabilities, they might be subject to more stringent requirements, similar to social media platforms.
  • Private vs. Public Communication: A core principle of the Act is the distinction between private and public communication. The Act generally avoids imposing obligations related to private communications, recognizing the importance of privacy. Video conferencing tools primarily used for private one-to-one or small group conversations might fall under this exemption. However, features enabling broader content sharing, recording, or public broadcasting could trigger additional scrutiny.
  • Illegal Content and CSAM: A significant concern is the potential misuse of video conferencing tools for illegal activities, including the creation and distribution of Child Sexual Abuse Material (CSAM). The Act requires platforms to take steps to mitigate and manage risks associated with illegal content. This could potentially impact video conferencing tools in the following ways:
    • Proactive Measures: Platforms are required to implement proactive measures to detect and prevent CSAM, potentially through partnerships with organizations like the Internet Watch Foundation (IWF) or by using hash-matching technology to identify known CSAM content.
    • Content Moderation: Depending on categorization and functionalities, video conferencing tools could face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.

Transparency and Reporting:

The Act mandates transparency for regulated services, potentially requiring video conferencing tools to disclose information about their content moderation practices, including the volume of illegal content removed and the use of proactive technologies. This could involve:
  • Publishing transparency reports outlining their approach to content moderation.
  • Updating terms of service to clearly explain user safety measures and reporting mechanisms.

Potential Impacts on Specific Features:

  • Recording and Sharing: Features enabling recording and sharing of video conferencing sessions could be subject to additional scrutiny due to the potential for misuse. Platforms might be required to implement safeguards, such as requiring user consent for recording or limiting sharing capabilities to prevent the spread of illegal content.
  • Livestreaming: If video conferencing tools offer livestreaming functionality, they might face similar obligations to video-sharing platforms, potentially requiring content moderation during livestreams and measures to prevent the broadcast of illegal or harmful content.
  • Messaging and Chat: Integrated messaging and chat functionalities could be treated similarly to other messaging services. Depending on the level of encryption and the platform’s categorization, there could be requirements related to illegal content detection and removal, or transparency regarding data access for law enforcement purposes.

Additional Considerations:

  • Risk Assessments: Video conferencing tool providers will need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platform and features.
  • Age Assurance: If platforms have a significant number of child users or offer features attractive to children, they might need to implement age assurance mechanisms to protect children from harmful content or interactions.
  • Emerging Technologies: The Act is designed to be technology-neutral and adapt to evolving technologies. As new features or functionalities emerge, video conferencing tools will require ongoing assessment and adaptation to comply with the Act’s principles.

Conclusion:

The specific requirements will depend on how video conferencing tools are categorized and the functionalities they offer.

Gaming Environments

The UK Online Safety Act 2023 will have a significant impact on popular gaming environments like Roblox and Fortnite, particularly in areas related to content moderation, child safety, and transparency. As large service providers they present a higher risk to children and will have additional obligations to protect children from illegal content and from harm. Gaming environments like Roblox and Fortnite that allow chat fall under the definition of “user-to-user services” under the Act. This categorization means these platforms will have legal responsibilities for keeping users safe online, particularly children, and will need to take steps to mitigate and manage the risks associated with illegal and harmful content.

Illegal Content:

The Act requires platforms to proactively address illegal content, including Child Sexual Abuse Material (CSAM) and terrorism-related content. Platforms like Roblox and Fortnite will need to implement:
  • Content moderation: Robust content moderation systems to detect and remove illegal content. This might involve a combination of automated tools (like hash-matching for known CSAM) and human moderators.
  • Collaboration channels: Mechanisms for working with law enforcement agencies and organizations like the IWF to proactively identify and report illegal content.

Child Safety and Grooming:

The Act is particularly focused on protecting children from online harms, including grooming and sexual exploitation. Roblox and Fortnite, given their large child user bases, will need to implement robust safeguards to prevent and detect grooming behaviours. This could include:
  • Enhanced age verification mechanisms to accurately identify child users.
  • Restricting direct messaging or chat functionalities for children, or requiring parental consent.
  • Providing educational resources and safety tips to children and parents.
  • Proactively monitoring chat and interactions for suspicious behaviour and patterns that indicate grooming.

Harmful Content:

While not strictly illegal, certain types of content can be harmful, particularly to children, including content promoting self-harm, suicide, eating disorders, violence, and hate speech. Gaming environments like Roblox and Fortnite will need to develop and implement clear policies on harmful content and establish effective moderation processes to address it. They will also need to consider the impact of their algorithms and recommender systems on user exposure to harmful content. This could involve:
  • Analyzing user-generated content and in-game interactions to identify and address potential risks.
  • Adjusting algorithms to limit the visibility of or recommendations for harmful content.
  • Providing users with tools to manage their online experience and filter out unwanted content.

Transparency and Reporting:

The Act emphasizes transparency and accountability, requiring platforms to be open about their content moderation practices and the risks they face. Roblox and Fortnite will need to publish regular transparency reports detailing their efforts to combat illegal and harmful content. These reports could include:
  • Data on the volume of content removed or action taken.
  • Information about their content moderation processes and the use of automated tools.
  • Insights into emerging risks and challenges.
They might also need to be more transparent about their algorithms and how they impact content visibility and user experience.

Risk Assessments:

Providers of services like Roblox and Fortnite will need to conduct thorough risk assessments to identify and evaluate the specific risks of illegal and harmful content on their platforms. These assessments should consider factors like user base demographics, functionalities (such as chat, user groups, and in-game interactions), and business models, and should inform their safety strategies and content moderation policies. Overall, the Act signifies a significant shift in how online platforms, including popular gaming environments, are expected to approach user safety. Roblox and Fortnite will need to invest in robust safety measures, enhance their content moderation capabilities, and be more transparent about their practices to ensure compliance and protect their users, particularly children, from online harms.

Pornography Platforms

The UK Online Safety Act 2023 will have a significant impact on major pornography platforms like OnlyFans and Pornhub. These platforms are categorized as “services that feature provider pornographic content” under the Act, making them subject to specific duties to ensure they are not accessible to children.

Age Verification:

The Act requires these platforms to implement robust age-verification systems to prevent children from accessing pornographic content. This could involve:
  • Using third-party age verification providers.
  • Implementing stricter identity verification procedures.
  • Using age estimation technology.
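The core check behind any of the verification routes above reduces to confirming that a verified date of birth puts the user at or over 18. A minimal sketch, assuming the date of birth has already been verified by a third-party provider (the function name is illustrative):

```python
from datetime import date
from typing import Optional

def is_adult(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Return True if the person is 18 or over on the given day.

    Subtract one year if this year's birthday has not yet occurred.
    """
    today = today or date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18

# A user born 1 March 2010 is 14 on 1 June 2024 and is refused access.
assert not is_adult(date(2010, 3, 1), today=date(2024, 6, 1))
assert is_adult(date(2000, 3, 1), today=date(2024, 6, 1))
```

The hard part in practice is not this arithmetic but establishing the date of birth reliably in the first place, which is why the Act points platforms toward verification providers and age-estimation technology.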

Illegal Content:

Like other platforms, providers of pornography will need to take steps to mitigate and manage risks associated with illegal content, such as CSAM and extreme pornography. This includes:
  • Proactive Measures: Platforms will need to implement proactive measures to detect and prevent illegal content, including using technology to scan for known illegal content.
  • Content Moderation: Platforms might face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.
  • Risk Assessments: Sites like OnlyFans and Pornhub will need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platforms. Given that research indicates user-to-user pornography services are at a higher risk of hosting intimate image abuse and extreme pornography, these platforms will need to pay particular attention to:
    • User-generated Content: Both platforms host user-generated content, meaning they will need to assess the risks associated with this content, such as the potential for intimate image abuse, extreme pornography, and the exploitation of adults.
    • Functionalities: The risk assessment should consider the role of platform functionalities in facilitating illegal content. For example, the ability to post content anonymously, use direct messaging, and search for content can increase the risk of illegal content being shared.

Business Models:

The revenue models of these platforms could also be a risk factor. For example, platforms that rely heavily on advertising revenue may be incentivized to allow content to be uploaded in the most “friction-free” manner, potentially leading to less effective content moderation and an increased risk of illegal content.

Transparency and Reporting:

Sites like OnlyFans and Pornhub will be required to be transparent about their content moderation practices and the volume of illegal content removed, potentially through transparency reports. This could involve:
  • Publishing statistics on the amount of content removed or action taken in response to illegal content reports.
  • Providing information on the processes and technologies used for content moderation, and the use of human moderators.
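The statistics described above are essentially aggregations over a moderation log. A minimal sketch, assuming a hypothetical log format in which each enforcement action records a reason and a detection method:

```python
from collections import Counter

# Hypothetical moderation log: one entry per enforcement action.
# Real logs would carry timestamps, content IDs, and appeal outcomes.
actions = [
    {"reason": "csam", "method": "hash-match"},
    {"reason": "extreme-pornography", "method": "human-review"},
    {"reason": "csam", "method": "hash-match"},
    {"reason": "intimate-image-abuse", "method": "user-report"},
]

def summarise(log):
    """Return removal counts by reason and by detection method."""
    return (
        Counter(a["reason"] for a in log),
        Counter(a["method"] for a in log),
    )

by_reason, by_method = summarise(actions)
assert by_reason["csam"] == 2
assert by_method["hash-match"] == 2
```

Splitting counts by detection method matters here because the Act asks platforms to disclose their use of proactive technology separately from human moderation.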

Additional Considerations:

  • Impact of User Restrictions: The Act emphasizes user safety, but platforms need to be mindful of the potential negative impacts of overly restrictive measures on sex workers, particularly those who rely on these platforms for income. Restrictions that push sex work further underground could increase risks of exploitation and harm.
  • Collaboration with Law Enforcement: Pornography platforms will need to cooperate with law enforcement agencies in investigations related to illegal content. This could involve providing data or assisting with content takedowns.

Evolving Nature of the Act:

The Act is designed to be adaptable to technological advancements and evolving online harms, which means platforms will need to stay informed about Ofcom’s guidance and adapt their practices accordingly. In conclusion, the UK Online Safety Act 2023 will necessitate significant changes for major pornography platforms like OnlyFans and Pornhub: robust age verification systems, stronger content moderation practices, greater transparency about their operations, and thorough risk assessments. Striking a balance between user safety and the rights of adult content creators will be a key challenge for these platforms as they adapt to the new regulatory landscape.

Services in/out of scope

What is the territorial scope? The OSA applies to services with “links with the UK”, even if they are based overseas (e.g. a US social network, AI service, or search engine). This includes services with a significant number of UK users, services that target the UK market, and services accessible in the UK where there is a risk of harm to UK individuals.

The OSA has a broad scope, but specific types of online services and content are exempt from its regulatory duties. These exemptions are narrowly defined, and online services should exercise caution before assuming they can rely on them.

Exempt User-to-user Services:
  • Email, SMS, and MMS Services: User-to-user services that solely enable email, SMS messages, or MMS messages as user-generated content (excluding identifying content) are exempt from the Act.
  • Limited Functionality Services: Services that only allow users to post comments or reviews on content generated by the service provider itself are exempt. For example, this would exempt services where users can only post “below the line” comments or reviews on provider-generated articles or products.
  • One-to-one Live Aural Communications: Services that only facilitate one-to-one live audio communications are exempt.
General Exemptions for User-to-user and Search Services:
  • Internal Business Services: Services that function as internal resources or tools for businesses and are only accessible to a closed group of individuals connected to the business, such as employees or authorized personnel, are exempt. Examples include business intranets, productivity and collaboration tools, and content management systems.
  • Public Body Services: Services provided by public bodies or educational institutions for the purpose of carrying out their public or educational functions are exempt. This includes services provided by UK public authorities and non-UK entities that exercise public functions.
  • Education and Childcare Provider Services: Certain educational and childcare providers already subject to safeguarding duties that require them to protect children online are exempt to prevent overlapping oversight.
Exempt Content:
  • Paid-for Advertising: Paid-for advertising content is generally excluded from the scope of the Act. However, larger providers are still subject to a duty to protect users from fraudulent advertising.
  • Comments and Reviews on Provider Content: User-generated comments and reviews specifically relating to content published by the service provider are exempt. This exemption applies to comments and reviews on news publisher sites and many sites selling goods and services.
  • Combined Services: Note that services combining features of both user-to-user and search services, such as a social media platform with a built-in search engine, are not exempt; they are subject to the duties applicable to both types of services.
  • The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs.
Ofcom has released a tool to check whether your services are in scope: Ofcom – check if you are in scope
