AI Legal & Compliance
Knowledge Hub

Information on the main legal issues arising from the  development of AI (including copyright, data protection, security, and AI compliance issues)

AI Law, Society & Ethics Playlist

 

Law & Guidance

Evolving Regulatory Milestones

European Union: Staggered Implementation of the EU AI Act

The EU AI Act entered into force on August 1, 2024, marking a significant shift to binding law. Its implementation is proceeding in phases:   

  • February 2, 2025: Prohibitions on specific AI systems and obligations for AI literacy become applicable. This makes practices like “social scoring” illegal and requires organizations to ensure employees have the skills to deploy AI responsibly.

  • August 2, 2025: Rules for General Purpose AI (GPAI) models and the governance framework are now applicable.

  • August 2, 2026: Most other provisions of the Act will be fully applicable, including the full obligations for high-risk AI systems.

  • December 2026: Member States must transpose the new Product Liability Directive into their national laws.

 

United States: A Patchwork of State Laws and Judicial Precedent

The U.S. regulatory landscape is characterized by a decentralized, state-by-state approach.

  • 2024: A flurry of state-level legislation was introduced or enacted across topics such as deepfakes, consumer protection, and government use of AI. Federal agencies like the SEC and FINRA issued guidance applying existing rules to AI. The New York Child Data Protection Act was enacted to protect minors’ data from being used for AI training without consent.   

  • 2025: Landmark court rulings in cases like Bartz v. Anthropic and Thomson Reuters v. Ross Intelligence provided crucial, though at times contrasting, guidance on the “fair use” of copyrighted materials for training data. The New York Child Data Protection Act takes effect on June 20, 2025.

  • 2026: California’s AB 2013 and Colorado’s comprehensive AI law (SB 205) are set for full implementation.   

 

Other Key Jurisdictions:

  • China: While a unified AI law is on the legislative agenda for 2024-2025, the government has enacted a number of interim standards, with additional national standards set to take effect on November 1, 2025.

  • United Kingdom: The UK has no overarching AI regulations but is pursuing a “policy-first” approach. The government’s “AI Opportunities Action Plan” was launched in January 2025, and regulators like the FCA and Bank of England have created their own AI labs to inform future policy.

  • Singapore: Taking an “agility” approach, Singapore has focused on targeted legislation. The “Elections (Integrity of Online Advertising) (Amendment) Act 2024,” which bans manipulated online election ads, was passed on October 15, 2024. The Cybersecurity Agency also released guidelines for securing AI systems on that date.

  • Japan: The “AI Promotion Act” was approved on May 28, 2025, with most of its provisions becoming effective on June 4, 2025. This legislation shifts Japan from a soft-law approach to a formal framework that prioritizes innovation.

Major Country Overviews

People's Republic of China

Chinese AI Law: A Multi-Layered Approach

China’s legal framework governing Artificial Intelligence is not a single comprehensive law but a multi-layered system encompassing data privacy legislation and increasingly specific regulations targeting AI technologies.

Data Privacy

  • Cybersecurity Law (CSL)
    • The foundation of China’s cybersecurity framework
    • Requires network operators to secure networks & protect user data
    • Addresses critical information infrastructure & cross-border data transfers
  • Data Security Law (DSL)
    • Focuses on data security & management
    • Categorises data by importance, with stricter rules for sensitive data
    • Emphasises data localisation & security assessments for cross-border transfers
  • Personal Information Protection Law (PIPL)
    • China’s comprehensive data protection law
    • Sets strict rules for personal data handling: consent, impact assessments, security
    • Closely aligned with the EU’s GDPR in protecting individual privacy
  • AI-Specific and Related Regulations:

    • Regulations on the Management of Algorithmic Recommendation in Internet Information Services: These rules govern the use of algorithms to recommend content online, focusing on user rights, transparency, and preventing harmful information.
    • Administrative Provisions on Deep Synthesis Internet Information Services: These regulations address technologies like deepfakes, mandating labelling and measures against misinformation and the misuse of individuals’ likeness.
    • Cybersecurity Review Measures: These measures can apply to AI systems within critical infrastructure or those handling significant sensitive data, requiring security reviews.
    • Science and Technology Progress Law: This broader law promotes innovation in science and technology, including AI, and emphasises ethical considerations.

China has implemented an algorithm regulation pursuant to:

  • The Personal Information Protection Law of the PRC
  • The Measures on the Administration of Internet Information Services

The law prohibits algorithmic generation of fake news on online news services and also requires service providers to take special care to address the needs of older users and to prevent fraud. The regulations also prohibit providers from using algorithms to unreasonably restrict other providers or engage in anti-competitive behaviour. The law regulates internet information services algorithmic activities and is intended to:

  • Carry forward the Core Socialist Values
  • Preserve national security and the societal public interest
  • Protect the lawful rights and interests of citizens, legal persons, and other organisations
  • Promote the healthy and orderly development of internet information services.

Providers are required to preserve network records and to cooperate with the cybersecurity and informatisation, telecommunications, public security, and market regulation authorities, as well as other relevant departments, in carrying out security assessments and supervision.

Summary

  • Regulates recommendation algorithms used by internet information service providers.
  • Requires algorithmic transparency and explainability, especially for those affecting public opinion or social mobilisation
  • Prohibits algorithmic discrimination and the use of algorithms to manipulate user choices or induce addiction
  • Mandates user control, allowing users to opt out of personalised recommendations or access explanations for algorithmic decisions (see the sketch following this list)
  • Imposes compliance obligations, including algorithm filing and regular security assessments
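As a minimal sketch of what the user-control requirement can look like in practice (our own illustration; `personalised_model` and `fallback_ranking` are hypothetical objects, not anything defined in the regulation), a recommendation service checks the user's opt-out flag before applying any profiling:

```python
def recommend(user, catalogue, personalised_model, fallback_ranking):
    """Serve recommendations while honouring a personalisation opt-out.

    Users who have opted out receive a non-personalised ranking (for example
    by recency or overall popularity) instead of profile-based suggestions.
    """
    if user.personalisation_opt_out:
        # No profiling of this user: fall back to a generic ranking.
        return fallback_ranking(catalogue)
    return personalised_model.rank(catalogue, user)
```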

Translation: Algorithm Recommendation Regulation

Interim Measures for the Management of Generative Artificial Intelligence Services (the “GAI Measures”)

Purpose:

The GAI Measures regulate the development and use of generative AI services in China. The Measures aim to promote the healthy development of generative AI services, protect the safety of personal data and public interests, and prevent the use of generative AI services for unlawful purposes.

Scope:

The GAI Measures apply to all organisations and individuals that provide generative AI services in China.

Generative AI services are defined as services that use AI to generate a wide range of content, including text, images, audio, and video.

Key Provisions for Providers:

  • Registration: Register with the relevant authorities.
  • Content Review: Review the content generated using their services to ensure it is lawful.
  • User Controls: Provide users with controls to manage the content that is generated for them.
  • Data Security: Protect the security of personal data and other sensitive data.
  • Prohibited Activities: The GAI Measures prohibit the use of generative AI services for various activities, including for creating content that is obscene, violent, or discriminatory.
 

Administrative Provisions on Deep Synthesis Internet Information Services (the “Deep Synthesis Provisions”)

Purpose:

The Deep Synthesis Provisions relate to the GAI Measures and provide that generative AI content must be properly labelled to avoid the risks of deepfake technologies.

They apply to generative AI providers and users of deep synthesis technology. 

The provisions define deep synthesis technology as that which employs deep learning, virtual reality, and other synthetic algorithms to produce various content (text, images, audio, video, virtual scenes, and other network information).

They impose various risk assessment, risk management labelling and disclosure obligations on the providers and users of deep synthesis technology, which uses mixed datasets and algorithms to produce synthetic content.

China’s approach to AI governance focuses on social order and stability and economic development. Key aspects:

  • Social Stability: China prioritises using AI to maintain social order and stability. This includes deploying AI in surveillance and public security systems. The government emphasises the need for AI systems to be transparent, fair, and free from bias to prevent social unrest.
  • Economic Development: AI is seen as a critical driver of economic growth. China aims to become a global leader in AI by investing heavily in AI research and development, fostering innovation, and supporting AI startups. The government encourages integrating AI across various industries to boost productivity and competitiveness.

The Cyberspace Administration of China (CAC) is central to AI regulation. Among other things, it is responsible for administering the algorithm filing system, conducting security assessments, and enforcing the Algorithmic Recommendation Regulations, the Deep Synthesis Provisions, and the GAI Measures described above.

European Union

The European Economic Area includes the 27 EU member states as well as Norway, Iceland and Liechtenstein.

The EU is renowned for its advanced regulation, including in financial services, cryptoassets, personal data processing, online social networks and platforms, and now AI with the new EU AI Act. However, it is also focused on becoming a stronger market for the commercial development and adoption of AI.

“To become a global leader in artificial intelligence (AI) is the objective of the AI Continent Action Plan launched today. As set out by President von der Leyen at the AI Action Summit in February 2025 in Paris, this ambitious initiative is set to transform Europe’s strong traditional industries and its exceptional talent pool into powerful engines of AI innovation and acceleration.” (AI Action Plan)

 

AI Action Plan

In addition, Europe is trying to catch up with China and the United States in the commercial and military AI arms race. To this end, it recently announced more details about its AI Action Plan, which includes:

  1. Support for giant data centres known as ‘gigafactories’ – large-scale facilities equipped with approximately 100,000 state-of-the-art AI chips, four times more than current AI factories. Private investment in gigafactories will be further stimulated through the InvestAI initiative, which will mobilise €20 billion of investment for up to five AI gigafactories across the Union.
  2. Encouraging datalabs to procure quality data. A comprehensive Data Union Strategy will be launched in 2025 to create a true internal market for data that can scale up AI solutions.
  3. An Apply AI programme to encourage business uptake – only 13.5% of companies in the EU have adopted AI. To develop tailored AI solutions and boost their industrial use and full adoption in strategic EU public and private sectors, the Commission will launch the Apply AI Strategy. European AI innovation infrastructure, notably the AI Factories and the European Digital Innovation Hubs (EDIHs), will play an important role.
  4. Encouraging R&D-focused migration from outside the EU – facilitating international recruitment of highly skilled AI experts and researchers through initiatives such as the Talent Pool, the Marie Skłodowska-Curie Action ‘MSCA Choose Europe’ and AI fellowship schemes offered by the upcoming AI Skills Academy, and enabling legal migration pathways for highly skilled non-EU workers in the AI sector.
  5. AI regulatory simplification – The AI Act raises citizens’ trust in technology and provides investors and entrepreneurs with the legal certainty they need to scale up and deploy AI throughout Europe. The Commission will launch the AI Act Service Desk, to help businesses comply with the AI Act. It will serve as the central point of contact and hub for information and guidance on the AI Act.

The European Union’s Artificial Intelligence Act (AI Act) is the most comprehensive AI legislation enacted at a supranational level to date, and it aims to foster trustworthy AI development by way of rules on the development, deployment, and use of AI systems.

The EU AI Act is a landmark regulatory framework designed to address the risks associated with the development, deployment, and use of AI across the European Union. It is part of the European Commission’s broader strategy to position the EU as a leader in trustworthy AI while ensuring that the technology is safe and respects fundamental rights. 

The AI Act classifies AI applications into risk categories with corresponding regulations.

Source: https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf?. 

Risk Classifications and Regulations

  • Unacceptable Risk (Banned) – Article 5: AI systems deemed a clear threat to fundamental rights or safety are prohibited. This includes social scoring by governments (e.g. some of the models used in China).

  • High-Risk (Strict Requirements) – Article 6, Annex I & Annex III: Applications posing significant risks require adherence to stringent regulations. Examples include recruitment tools using AI for candidate evaluation and biometric identification systems. These regulations cover:

    • Risk Assessment and Mitigation: Developers must conduct comprehensive risk assessments and implement plans to mitigate identified risks.
    • Data Governance: High-risk AI systems require high-quality, representative datasets to minimize bias and discrimination. Data governance plans outlining collection, storage, and access controls become mandatory.
    • Transparency and Explainability: Clear and accessible explanations for AI decision-making processes are necessary. This ensures accountability and allows individuals to challenge AI-driven decisions.
    • Human Oversight: Human oversight mechanisms must be implemented to prevent algorithmic bias and ensure human control over critical decisions.
    • Post-Market Monitoring: Developers are obligated to monitor the performance of high-risk AI systems after deployment and report any serious incidents.
  • Minimal Risk (Light Touch): The vast majority of AI applications, such as spam filters and AI-powered video games, fall under this category. These applications face minimal or no regulations, allowing for innovation.

  • General-Purpose AI (GPAI) – Chapter V (Articles 51–56): Providers of GPAI models have specific documentation and compliance requirements, especially if the model presents systemic risk.

The AI Act is designed to work alongside other major European regulations, such as the General Data Protection Regulation (GDPR), ensuring that AI systems respect privacy rights, data security, and human dignity.

GDPR

While not solely focused on AI, the GDPR has significant implications for the processing of personal data in AI systems, particularly concerning consent, data minimisation, and the rights of data subjects. For example, Article 22 of the GDPR (automated decision-making) is particularly relevant to AI data processing and provides that:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
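As a practical illustration of what this means for system design, here is a minimal sketch (our own illustration, not a prescribed implementation; the `model`, `human_review_queue` and the 0.7 threshold are hypothetical) of a decision flow in which the model only recommends and a human makes the legally significant decision, so the outcome is not based solely on automated processing:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    decided_by: str  # "model" or "human"
    rationale: str

def decide_loan(application, model, human_review_queue):
    """Sketch of an Article 22-aware decision flow.

    The model produces a recommendation, but any outcome with legal or
    similarly significant effect is confirmed or overridden by a human
    reviewer, so the final decision is not solely automated.
    """
    score = model.predict(application)      # automated step (hypothetical model)
    recommendation = score >= 0.7           # hypothetical threshold

    # Route every legally significant decision to a human reviewer.
    review = human_review_queue.submit(
        application=application,
        model_score=score,
        model_recommendation=recommendation,
    )
    return Decision(
        approved=review.approved,
        decided_by="human",
        rationale=review.notes,
    )
```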

Digital Services Act (DSA): This act regulates online platforms and includes provisions relevant to algorithmic content and recommendations, transparency and the spread of illegal content, which intersects with AI applications.

Other key obligations under the AI Act include:

  • Disclosure: Systems interacting with humans or manipulating content (e.g., deepfakes) must disclose their nature as AI.
  • Record-keeping: Developers and users of high-risk AI systems are required to maintain logs and documentation on their systems’ development, deployment, and outcomes. This ensures traceability and allows authorities to audit compliance with the Act.
  • Human Oversight: High-risk AI systems must be designed to enable human oversight, ensuring that decisions made by AI can be overridden or intervened upon in cases of malfunction or unintended outcomes.
  • Data and Algorithm Governance: The AI Act mandates that datasets used to train high-risk AI systems must be:
    • Relevant and representative
    • Free from bias and discrimination
    • Of high quality to minimise risks and ensure accuracy
    • Proper documentation and traceability are required to track the dataset’s origin and use.
  • The AI Act sets up a European Artificial Intelligence Board (EAIB) to oversee its implementation and ensure compliance across member states. 
  • Each member state will designate national authorities responsible for enforcement, market surveillance, and risk assessments. 
  • The maximum penalties for non-compliance depend on the nature of the infringement (a worked sketch of the “whichever is higher” calculation follows this list):
    • For non-compliance with the prohibited practices (Article 5): administrative fines of up to €35m or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
    • For other non-compliance: administrative fines of up to €15m or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
    • The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request is subject to administrative fines of up to €7.5m or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Extraterritorial Impact: The AI Act applies not only to organisations operating within the EU but also to those outside the EU if their AI systems affect individuals within the EU. This ensures that global companies deploying AI in or for Europe must comply with the regulation.
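To make the “whichever is higher” rule concrete, the sketch below (our own worked example, not text from the Act) computes the applicable ceiling for a given worldwide turnover:

```python
def max_fine(worldwide_annual_turnover_eur: float, infringement: str) -> float:
    """Illustrative calculation of the AI Act's maximum administrative fines.

    The ceiling is the higher of a fixed amount and a percentage of total
    worldwide annual turnover for the preceding financial year.
    """
    ceilings = {
        "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
        "other_non_compliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_amount, turnover_share = ceilings[infringement]
    return max(fixed_amount, turnover_share * worldwide_annual_turnover_eur)

# Example: an undertaking with €2bn turnover using a prohibited practice
# faces a ceiling of max(€35m, 7% x €2bn) = €140m.
print(max_fine(2_000_000_000, "prohibited_practices"))  # 140000000.0
```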

An AI Code of Practice has been drafted by 13 AI experts to help providers of general-purpose AI (GPAI) models comply with the AI Act.

The EU General-Purpose AI Code of Practice details the rules for GPAI providers, including transparency and copyright-related rules, as well as systemic risk taxonomy, risk assessment, and mitigation measures.

Compliance with the Code of Practice will be an important tool for demonstrating compliance with the AI Act.

The Commission has also issued Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act

The EU has also established a new group of AI experts to advise on its AI strategy.

Further reading:

European Commission appoints 13 experts to draft AI Code | Euronews

AI Act: Participate in the drawing-up of the first General-Purpose AI Code of Practice | Shaping Europe’s digital future

European Artificial Intelligence Act comes into force

The AI Act entered into force on 1 August 2024 and applies from 2 August 2026, except for the following specific provisions listed in Art. 113:

  1. Enforcement of Chapters I and II (general provisions, definitions, and rules regarding prohibited uses of AI): 2 February 2025 (Art. 113)
  2. Enforcement of certain requirements (including notification obligations, governance, rules on GPAI models, confidentiality, and penalties (other than penalties for providers of GPAI models)): from 2 August 2025 (Art. 113).
  3. Providers of GPAI models placed on the EU market before 2 August 2025 are grandfathered for compliance purposes until 2 August 2027 (Art. 111)
  4. Enforcement of Art. 6 (and the corresponding obligations regarding high-risk AI systems) commences on  2 August 2027 (Art. 113)

Further reading:

Long awaited EU AI Act becomes law after publication in the EU’s Official Journal

The liability framework for AI systems in the EU is undergoing a notable transformation. Traditionally, liability for harm caused by products was anchored in fault-based theories such as negligence, requiring claimants to prove that the producer or developer fell below a standard of care. However, with the EU’s adoption of the new Product Liability Directive (Directive (EU) 2024/2853), the regime has shifted toward strengthening strict liability principles. This means that manufacturers, developers, and other economic operators can be held liable for defective AI systems and software without a claimant needing to demonstrate negligence.

A significant innovation of the updated Directive is the explicit recognition of software and AI systems as “products.” This brings them squarely within the scope of strict product liability, aligning digital and algorithmic risks with those of tangible goods. The definition of defectiveness has also been broadened: it no longer refers solely to traditional design or manufacturing flaws but extends to issues such as cybersecurity vulnerabilities, failures in post-market updates, and risks arising from systems that continue to learn and evolve over time. This expansion is particularly important for AI, where defects may emerge dynamically after a system is placed on the market.

The Directive also addresses the practical difficulty of proving defectiveness and causation in the context of complex digital systems. Where the complexity of AI makes it disproportionately difficult for injured parties to access or understand technical evidence, the law introduces rebuttable presumptions of defectiveness and causation. In these situations, the burden can shift to the manufacturer or provider to disprove liability. While these presumptions are not absolute, they considerably ease the claimant’s task and reflect the EU’s intention to rebalance the evidentiary asymmetry between individuals and powerful technology providers.

Taken together, these changes represent a substantial strengthening of consumer protection in the age of AI. They do not entirely displace negligence-based liability, which will continue to play a role in areas outside the product liability framework, but they signal a clear policy direction: when AI systems cause harm, the risks should rest primarily with those who design, market, or profit from the technology, rather than with those who suffer from its defects.

United States of America

The US approach to AI regulation is currently more fragmented and sector-specific compared to the EU’s. There is no single comprehensive federal AI law; the focus is on utilising existing regulatory frameworks and issuing guidelines, with increasing activity at the state level. The absence of a single federal framework has not created a vacuum but rather has spurred a flurry of targeted legislative action by the states. This has resulted in a complex and varied “patchwork” of laws addressing specific high-risk areas, such as deepfakes and algorithmic discrimination, making cross-state compliance a significant challenge for companies.

Key Characteristics:

  • Sector-Specific Approach: Regulation tends to address AI applications within specific sectors (e.g., finance, healthcare, employment) rather than a broad horizontal law.
  • Emphasis on Guidance and Principles: The federal government has issued executive orders and frameworks emphasizing principles of safety, security, trustworthiness, and responsible innovation in AI development and deployment.
  • Focus on Existing Regulatory Bodies: Agencies like the Federal Trade Commission (FTC), Equal Employment Opportunity Commission (EEOC), and Consumer Financial Protection Bureau (CFPB) are asserting their existing authority to address AI-related issues within their respective domains. 
  • Growing State-Level Activity: In the absence of comprehensive federal legislation, many states are taking the initiative to enact AI-specific laws, often focusing on areas like algorithmic bias in hiring, deepfakes, and consumer protection.
  • Emphasis on Innovation: There’s a strong emphasis on fostering AI innovation and avoiding regulations that could stifle technological advancement.

Key Laws, Regulations, and Initiatives:

  • The National AI Initiative Act of 2021 established a coordinated federal strategy for AI research and development which includes federal investment in research and development. 
  • Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 2023): This broad executive order directs federal agencies to establish AI safety and security standards, protect privacy, advance civil rights, and promote innovation. It covers areas like critical infrastructure, cybersecurity, and the use of AI in federally funded projects.
  • AI Risk Management Framework (NIST): The National Institute of Standards and Technology (NIST) has developed a voluntary framework to help organisations manage risks associated with AI systems.
  • Blueprint for an AI Bill of Rights (White House, October 2022): This outlines principles for the design, use, and deployment of automated systems to protect the public.
  • State-Level Legislation: Various states have enacted or proposed laws addressing specific AI issues. Examples include:
    • Colorado’s AI Act (2024): A more comprehensive state-level law.
    • New York City’s law on automated employment decision tools (2023): Requires bias audits for AI used in hiring and promotion (a sketch of the kind of impact-ratio calculation used in such audits follows this list).
    • Tennessee’s ELVIS Act (2024): Protects against audio deepfakes and voice cloning.
    • California Consumer Privacy Act (CCPA), as amended: Includes provisions relevant to automated decision-making.
  • Proposed Federal Legislation: Several AI-related bills have been introduced in Congress, focusing on areas like research and development, safety, and transparency, but no comprehensive federal AI law has been enacted yet.
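To illustrate the kind of calculation a bias audit typically involves, here is a minimal sketch (our own illustration; the New York City rules and accompanying guidance define the required methodology and demographic categories in detail):

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Illustrative impact-ratio calculation of the kind used in bias audits.

    `outcomes` is a list of (group, selected) pairs, e.g. ("group_a", True).
    The ratio for each group is its selection rate divided by the highest
    group selection rate; values well below 1.0 flag potential adverse impact.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: 40% vs 20% selection rates -> impact ratio 0.5 for the second group.
sample = [("a", True)] * 4 + [("a", False)] * 6 + [("b", True)] * 2 + [("b", False)] * 8
print(impact_ratios(sample))  # {'a': 1.0, 'b': 0.5}
```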

Additional potential federal and state initiatives to regulate AI:

  • The Algorithmic Accountability Act of 2022, if passed, would direct the Federal Trade Commission to require large technology companies to conduct impact assessments of their automated decision systems for bias and effectiveness.
  • The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA), introduced in November 2023, is a bipartisan legislative proposal aimed at establishing a governance framework for AI in the U.S. Its key goals are to foster innovation while ensuring transparency, accountability, and security in AI applications, particularly in high-risk areas like critical infrastructure.
  • The U.S. Department of Defense’s Joint Artificial Intelligence Center (JAIC) and the National Security Commission on AI (NSCAI) have been advancing AI research and application for defence purposes. The 2021 report from the NSCAI emphasises the need for U.S. leadership in AI, particularly in military and national security contexts.
  • In addition, various federal agencies are tasked with supporting the AI Initiatives. For example, the Equal Employment Opportunities Commission (EEOC) has issued technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions. The United States Patent and Trademark Office (USPTO) is also supporting AI innovation in intellectual property. 
US AI law tracker: White & Case

The National AI Initiative Act establishes a National Artificial Intelligence Research and Development Strategic Plan, developed by the National Science and Technology Council (NSTC). A National AI Advisory Committee (NAIAC) has also been set up to advise the President on AI policy and strategy.

The NAIAC goal is to advise the President on the intersection of AI and innovation, competition, societal issues, the economy, law, international relations, and other areas that can and will be impacted by AI in the near and long term.

The NAIIA funds several AI programs and workstreams, including:

  • National AI Research Institutes:
    • The focus is on developing AI data sets and testbeds;
    • researching the social, economic, health, scientific, and national security implications of AI;
    • broadening participation in AI research and training through outreach.
  • AI Innovation Hub program
  • AI Workforce Development program
  • National AI Ethics Board
  • AI Risk Assessment Framework
  • AI Transparency and Accountability Framework

The AI in Government Act establishes a number of new requirements and programs, including:

  • A requirement for all federal agencies to develop AI governance plans
  • A new AI R&D program at the National Science Foundation (NSF)
  • A new AI workforce development program at the Office of Personnel Management (OPM)
  • A new AI ethics board
  • A new AI risk assessment framework
  • A new AI transparency and accountability framework

The AI in Government Act is designed to help the US maintain its leadership in AI while also ensuring that AI is used responsibly and ethically. 

Some of the key goals of the AI in Government Act:

  • To accelerate the development and use of safe and beneficial AI technologies
  • To ensure that the US government maintains its leadership in AI research and development
  • To promote the responsible and ethical use of AI
  • To prepare the US workforce for the AI economy

 In the 2024 and 2025 legislative sessions, nearly all U.S. states and territories introduced or enacted legislation on AI, with a particular focus on three key areas: deceptive media (deepfakes), consumer protection, and government use.

  • Deepfakes and Election Integrity. Lawmakers have prioritized addressing the fraudulent use of AI to create deceptive audio or visual media, particularly in the context of elections. States like New Hampshire and New York have created new crimes or civil penalties for the distribution of materially deceptive media intended to influence an election or injure a candidate’s reputation. These laws often require a disclosure that a political advertisement was “generated or substantially altered using AI“. Beyond politics, legislation in states like South Dakota and California has expanded existing child pornography laws to include “digitally altered or generated” matter depicting minors.
  • Algorithmic Discrimination and Consumer Protection. The need for consumer protection from biased or harmful AI has driven a number of state laws. Colorado’s landmark comprehensive AI law requires developers and deployers of high-risk AI systems to “use reasonable care to prevent algorithmic discrimination,” which is defined as any unlawful differential treatment based on a protected class. California’s AB 2013 requires generative AI developers to publicly disclose the data used for training their systems before making them available to state residents. Utah’s Artificial Intelligence Policy Act also requires certain entities to disclose their use of generative AI. In the area of privacy, new laws like the New York Child Data Protection Act and amendments to the California Consumer Privacy Act (CCPA) prohibit the use of a minor’s personal data to train an AI system without explicit consent.
  • Government Use. A third major trend is the regulation of AI within state government operations. Many states have established task forces or committees to study the impact of AI and develop best practices. Maryland, for instance, requires its Department of Information Technology to adopt policies for the development, procurement, and deployment of AI systems by state government units. Alaska has mandated that its Office of Information Technology submit a prioritized plan for the use and benefits of AI projects.

This state-by-state approach presents a complex compliance challenge, as different states address unique aspects of AI governance with varying requirements. A company operating across multiple jurisdictions must navigate this intricate landscape, creating a fragmented compliance burden that is a stark contrast to the harmonized, top-down EU model.

| State | Bill/Act Name | Key Provision | Status |
| --- | --- | --- | --- |
| Alabama | H 172 | Makes it a crime to distribute materially deceptive media to influence an election. | Enacted |
| California | A 2013 | Requires developers of generative AI systems to post documentation about the data used for training on their website. | To governor |
| California | A 2355 | Requires political advertisements generated or substantially altered with AI to include a specified disclosure. | To governor |
| California | A 1873 | Makes it a misdemeanor or felony to knowingly develop, duplicate, or exchange AI-generated representations depicting a minor in sexual conduct. | Pending |
| Colorado | SB 205 | Requires developers and deployers of high-risk AI systems to use reasonable care to prevent algorithmic discrimination. | Enacted |
| New York | A 7904 | Requires disclosure of AI use in political communications and directs the state board of elections to define AI-generated content. | Pending |
| Utah | SB 149 (Artificial Intelligence Policy Act) | Requires specific entities to disclose the use of generative AI. | Enacted |
| New York | A 10494 | Imposes liability for financial or other demonstrable harm resulting from misleading, incorrect, or contradictory information from a chatbot. | Pending |
| New Hampshire | Fraudulent use of Deepfakes | Creates the crime of fraudulent use of deepfakes and establishes a cause of action. | Enacted |



The central legal debate in the intellectual property space revolves around whether the use of copyrighted works to train generative AI models constitutes “fair use” under U.S. copyright law. Recent court rulings in 2025 have provided significant, though at times seemingly contradictory, guidance on this issue.

In landmark rulings in the Northern District of California, judges in two cases, Bartz v. Anthropic and Kadrey v. Meta, issued decisions that partially favored the defendant AI companies. The courts found that using copyrighted works to train generative AI models was “exceedingly transformative” and could therefore qualify as fair use. The rationale was that the purpose of the training was not to reproduce the original works but to teach the model to generate new, unrelated text. However, these decisions were not unqualified victories for the AI industry. The court in Bartz v. Anthropic explicitly found that a different act—downloading pirated copies of books to build a training “library”—was not fair use, creating a crucial distinction between the act of training and the legality of the data’s provenance. This ruling establishes that while the act of training itself might be considered transformative, the method of acquiring the training data—specifically, through piracy—is not protected and could expose developers to “crippling statutory damages”.

A contrasting precedent was established in the long-running case of Thomson Reuters v. Ross Intelligence. In a decision issued in February 2025, a federal judge rejected the fair use defense for an AI startup that scraped copyrighted legal content from Thomson Reuters’ Westlaw platform. The court found that Ross’s use was not transformative, was commercial in nature, and, crucially, harmed the potential market for AI training data. The judge explicitly rejected the “circular” argument that since the use might be fair use, there is no market to license the content for that purpose.

These cases collectively demonstrate that fair use decisions are “highly fact-specific” and depend heavily on the evidence presented. They highlight a fundamental tension in the legal landscape: courts must weigh the “transformative” nature of AI training against its potential to “significantly dilute the market” for the original works. The legality of a company’s training data acquisition methods is now a paramount consideration that can undermine a fair use defense regardless of the transformative nature of the AI model.
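Reflecting that distinction, the sketch below shows how a training pipeline might gate data on provenance before anything is used for training. This is our own illustration: the `SourceRecord` fields and acquisition labels are assumptions for demonstration, not a schema drawn from any ruling.

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    path: str
    acquisition: str          # e.g. "purchased", "licensed", "public_domain", "unknown"
    licence: str | None = None

APPROVED_ACQUISITIONS = {"purchased", "licensed", "public_domain"}

def filter_training_sources(records: list[SourceRecord]) -> list[SourceRecord]:
    """Keep only sources whose provenance has been positively verified.

    Material of unknown or pirated origin is excluded (and logged for review)
    regardless of how transformative the downstream training may be.
    """
    approved, rejected = [], []
    for record in records:
        (approved if record.acquisition in APPROVED_ACQUISITIONS else rejected).append(record)
    for record in rejected:
        print(f"excluded (provenance={record.acquisition}): {record.path}")
    return approved
```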

 

Comparative Analysis of Landmark AI Copyright Cases (2025)

 

| Case | Court/Jurisdiction | Key Question | Ruling on Fair Use | Rationale | Broader Implications |
| --- | --- | --- | --- | --- | --- |
| Bartz v. Anthropic | N.D. California | Is training an LLM on copyrighted books fair use? | Yes (for training on purchased works); No (for using pirated copies). | The training process was “exceedingly transformative.” However, the illegal acquisition of data from pirate sites was not fair use, exposing the company to significant liability. | The legality of training data provenance is now a critical factor. |
| Kadrey v. Meta | N.D. California | Is training an LLM on copyrighted books fair use? | Yes (for training). | The use was deemed “highly transformative”, but the ruling was heavily based on the plaintiffs’ failure to provide evidence of market harm. | The decision provides a “roadmap” for plaintiffs to win by providing evidence of market dilution. |
| Thomson Reuters v. Ross Intelligence | District of Delaware | Is scraping copyrighted legal headnotes to train a legal research AI fair use? | No. | The use was commercial, not transformative, and harmed the potential market for licensing training data. | Establishes a precedent that an “AI training data market” exists and that scraping can constitute direct infringement. |



United Kingdom

Recent & Emerging Developments

  1. AI Opportunities Action Plan (Jan 2025) fully adopted
    The UK Government published the AI Opportunities Action Plan on 13 January 2025 and committed to implementing all 50 of its recommendations. These focus on (a) investing in AI foundations, (b) boosting cross-economy adoption, and (c) fostering home-grown AI innovation. (GOV.UK) For example, the government has committed to establishing AI growth zones to speed up planning for AI infrastructure, and there are targets for job creation and private investment as part of this strategy. (GOV.UK)

  2. Artificial Intelligence (Regulation) Bill [HL] (reintroduced in 2025)
    A Private Member’s Bill was reintroduced on 4 March 2025 (by Lord Holmes) aiming to introduce AI-specific regulation. Key proposals include creating an AI Authority to coordinate regulatory responses, introducing regulatory sandboxes for businesses to test AI safely, and requiring an “AI officer” in organisations developing, deploying, or using AI. (Kennedys Law)

  3. Data (Use and Access) Act 2025 & copyright / training data issues
    The Data (Use and Access) Act 2025 received Royal Assent on 19 June 2025. (Wikipedia) There were amendments proposed in the House of Lords about AI / copyright / training data transparency (for example, requiring disclosure of training data sources), but those were removed in the House of Commons. (Wikipedia) Also, the government ran a consultation (December 2024 to February 2025) on how UK copyright law should be reformed to deal with AI training data, including proposals for a data mining exception and improved transparency for rights holders. (Global Practice Guides)

  4. Regulatory and Institutional Enhancements

    • The Regulatory Innovation Office (RIO) was established (October 2024) to help bridge between government, regulators and business, to speed up regulatory decisions in emerging tech, including AI. (Wikipedia)

    • Regulators are being asked to publish strategic updates about their approaches to AI. The government has written to sector regulators asking them to outline how they are interpreting the AI White Paper principles in their regulatory regimes. (White & Case)

    • On transparency and accountability in public sector algorithmic decision-making, standards like the Algorithmic Transparency Recording Standard are being applied, especially for systems that significantly affect the public. (Global Practice Guides)

  5. Legislative timing & delays
    There has been reporting that the government’s broader UK AI Bill (to provide statutory regulation beyond the private members’ bill) has been delayed until summer 2026. The delay appears to be due in part to needing more time to align with other jurisdictions (especially copyright issues) and to respond to stakeholder concerns from creative sectors. (kslaw.com)

Key Trends & Implications

  • The UK is still favouring a principles-based, sector-led regulatory framework over a comprehensive, AI-specific law. The White Paper (2023) and its follow-on documents remain foundational. (Global Practice Guides)

  • There’s growing pressure, both from Parliament and industry/creative sector stakeholders, to codify certain principles (especially around safety, transparency, copyright/data access), and to give binding effect to commitments currently voluntary. The AI (Regulation) Bill reflects that pressure. (Kennedys Law)

  • The handling of copyright & training data is becoming a major flashpoint: proposed data-mining exceptions, transparency around which copyrighted works are used to train AI, and possible obligations on model providers are being debated. The removal of the relevant amendments in the Commons during the passage of the Data (Use and Access) Act suggests a cautious approach but ongoing tension. (Wikipedia)

  • The UK is closely watching regulatory developments elsewhere (notably the EU AI Act), both as a benchmark and in order to ensure UK companies can operate across jurisdictions. There is some convergence emerging, particularly around “high risk” or frontier/general purpose AI models. (KPMG)

Leading AI Knowledge Hubs

FAQ: Frequently Asked Questions about AI & Law

Copyright

Who owns AI artwork?

Check the terms of your licence: if you pay for an AI service, you will usually own the work. For example, Midjourney gives ownership of AI-generated art to paid subscribers, but free users do not have ownership rights in their work.

Copyright arises automatically in your creative work, assuming it is sufficiently original and creative.

However, at the moment the US Copyright Office will not let you register AI-generated work for additional copyright protection unless the use of AI is minimal. We expect this to change soon, but in the meantime your ability to sue for infringement in the USA is limited.

In the UK, no registration is required. Creative people using such tools (i.e. inputting chosen words or seeds) should be able to benefit from copyright protection for their AI-created works if they are sufficiently original (i.e. a minimal amount of creative work is involved). This is a technology-neutral test, unlike the US test, which currently requires that the use of AI be minimal.

Under EU law (according to Directive 93/98 and Directive 2006/116) only human creations are protected, which can also include those for which the person employs a technical aid, such as a camera. “In the case of a photo, this means that the photographer utilises available formative freedom and thus gives it originality“.

Eva-Maria Painer v Standard VerlagsGmbH and Others.

In fact, EU law expressly provides that photographs do not need to meet any additional originality threshold. It remains to be seen whether a specific requirement will be introduced for AI-created works but in the meantime, under EU law, works made by humans using AI technologies can meet the threshold for copyright protection insofar as they are original and have some human authorship.

Under US law, entirely automated AI works (art, photography, music) cannot be registered for copyright protection. In the UK, the developer of the AI program can be the ‘author’ in such cases for copyright purposes, and no registration is required.

In the UK, the creator of an AI tool may be able to benefit from copyright protection for 50 years from the year of creation for any autonomously created work.

In the EU, (according to Directive 93/98 and Directive 2006/116) only human creations are protected. 

Can I use AI-generated work commercially?

Yes, whether you have registered or unregistered copyright, if you have the licence rights to the work then you can use it commercially. Check the licence the AI tool grants you and which rights it reserves.

Can I use someone else’s work or style when creating with AI tools?

This is a very complex area of law. It depends on the amount of use you make of the other work, whether the other work or style is protected by law, and whether the work you make is considered derivative.

First, check the licence of the work you are using. Is it public, or Creative Commons with restrictions? Often, Creative Commons-licensed work will require that you pass on the same licence for any work you create using it. Sometimes commercial use is permitted and other times it is not.

We suggest you read our detailed article on Copyright & AI Art to start with, and seek advice if needed.

You can also ask Bard or Co-Pilot 🙂

Can I be sued for copyright infringement when using or developing AI tools?

Yes. If you use AI tools to make derivative works using copyright material (i.e. material that is not in the public domain), you can be sued for infringement by the copyright holder.

If you are a developer and your tools are considered to encourage, support or enable copyright breach, then you could face infringement proceedings for complicity in, or facilitation of, breaches of copyright law.

Recent cases in the US have however suggested that the fact LLMs, foundation models and generative art AI tools are trained on public data – which includes copyright data – is not sufficient in itself to prove copyright infringement.

For example, in Andersen et al. v. Stability AI LTD et al., N.D. Cal. 3:23-cv-00201 a Californian court struck out a number of claims as part of a class action suit made against Midjourney, Stability AI and DeviantArt on the basis that: (i) many of the copyright images were not registered (as required under US law for any legal action for infringement) and (ii) it is almost impossible to draw an inference of infringing output from such a large image data set: 

“The other problem for plaintiffs is that it is simply not plausible that every Training Image used to train Stable Diffusion was copyrighted (as opposed to copyrightable), or that all DeviantArt users’ Output Images rely upon (theoretically) copyrighted Training Images, and therefore all Output images are derivative images. Even if that clarity is provided and even if plaintiffs narrow their allegations to limit them to Output Images that draw upon Training Images based upon copyrighted images, I am not convinced that copyright claims based [on] derivative theory can survive absent ‘substantial similarity’ type allegations. The cases plaintiffs rely on appear to recognize that the alleged infringer’s derivative work must still bear some similarity to the original work or contain the protected elements of the original work.” (Judge William H. Orrick)

Andersen vs Stability AI et al


The judge rightly states that since it is almost impossible to produce an identical image that exists within the training data, it will be very difficult for artists to prove that an image produced using Midjourney et al was based on their work. Simply training AI models on large datasets is not in itself sufficient to create infringing derivative works, i.e. in simple terms copyright infringement requires material or substantial copying.

 

The judicial front has transitioned from a period of intellectual property and copyright litigation defined by broad arguments to one of nuanced, fact-specific court rulings.

Landmark cases in the U.S. are now creating a body of case law that refines the application of “fair use” doctrine, particularly in the context of training data, and is shaping a new legal standard for generative AI.

In the United States, the legal treatment of generative AI and training data is moving from broad, theoretical arguments to more fact-specific rulings. Courts are increasingly applying the traditional fair use doctrine to AI disputes, refining its contours for modern technologies. This shift began with Google LLC v. Oracle America, Inc. (2021), where the Supreme Court held that Google’s reuse of Java API code to build Android was fair use. The Court emphasised that even verbatim copying may qualify as fair use when it is repurposed in a highly transformative way that enables new innovation, laying groundwork for later arguments about AI training data.

The Supreme Court then tightened the boundaries of “transformative” use in Andy Warhol Foundation v. Goldsmith (2023). There, Warhol’s use of a photograph of Prince was not considered fair use when licensed to a magazine, because both works served the same commercial purpose: creating portraits of Prince for publication. The ruling underscored that transformation requires more than a change in style; the use must serve a different purpose or market function. This case is often cited in AI disputes as a warning that merely altering an expressive work is not enough to secure fair use protection if the output competes with the original market.

More recently, generative AI has been directly tested in the courts. In Bartz v. Anthropic (2025), a federal judge ruled that training AI models on lawfully acquired copyrighted works can qualify as fair use, calling such training “quintessentially transformative.” However, the same court held that training on pirated works is not fair use and must proceed to trial. This nuanced decision highlights how the legality of AI training depends on factual details such as the source of the data, the way it is processed, and the extent to which outputs reproduce original expression.

A contrasting precedent was established in the long-running case of Thomson Reuters v. Ross Intelligence. In a decision issued in February 2025, a federal judge rejected the fair use defense for an AI startup that scraped copyrighted legal content from Thomson Reuters’ Westlaw platform.

Together, these cases show a clear evolution: U.S. courts are no longer treating AI and copyright disputes as broad policy questions alone, but are developing case-by-case standards. The emerging pattern suggests that fair use in AI will turn on whether training creates a new and distinct purpose (as in Google and Bartz) or merely repackages existing expression for the same commercial use (as in Warhol). The result is a growing body of precedent that is beginning to shape a new, more precise legal standard for generative AI.

Data 

What legal issues should I consider when developing and licensing AI tools?

There are a range of issues related to data protection, data security, trademarks, copyright, and ethics to consider when developing or licensing AI.

Data protection and human rights issues are the key areas of focus in developing safe, personalised natural-language AI assistants.

One of the greatest risks arises from leakage of personally identifiable information and data that can be used when combined with other data to identify a person. That data could then be seen by other users of the AI tools or it could inform the responses of the AI tools.

The developers of Bard, ChatGPT and other tools are still figuring out how to build and deploy personalised digital assistants using natural language AI in a safe way. Once they do this, it will make AI much more useful and engaging for people. See: The Singularity Approaches for a brief consideration of this issue.

Additional risks relate to how selective data is used or how data is used selectively in a way that discriminates unfairly against people. There have already been concerns raised about people losing their benefits and other civil rights due to automated decision-making involving AI tools.

UK officials use AI to decide on issues from benefits to marriage licences

New AI laws, such as the EU AI Act, should provide greater clarity to copyright holders as to whether their copyright material is being used for training GPAI (general-purpose AI) models, as part of the AI transparency obligations. These transparency obligations provide additional support for copyright holders claiming infringement in the use of their data where they have opted out of permitting such use.

For example, the EU AI Act explicitly states that GPAI providers must observe opt-outs made by rights holders under the Text and Data Mining exception of Art. 4(3) of Directive (EU) 2019/790 where an opt-out has been declared in a machine-readable form by an organisation. The effect of a valid opt-out is that such content may not be used for AI training of GPAI.
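By way of a rough sketch (our own illustration; “ExampleAIBot” is a hypothetical crawler name, and robots.txt is only one of several machine-readable reservation mechanisms alongside TDM reservation metadata and site terms), a data-collection pipeline might check for an opt-out before adding a page to a training corpus:

```python
from urllib import robotparser
from urllib.parse import urlsplit

def may_crawl_for_training(page_url: str, crawler_user_agent: str = "ExampleAIBot") -> bool:
    """Honour a robots.txt disallow as one machine-readable opt-out signal.

    Real-world compliance is broader, but the principle is the same: check
    for a machine-readable reservation before adding content to a corpus.
    """
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses robots.txt
    return parser.can_fetch(crawler_user_agent, page_url)

# Usage: only keep pages whose operators have not opted out.
# corpus = [url for url in candidate_urls if may_crawl_for_training(url)]
```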

Ethics

Is it ethical to use AI to create artwork?

Yes, humans have used technology since the dawn of time and AI tools are just a new technology created by humans. What matters is how we use it and how we ensure it is used fairly and responsibly, particularly given the potential impact on jobs and income for many professionals (including creative artists and lawyers!).

There are a very wide range of issues related to data protection, data security, trademarks, copyright, law and ethics to consider when developing or licensing AI. Review our AI Governance Guide (above) for more information and also our guide to Copyright, AI and Generative Art

Are there risks in including parts of someone else’s work in my AI creations?

Yes, there can be – particularly if part of the work can be seen or heard in your work. The law of ‘fair use’ (or ‘fair dealing’ in the UK) can be very confusing, as there is no black-and-white test.

If you use an excerpt or part of someone else’s work then whether that is fair depends on the context, your intention and whether it causes harm to the creator or owner. Ultimately, the test should be a sensible personal one – ask yourself questions like:

Would it be OK if someone did the same with my work?

Am I willing to take it down if the artist that I admire is upset?

We should not get so caught up in what we do that we would be willing to ignore how it affects artists that we admire.

See the article Copyright, AI and Generative Art for more info.

Key ethical issues to consider include:

  • Access: we need to mitigate the risks of economic discrimination.
    • Given the extraordinary competitive advantage these tools give, how will we ensure that all people are able to afford access to at least a basic level of these powerful tools?
  • Bias: The quality of output is a function of the data that goes in and what is selected for (the weighting).
    • How do we protect against the use of selective data or of data selectively, in a way that discriminates unfairly against people?  
  • Coding:
    • Who decides on the prime laws for AI and the universal framework core code-base required for ethical purposes?
    • What are the appropriate ways to mitigate the risks of autonomous algorithmic iteration?
  • Control:
    • Are we sure how AI tools work and how to maintain control over them?
    • Could they become autonomous? 
    • How will we know when algorithmic tools may be close to creating their own identities or self-coded purposes?
  • Competition (Reality check):
    • How do we ensure that AI is developed in such a way that ethical AI has a competitive advantage and can be used to protect against misuse of AI by unethical humans?
  • Impact: AI will have a major impact on low-skilled and higher-skilled jobs.
    • Where will the newly unemployed find work and ensure they have sufficient income? 
  • Privacy: AI tools could have access to significant amounts of personal data depending on how they are structured and interact with other technologies you use. This makes knowledge, consent and security the key issues to be considered.
  • Safety: Given their extraordinary power, AI tools need to be better at protecting humans than humans. This is relevant across all business sectors, in manufacturing and in armaments.

AI Governance

Overview

The term “AI” has been broadly applied to various technologies, many predating the recent surge of interest in generative AI.  Algorithmic data processing and machine learning tools for pattern recognition and forecasting have been utilised for years, even before the introduction of prominent generative AI tools like ChatGPT, Gemini, Co-pilot, and Midjourney.

This broad application of the term can lead to confusion in categorising and managing these technologies, as some might not strictly fit the definition of AI, with the term sometimes being used loosely for marketing purposes. 

  • AI is the overarching field, while Machine Learning (ML) is a specific approach within AI.
  • Traditional algorithms are rigid and follow predefined rules, while AI systems (particularly those using ML) can adapt and learn from new data.
  • AI systems often use complex algorithms, but they go beyond simple computational processes to mimic human thought and behaviour.

Leaving aside these definition issues, there are a significant number of laws and regulatory frameworks that impact the use of AI systems and AI tools. One of the most relevant is the General Data Protection Regulation (GDPR), the European Union’s framework governing how organisations handle personal data. See the Ramparts GDPR Information Page.

As artificial intelligence (AI) technologies advance, organisations must align AI use with GDPR to ensure compliance, avoid penalties, and protect individual privacy. In addition, there are a significant number of AI-specific frameworks in China and the EU, whilst the US is also moving forward with federal, state-level and governmental AI policies and laws.

AI introduces unique challenges in data processing and decision-making due to its handling of large datasets, which often contain personal data and result in automated outputs (and potentially automated decision-making). While anonymisation tools and processes can mitigate these issues, the best use cases for AI tools and systems often involve personalised responses to user queries, content creation, and interaction. The real value of AI assistants for consumers will only be unlocked when they know our interests, dislikes, abilities, weaknesses and loves, understand our idiosyncrasies, and have persistent memory to help us on our life journey.
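
Where personal data does have to be passed to an AI tool, a common mitigation is to pseudonymise direct identifiers first. The minimal Python sketch below (standard library only) illustrates the idea; the field names, the keyed-hash approach and the secret handling are illustrative assumptions rather than a recommended design.

```python
import hashlib
import hmac

# Illustrative secret; in practice this would come from a secrets manager.
PSEUDONYMISATION_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYMISATION_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "query": "Suggest a savings plan"}

# Only the free-text query is passed to the AI tool; the identifier becomes a
# stable token so responses can still be linked back to the user internally.
safe_record = {
    "user_token": pseudonymise(record["email"]),
    "query": record["query"],
}
print(safe_record)
```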

Further reading:

White & Case – AI Watch Global Regulatory Tracker

Ramparts’ comprehensive guide to understanding AI Governance & AI Governance Frameworks:

PDF version: AI Governance

Webpage version: AI Governance

AI has permeated various industries, transforming how businesses operate, from multinational corporations to small and medium-sized enterprises (SMEs). The adoption and development of AI differ significantly between these two groups, driven by resources, scale, and strategic focus. 

 

Multinationals

Multinationals typically have substantial resources and a global footprint, allowing them to invest heavily in AI research and development. They leverage AI across their vast operations to enhance efficiency, drive innovation, and gain a competitive edge.  

Customer Service & Experience: 

  • AI-powered chatbots, virtual assistants and personalised videos and messaging  tools are deployed to handle customer queries, provide personalised recommendations and even intervene when there are indications of problem behaviours.  
  • Use Cases: 
    • McDonald’s has developed automated ordering using IBM’s Watson. 
    • Amazon has integrated personalised recommendations and reorder prompts relying on AI enhanced tools. 
    • YouTube, Spotify and many other social and entertainment platforms use AI to personalise recommended content.
    • Online gambling companies are experimenting with the use of AI videos and messaging to provide personalised intervention when they see potential problem gambling behaviours.
  • Supply Chain Optimisation: Machine learning algorithms analyse vast amounts of data to predict demand, optimise inventory levels, and streamline logistics, resulting in cost savings and improved operational efficiency.  
  • Use Cases:
    • Walmart deploys AI technologies to manage inventory levels more effectively and enhance customer service. AI systems predict product demand to optimise stock levels. Walmart has experimented with AI-driven robots to assist in inventory management and customer service.
  • Product Development & Innovation: AI is used to analyse market trends, consumer preferences, and competitor activities to accelerate product development and identify new market opportunities.  Perhaps the most transformative uses of AI so far have been in pharmaceutical R&D and in the exploration of new chemical structures.  
  • Use Cases:
    • Google’s DeepMind AI algorithms can design entirely new chemical structures with target properties, unlocking the potential for novel therapeutics that may not be discovered through traditional approaches.  
    • Roche and other pharmaceutical companies are using AI to analyse much more data and to predict the effectiveness of different compounds, enhancing and  accelerating drug discovery and time to market for new drugs.
  • Risk Management & Fraud Detection: AI algorithms identify patterns and anomalies in financial transactions, helping detect fraud and mitigate risks effectively. 
  • Use Cases:
    • Financial intermediaries and institutions like MasterCard, Visa, Amex, HSBC, and JP Morgan use AI tools to analyse, monitor and even intervene in real time to prevent potential fraudulent transactions.
  • Talent Acquisition & Management: AI-powered tools screen resumes, identify suitable candidates, and provide insights into employee performance and engagement, enhancing the recruitment and retention process.  

SMEs

While SMEs may have limited resources compared to multinationals, they are increasingly adopting AI to optimise operations, improve customer experiences, and compete with larger players in all of the areas in which those larger organisations use AI.

In a report published in April 2024, the Bipartisan Policy Center highlighted trends in the use of AI by SMEs, with accounting, customer management and marketing as the leading use cases, followed by consumer insights, workforce management, project management and others.

SMEs tend to focus on easy-to-deploy, cloud-based AI tools that provide a quick ROI and enable them to scale without continuously adding to their employee base or incurring ever-increasing outsourcing costs.

Further reading:

The AI Governance Alliance’s debut report lays out strategies for equitable AI | World Economic Forum

101 real-world gen AI use cases from the world’s leading organizations | Google Cloud Blog

One of the newest uses of AI tools is generative AI. Generative AI refers to AI systems that can create new content, such as text, images, music, and code, by learning patterns from existing data. Unlike traditional AI, which typically analyses or classifies data, generative AI can produce original outputs based on prompts. It uses techniques like deep learning and neural networks, particularly models like GPTs or GANs, to generate content that mimics human creativity. However, it has also given rise to significant concerns about the impact on human creativity and jobs, and about whether copyright and other intellectual property laws need to be updated to exclude or include such AI-generated works.

One key reason businesses adopt generative AI is efficiency. It significantly reduces the time and resources required for repetitive or creative tasks, enabling teams to focus on higher-level strategy and innovation. For instance, in marketing, AI-generated content like social media posts or product descriptions can be produced at scale, quickly adapting to customer trends. Generative AI can also provide personalised customer experiences. By analysing large datasets, it helps create tailored solutions, from chatbots to product recommendations, enhancing customer engagement.

We anticipate the growing use of generative AI in highly technical fields such as medicine, accounting, tax advisory, law (particularly in litigation), and financial analysis, as well as in sectors like engineering, architecture, and scientific research. These fields benefit from AI’s ability to process vast amounts of complex, diverse data quickly and efficiently, enabling professionals to generate preliminary assessments, insights, or recommendations. As AI evolves, it will increasingly assist in automating data analysis, enhancing decision-making, and improving accuracy across a wide range of industries that rely on the synthesis and interpretation of large datasets.

Further reading:

Copyright, AI and Generative Art – Ramparts

AI governance refers to the frameworks, rules, and standards that guide the development and use of AI systems to ensure they are safe, ethical, and aligned with societal values. It involves:

  • Establishing oversight mechanisms to address risks like bias, privacy infringement, and misuse
  • Fostering innovation and trust in AI technologies
  • Engaging diverse stakeholders in the governance process
  • Mitigating human biases and errors in AI development
  • Implementing responsible and ethical AI practices
  • Addressing risks through policy, regulation, and data governance
  • Aligning AI behaviours with ethical standards and societal expectations

The goal is to create a structured approach that maximises AI’s benefits while minimising potential harms and ensuring accountability in AI development and deployment.

The rapid advancement of AI brings numerous ethical implications, particularly in areas like digital amplification, cybersecurity, bias, job displacement, data privacy, the ‘digital divide’ and modern warfare. 

Digital Amplification

Digital amplification refers to AI’s ability to enhance the reach and influence of digital content, often through algorithms that prioritise certain information, shape public opinion, and amplify specific voices. This phenomenon raises ethical concerns about fairness, transparency, and potential misinformation. To counteract negative effects, businesses can encourage diverse participation in data collection and decision-making, promote open dialogue, and regularly review AI systems for fairness.

Digital Divide 

The digital divide refers to the gap between those who have access to modern information and communication technology and those who do not. AI can exacerbate this divide, as access to AI technologies often requires significant resources. For instance, advanced AI tools and education are more accessible in developed countries, leaving developing nations at a disadvantage. This disparity can lead to unequal opportunities in education, healthcare, and economic growth. Efforts to bridge this divide include initiatives like Google’s AI for Social Good, which aims to make AI technologies more accessible and beneficial to underserved communities. 

Job Displacement

AI’s ability to automate tasks traditionally performed by humans raises significant concerns about job displacement. For instance, in manufacturing, robots and AI systems can perform repetitive tasks more efficiently than humans, leading to reduced demand for human labour. A notable example is Amazon’s use of AI-driven robots in warehouses, which has streamlined operations but also led to concerns about job losses. While AI can create new job opportunities, the transition period can be challenging for workers needing to reskill. 

Bias and Discrimination 

The risk of perpetuating existing unfair human bias is high. For example, in 2018, Amazon developed an AI-powered recruiting tool that showed bias against female candidates, highlighting how AI can perpetuate existing biases in hiring processes. Generative AI Systems built on preexisting human data and decisions could become a high-tech echo chamber for our historic prejudices. 

Cybersecurity 

AI plays a dual role in cybersecurity, capable of both mitigating and amplifying cybersecurity and spyware risks. 

  • Mitigation: AI enhances cybersecurity by enabling real-time threat detection and response. Machine learning algorithms can analyse vast amounts of data to identify patterns and anomalies indicative of cyber threats. For example, AI-driven systems can detect and respond to phishing attacks, malware, and unauthorised access attempts more swiftly than traditional methods. AI can also automate routine security tasks, freeing up human experts to focus on more complex issues. Companies like Darktrace use AI to create self-learning cybersecurity systems that adapt to new threats autonomously. A minimal anomaly-detection sketch follows this list.
  • Bad Actors: Conversely, AI can also be exploited by cybercriminals. AI-powered tools can automate and enhance the sophistication of cyberattacks. For instance, AI can be used to develop more effective phishing schemes, create adaptive malware, and conduct large-scale attacks like Distributed Denial of Service (DDoS) and deepfakes more efficiently. Additionally, AI can be used to bypass traditional security measures by learning and mimicking legitimate user behaviour or by hidden attacks on the very AI systems that are being used to manage email and data storage cybersecurity risks. Balancing these aspects requires robust AI governance and ethical guidelines to ensure AI technologies are used responsibly and effectively to protect against cyber threats while minimising the risk of misuse.
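
By way of illustration only, the sketch below uses scikit-learn’s IsolationForest (one of several possible techniques, chosen here as an assumption) to flag unusual activity of the kind the mitigation bullet describes; the feature set and the events themselves are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per event: [bytes transferred, login hour, failed attempts]
normal_events = np.array([
    [500, 9, 0], [650, 10, 0], [480, 11, 1], [700, 14, 0],
    [520, 15, 0], [610, 16, 1], [580, 10, 0], [630, 13, 0],
])
new_events = np.array([
    [600, 11, 0],        # looks like normal traffic
    [90_000, 3, 7],      # large transfer at 3am with repeated failed logins
])

# Train on traffic assumed to be benign, then score new events.
detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_events)
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - escalate to security team" if label == -1 else "ok"
    print(event.tolist(), status)
```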

Privacy Concerns 

AI systems often rely on vast amounts of data to function effectively, which can lead to privacy issues. For example, facial recognition technology used by law enforcement agencies can enhance security but also raises concerns about surveillance and the potential misuse of personal data. 

Political Actor Misuse 

The Cambridge Analytica scandal is a prominent case in which AI algorithms were used to harvest and exploit personal data from millions of Facebook users without their consent, highlighting the need for stringent data protection regulations and the risk of misuse by malicious political and governmental actors. 

AI In Warfare 

AI is increasingly integrated into military operations, offering both significant benefits and notable risks. 

  • Uses and Benefits: AI enhances military capabilities through improved decision-making, autonomous systems, and predictive maintenance. For example, AI-driven drones can conduct surveillance and reconnaissance missions, reducing the risk to human soldiers. AI algorithms can analyse vast amounts of data to provide real-time intelligence, helping military leaders make informed decisions quickly. Predictive maintenance, as used by the US Air Force, helps identify potential equipment failures before they occur, ensuring operational readiness. 
  • Risks: However, using AI in warfare also presents significant risks. Autonomous weapons systems, which can identify and engage targets without human intervention, raise ethical and legal concerns. There is a risk of AI systems making errors in target identification, potentially leading to unintended civilian casualties. The weaponisation of AI can lead to an arms race, with nations developing increasingly advanced AI technologies to gain a strategic advantage. This could destabilise global security and increase the likelihood of conflicts. Balancing these benefits and risks requires robust international regulations and ethical guidelines to ensure AI technologies are used responsibly in military contexts.

Addressing these ethical implications requires a balanced approach, involving robust regulations, ethical guidelines, and initiatives to ensure that AI benefits all of society equitably.

Further reading:

We have created a comprehensive guide to understanding AI Governance:

PDF version: AI Governance

Webpage version: AI Governance

Without effective governance, organisations deploying AI face a number of key risks:

  • Bias and Discrimination: AI systems trained on biased data sets can amplify existing societal biases, leading to discriminatory outcomes. This can damage an organisation’s reputation and cause legal liabilities.
  • Lack of Accountability: Without clear governance structures, it may be difficult to hold anyone accountable when AI systems fail. Organisations may struggle to explain or rectify situations where AI decisions negatively impact individuals or society.
  • Data Protection and Privacy Violations: Many AI systems rely on large amounts of personal data. If not properly governed, organisations may inadvertently violate data protection laws, such as GDPR, exposing themselves to hefty fines and reputational damage.
  • Human-centric AI and Human Rights: AI should be developed and deployed in a way that is good for the end-users and society more generally. Respect for and protection of fundamental human rights, the desire for human autonomy and the right of human creators to benefit economically from use of their work should be integral to the AI Governance Framework. 
  • Ethical Violations: AI decisions that conflict with societal norms or ethical standards can cause public outrage. For example, using AI in surveillance or facial recognition can raise concerns about privacy and civil liberties.
  • Operational Risks: Poorly governed AI systems can malfunction, make incorrect decisions, or disrupt critical operations. This could lead to costly mistakes, such as erroneous financial transactions, medical misdiagnosis, or faulty product recommendations.

AI Governance frameworks need to be process driven. The Alan Turing Institute has helpfully set out a detailed guide to Process Based AI Governance in Action and recommends the following key requirements:

  • Document ethical considerations and decisions: AI ethics guides moral conduct in AI development and use. The key principles are: Sustainability, Safety, Accountability, Fairness, Explainability and Data Stewardship.
  • AI Project Lifecycle: Outlines the stages where ethical questions arise and decisions are made, from project design and model development to system deployment and monitoring.
  • Risk Assessment and Mitigation: Organisations should conduct context-based risk assessments (COBRA) to identify and mitigate potential ethical risks throughout the project lifecycle.
  • Stakeholder Engagement: Engaging stakeholders is crucial for understanding the potential impacts of AI systems and ensuring that diverse perspectives are considered.
  • Operationalising Ethical Principles: Organisations need to translate ethical principles into practical guidelines, policies, and procedures. Core attributes help specify and operationalise principles within the project context.
  • Bias Self-Assessment: Conducting bias self-assessments helps identify and address potential biases in AI systems, particularly concerning fairness.
  • Data Protection and Intellectual Property: Organisations must consider data protection, privacy, transparency, and intellectual property implications when developing and deploying AI systems.
  • Monitoring, Evaluation, and Communication: Ongoing monitoring and evaluation are essential to ensure that AI systems remain aligned with ethical principles and to maintain public trust.

In addition, AI Governance frameworks need to consider the following key issues:

  • Scope of the AI Governance Officer or the AI Governance Committee (including procedures for decision making)
  • Permitted use of approved third party AI tools within your organisation 
  • Policy, procedure and rules for developing AI solutions internally and when working with suppliers
  • Independent standards which your organisation may wish to deploy (e.g. ISO/IEC 42001:2023 and ISO 37000:2021) 
  • Policies and Procedures:
    • Need for continuous risk assessment processes
    • Identification and analysis of risks (health, safety, fundamental rights)
    • Risk evaluation (scored for severity and likelihood) and mitigation strategies
    • Record-keeping (e.g. an AI Risk Register)
    • Reporting requirements to the AI Governance Committee / executive governing body
    • Periodicity of review
  • Stakeholder Compliance: 
    • Situating your AI Governance Framework within the various regulatory frameworks and sectoral guidance
    • Reporting requirements to regulators or capital markets.

Small and medium enterprises will find it difficult to implement and maintain a full-blown AI Governance Framework given the resources required. They should therefore focus on the essential requirements of an AI Governance Framework: detailing how the organisation will deploy AI, identifying an AI Officer and the reporting lines, and ensuring there is a Risk Register in place. 

It is essential that SMEs look for freely available guidance and tap into similarly resource constrained organisations within their sector that face the same technology and regulatory compliance challenges.

Fundamental AI compliance steps for SMEs:

  • Identify an AI Governance expert and overall AI risk controller: choose the best person or group within the organisation to develop and manage your framework and report to the Board.
  • Identify Regulations: Begin by understanding the specific regulations the organisation must comply with, such as GDPR, HIPAA, AML rules, or industry-specific requirements.
  • Undertake the Risk Assessment: Evaluate which areas of the business have the highest compliance risks (e.g., data privacy, financial reporting, customer due diligence).
  • Create and maintain the Risk Register: start to identify, monitor and manage AI risks. This should include issues such as model drift, where an AI system’s performance deviates from its intended behaviour over time because real-world data evolves while the underlying training data does not keep up, leading to incorrect predictions, bias or other risks (a simple drift-monitoring sketch follows this list). 
  • Funding: see whether there is any SME funding available for AI use and compliance
  • Industry Support: evaluate industry and sector specific support (e.g. industry associations)
  • Identify third party AI compliance solutions that: 
    • Meet your core requirements
    • Are regularly updated for changes in law and compliance standards
    • Have the highest levels of data security and comply with key certification standards
    • Can integrate with other databases (HR, sales, customer service, reporting) 
    • Have a good user interface (so can be used by users of varying technical ability) and provide staff training
    • Do not lock you into their environments (i.e., will not be difficult to migrate from or substitute)
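
By way of illustration only, the following sketch (Python, assuming NumPy is available) computes a population stability index (PSI) between the data a model was trained on and recent live data for a single feature; the feature and the 0.25 threshold are illustrative assumptions, not recommendations.

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """Compare two distributions of one feature; larger values suggest drift."""
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    # Small floor avoids division by zero and log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical feature: transaction amounts at training time vs. recent live data.
rng = np.random.default_rng(0)
training_sample = rng.lognormal(mean=3.0, sigma=0.5, size=5_000)
live_sample = rng.lognormal(mean=3.3, sigma=0.6, size=1_000)

psi = population_stability_index(training_sample, live_sample)
# Rule of thumb used here (an assumption, not a standard): above 0.25 warrants review.
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.25 else 'stable'}")
```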

Regulated organisations have experience of monitoring, managing and mitigating risks as part of their regulatory obligations to ensure they understand and manage risks appropriately. They will usually maintain a risk register that identifies known risks and potential risks and evaluate their likelihood and impact.

Risk registers will usually include:

  • The cause of the risk 
  • A description
  • Action taken (if any) to manage the risk (controls)
  • Pre-control and post-control:
    • Impact score
    • Likelihood score
  • Who is responsible for controlling the risk identified?
  • What KPIs or reporting processes will be put in place to measure the success of the Framework?

See below a simplified example of a risk register (with 1 being low and 6 high).

| AI Use Case | Identified Risks | Impact (1-6) | Likelihood (1-6) | Mitigation Strategies |
| --- | --- | --- | --- | --- |
| Customer Service Chatbot | Data Privacy Breach | 3 | 5 | Implement data encryption and access controls |
| Predictive Analytics Tool | Bias in Decision Making | 6 | 2 | Regular audits of algorithms for fairness |
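
For organisations that want to keep such a register in a structured, machine-readable form, the following minimal Python sketch shows one possible shape. The two entries mirror the example table above; the causes, owners, post-control scores and KPIs are invented for illustration, and the structure itself is an assumption rather than a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    use_case: str
    risk: str
    cause: str
    controls: str
    impact_pre: int        # 1 (low) to 6 (high), before controls
    likelihood_pre: int
    impact_post: int       # scores after controls are applied
    likelihood_post: int
    owner: str             # who is responsible for controlling the risk
    kpis: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        use_case="Customer Service Chatbot",
        risk="Data Privacy Breach",
        cause="Chatbot handles personal data in free-text queries",
        controls="Implement data encryption and access controls",
        impact_pre=3, likelihood_pre=5,
        impact_post=3, likelihood_post=2,
        owner="Data Protection Officer",
        kpis=["Access-control exceptions per quarter"],
    ),
    RiskEntry(
        use_case="Predictive Analytics Tool",
        risk="Bias in Decision Making",
        cause="Training data reflects historic decisions",
        controls="Regular audits of algorithms for fairness",
        impact_pre=6, likelihood_pre=2,
        impact_post=6, likelihood_post=1,
        owner="AI Governance Officer",
        kpis=["Audit findings closed within 30 days"],
    ),
]

# Simple reporting view: highest pre-control exposure first.
for entry in sorted(register, key=lambda e: e.impact_pre * e.likelihood_pre, reverse=True):
    print(f"{entry.use_case}: {entry.risk} (exposure {entry.impact_pre * entry.likelihood_pre})")
```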



  • OECD AI Principles: These are the first intergovernmental standards on AI promoting innovative, trustworthy AI that respects human rights and democratic values. Composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors.
  • AI Fairness 360: IBM’s open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle (a simple fairness-ratio calculation is sketched after this list).
  • Explainable AI: IBM, Google and others provide toolkits on explainable AI.
  • AI Governance Resources & Toolkits: the UK AI Standards Hub, Google, the European Commission, IBM, Microsoft, Stanford University and the Alan Turing Institute (as well as many other organisations and commercial advisors) provide detailed resources, toolkits and/or training on AI and Ethics, Responsible AI Development and AI Governance.
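
To give a flavour of the kind of metric such toolkits report, here is a minimal, library-free Python sketch of the disparate impact ratio (the rate of favourable outcomes for an unprivileged group divided by the rate for a privileged group). The data and the 0.8 “four-fifths” threshold are illustrative assumptions, not output from AI Fairness 360 itself.

```python
def disparate_impact(outcomes: list[tuple[str, int]], unprivileged: str, privileged: str) -> float:
    """Ratio of favourable-outcome rates (1 = favourable) between two groups."""
    def rate(group: str) -> float:
        group_outcomes = [o for g, o in outcomes if g == group]
        return sum(group_outcomes) / len(group_outcomes)
    return rate(unprivileged) / rate(privileged)

# Hypothetical hiring decisions: (group, 1 if shortlisted else 0).
decisions = [("A", 1), ("A", 0), ("A", 1), ("A", 0),
             ("B", 1), ("B", 1), ("B", 1), ("B", 0)]

ratio = disparate_impact(decisions, unprivileged="A", privileged="B")
# The "four-fifths" threshold below is a common heuristic, not a universal legal standard.
print(f"Disparate impact = {ratio:.2f} -> {'review for bias' if ratio < 0.8 else 'within heuristic'}")
```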

Future Trends

Future AI Development

Emerging Trends in AI

Risk Based Compliance 

There has been a shift towards risk-based approaches, where governance frameworks prioritise high-risk AI Systems and AI Tools that could significantly affect individuals or society. This allows for more targeted and efficient regulation while fostering innovation in lower-risk areas.

The AI arms race is accelerating, which means that only the very best AI-driven cybersecurity providers will be able to protect organisations, governments and people from increasingly sophisticated spyware, hacking, phishing and other forms of attack. Newer AI systems look to exploit both human weaknesses and systemic vulnerabilities in AI itself.

AlphaZero showed us that training AI to emulate human knowledge, experience and judgement is not the best way to get extraordinary results. New forms of AI Systems will likely use similar non-human centred means to evaluate information and find optimal routes to success with deeper, more unusual decision-trees.

New AI systems are adopting a hybrid approach to enhance accuracy and explainability. In this approach, machine learning and non-LLM AI handle complex data analysis, while LLM-based generative AI focuses on interpreting and communicating the findings in a clear, understandable manner. This method reduces the risk of “hallucinations” or inaccurate outputs, which is particularly crucial in fields that demand high levels of precision and trust, such as science, healthcare, law, and accounting. For example, in medical diagnosis, a hybrid AI system could analyse patient data (e.g., medical history, test results) using machine learning algorithms to identify potential health risks. Then, the generative AI component could explain these findings to the physician (or even a patient) in a concise, understandable report, highlighting key factors and potential treatment options. A simplified sketch of this pattern follows.
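
The sketch below illustrates the pattern in Python under stated assumptions: a conventional classifier (a scikit-learn logistic regression, chosen purely for illustration) produces the risk score, and a placeholder explain_findings function stands in for whatever LLM service would turn the numeric output into a narrative; no specific LLM API is implied.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [age, blood_pressure, cholesterol] -> 1 if elevated risk.
X_train = np.array([[45, 130, 200], [62, 160, 260], [35, 118, 180],
                    [70, 170, 290], [50, 140, 220], [29, 110, 170]])
y_train = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def explain_findings(patient: dict, risk: float) -> str:
    """Placeholder for the generative-AI step: in practice this would call an LLM
    with the structured findings and return a narrative report for the clinician."""
    return (f"Patient aged {patient['age']} has an estimated elevated-risk probability "
            f"of {risk:.0%}; key inputs were blood pressure {patient['bp']} and "
            f"cholesterol {patient['chol']}.")

patient = {"age": 58, "bp": 150, "chol": 240}
risk = model.predict_proba([[patient["age"], patient["bp"], patient["chol"]]])[0][1]
print(explain_findings(patient, risk))
```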

Another emerging trend is the emphasis on transparency and explainability in AI systems. Policymakers and organisations are increasingly recognising the importance of understanding how AI makes decisions, leading to the development of tools and methodologies for AI auditing and interpretation.

International collaboration is gaining traction as countries recognise the global nature of AI development and deployment. Efforts to establish common standards and principles for AI governance are underway, aiming to create a more cohesive regulatory landscape.

Ethical considerations will remain at the forefront of AI governance discussions. Addressing issues such as bias, fairness, and the potential existential risks of advanced AI systems will require ongoing dialogue and collaboration between stakeholders. The development and use of AI will ultimately challenge many of our deepest held prejudices and assumptions and so it is likely to lead to a philosophical and ethical revolution over time.

AI for Social Good involves using AI to address societal challenges like climate change, healthcare access, and poverty. This is gaining traction, with initiatives from governments and organisations and is a key area where new technology may provide innovative solutions that go beyond our current abilities and conceptual frameworks.

Another significant challenge will be balancing innovation with regulation. As AI becomes more pervasive, consumers and business leaders will expect policymakers to strike the right balance between fostering technological progress and ensuring adequate safeguards are in place.

The rapid pace of AI advancement continues to outstrip regulatory efforts, creating a persistent gap between technology and governance. This dynamic environment will require adaptive and flexible frameworks that can evolve alongside AI capabilities. The increasing complexity of AI systems, particularly foundation models and large language models, will pose challenges for transparency and accountability. Developing effective governance mechanisms for these sophisticated systems will be crucial for maintaining public trust and ensuring responsible AI development.

AI systems may play an increasingly important role in ensuring regulatory compliance. These tools could automatically monitor and analyse vast amounts of data to identify potential violations, reducing human error and improving efficiency. For example, AI could be used to detect patterns indicative of money laundering or fraud in financial transactions, helping organisations stay compliant with anti-money laundering regulations.

Blockchain tools may be integrated into AI governance frameworks to enhance transparency and traceability. By recording key decisions, data sources, and model updates on a blockchain, organisations could create an immutable audit trail of AI operations. This could help address concerns about AI transparency and accountability, particularly in sensitive applications like healthcare or criminal justice.
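
As a conceptual illustration only (a real deployment would use an actual ledger platform), the following Python sketch chains each governance record to the hash of the previous one, so that any later tampering with an earlier entry becomes detectable; the event names and fields are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def add_entry(chain: list[dict], event: dict) -> dict:
    """Append a governance event whose hash covers the previous entry's hash."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "previous_hash": previous_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; returns False if any entry has been altered."""
    previous_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["previous_hash"] != previous_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        previous_hash = entry["hash"]
    return True

audit_trail: list[dict] = []
add_entry(audit_trail, {"action": "model_update", "model": "credit_scoring_v2",
                        "approved_by": "AI Governance Committee"})
add_entry(audit_trail, {"action": "data_source_added", "source": "transactions_2025_q1"})
print("Audit trail intact:", verify(audit_trail))
```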

AI systems and tools will become increasingly linked to each of our hopes, dreams, fears, skills, weaknesses and interests.  These AI tools will become ‘friends’ and trusted advisors for life and for all major life decisions (university, potential partners or lovers, house, car, choice of doctor, investment decisions etc). Even meeting strangers casually in the future will involve automatic due diligence using available public sources. These tools will also enable us to feel like we can speak with the dead again. The risks will increase exponentially as our reliance on AI systems continues to grow.

Google appears to have been early but not wrong with its Glass project (which may well be revived in another form). More recently, Meta has partnered with Ray-Ban to offer smart glasses, and this is an area where Elon Musk is likely to have a major impact given his interests in neural augmentation and AI systems. It seems inevitable that humans will be augmented with AI technologies, and that augmentation will likely not come through handheld phones: a future where humans use AI to enhance every part of their day-to-day lives is coming.

The advent of quantum computing could significantly impact AI governance. Quantum computers may be able to process complex AI models much faster than classical computers and with greater inherent security from a cybercrime perspective, potentially accelerating AI development and deployment. This could necessitate new governance frameworks to keep pace with rapid advancements. Additionally, quantum computing could pose new security challenges, requiring updated encryption methods to protect AI systems and data.

News & Insights

AI Legal & Compliance Support Assistant

Powering the AI Future with Ramparts’ Funds Team

We are seeing an artificial intelligence (AI) revolution, a transformative period marked by unprecedented technological advancement and immense investment opportunities. The strategic decisions made today will define tomorrow’s market leaders. For fund managers looking to capitalise on the AI boom, choosing the right jurisdiction is key. Gibraltar offers not just a stable and predictable legal framework, but a platform of strategic service providers that are crucial for your success.