AI Legal & Compliance
Knowledge Hub


Information on the main legal issues arising from the development of AI (including copyright, data protection, security, and AI compliance issues)


Law & Guidance

International Standards & Guidance

Alan Turing Institute – Artificial intelligence

AI Standards Hub

Berkman Klein Center – Vectors of AI Governance

Bletchley Declaration 

Future of Humanity Institute – Institutionalizing ethics in AI through broader impact requirements

IEEE – Artificial Intelligence Standards Committee

IJIMAI – Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI

OECD – AI Principles overview

Partnership on AI – Guidelines for AI and Shared Prosperity

Stanford University HAI – AI Index Report 2023

UNESCO – Ethics of Artificial Intelligence



A wide range of initiatives is currently being developed around the world, including in China, the EU, the US and the UK. The US and China have taken the lead with federal-level laws regulating (and, in the case of the US, encouraging) the development of AI.

The EU is currently voting on a new AI law, and the UK is monitoring that process whilst considering its own initiatives. Recently, the UK hosted an AI Safety Summit at which many delegates signed up to the Bletchley Declaration.

Developments are also taking place at UN, OECD and G7 levels. This diversity of jurisdictional approaches – in addition to sector-specific guidance (e.g. financial services, medical devices, healthcare) – makes it challenging for organisations to understand how to navigate the compliance requirements and various AI standards when seeking to develop or implement AI solutions.

The new G7 Code (Hiroshima Process) contains 11 agreed principles and requirements. It is a non-exhaustive list of actions that builds on the existing OECD AI Principles and is intended to help us enjoy the benefits and address the risks and challenges of AI technologies. 

Organizations are expected to apply these actions across the AI lifecycle to cover, when and as applicable, the design, development, deployment and use of advanced AI systems:

  1. Take appropriate measures throughout the development of advanced AI systems… to identify, evaluate, and mitigate risks across the AI lifecycle. 
  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market. 
  3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate… use, to support…transparency and increase[d] accountability. 
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems 
  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach… 
  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle. 
  7. Develop and deploy reliable content authentication and provenance mechanisms… such as watermarking or other techniques to enable users to identify AI-generated content 
  8. Prioritise research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures. 
  9. Prioritise the development of advanced AI systems to address the world’s greatest challenges, notably…the climate crisis, global health and education 
  10. Advance the development of and, where appropriate, adoption of international technical standards 
  11. Implement appropriate data input measures and protections for personal data and intellectual property 
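Principle 7 above calls for content authentication and provenance mechanisms so that users can identify AI-generated content. As a purely illustrative sketch (not any official standard — the function names, manifest fields and the "ExampleImageModel" generator are hypothetical), a minimal provenance record might bind a cryptographic hash of the content to metadata about how it was generated:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a simple provenance manifest for AI-generated content.

    Illustrative only: real deployments would use an agreed standard with
    cryptographically signed manifests, not a bare hash.
    """
    return {
        "generator": generator,  # tool that produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # binds record to content
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches the manifest's hash."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

image = b"...generated image bytes..."
record = make_provenance_record(image, generator="ExampleImageModel")
print(json.dumps(record, indent=2))
print(verify_provenance(image, record))          # True for unmodified content
print(verify_provenance(image + b"x", record))   # False once content is altered
```

The design point is that provenance metadata is only useful if it is verifiably bound to the content itself; detached labels can be stripped or forged, which is why the principle also mentions watermarking.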

Understandably, the codes, laws, guidance and standards focus on the principles and the compliance and risk frameworks that developers and organisations need to consider when using AI technologies. At present, most AI risk and compliance frameworks are voluntary. International agreement is needed on core advanced-AI requirements that are technically astute (showing an understanding of how AI works and is evolving), outcomes-focused, and game-theoretically structured for how real agents are likely to use AI.

All of the international standards and some of the national and regional laws share the following core principles:

  • Human-centred: The importance of human agency in the development and use of AI so that humans remain in control of AI systems. AI systems should be designed to support human values and goals.
  • Robustness, transparency, and responsibility: All emphasise the importance of developing AI systems that are robust, transparent, and responsible.
    • Robustness – AI systems should be able to operate reliably in the real world.
    • Transparency – AI systems should be understandable to humans.
    • Responsibility – there should be clear accountability for the development and use of AI systems.
  • Fairness, non-discrimination, and respect for human rights: All emphasise the importance of developing AI systems that are fair, non-discriminatory, and that respect human rights. This means that AI systems should not be used to discriminate against people or to violate their human rights.
  • High-risk requirements: The OECD AI Principles, the Bletchley Declaration and the EU AI Act contain provisions on the use of AI systems in high-risk applications.
    High-risk applications (or ‘frontier models’) are applications where the use of AI could have a significant impact on people’s lives, such as in healthcare, transportation, and law enforcement. These instruments require developers of AI systems to take additional steps to mitigate the risks associated with high-risk applications.
  • International cooperation: The US AI laws, the G7 Hiroshima Statement and the Bletchley Declaration contain provisions on the development of international cooperation on AI and recognise that the development and use of AI is a global issue and that international cooperation is necessary to ensure that AI is used for good purposes and outcomes.


China has implemented a range of AI-specific laws and must be considered the leader in the regulation of AI:

  • Algorithm Regulations
  • Provisional Provisions on Management of Generative Artificial Intelligence Services (GAI Measures) 
  • Provisions on Management of Deep Synthesis in Internet Information Service (Deep Synthesis Provisions)

China has implemented an algorithm regulation pursuant to:

  • The Personal Information Protection Law of the PRC
  • The Measures on the Administration of Internet Information Services


The law regulates internet information services algorithmic activities and is intended to:

  • Carry forward the Core Socialist Values
  • Preserve national security and the societal public interest
  • Protect the lawful rights and interests of citizens, legal persons, and other organizations
  • Promote the healthy and orderly development of internet information services.


Providers are required to preserve network records and to cooperate with the cybersecurity and informatization, telecommunications, public security, and market regulation authorities, and with other sectors where security assessment and supervision are required.

The law prohibits algorithmic generation of fake news on online news services and also requires service providers to take special care to address the needs of older users and to prevent fraud.

The regulations also prohibit providers from using algorithms to unreasonably restrict other providers or engage in anti-competitive behaviour.

Algorithm Recommendation Regulation


The GAI Measures regulate the development and use of generative AI services in China. The Measures aim to promote the healthy development of generative AI services, protect the safety of personal data and public interests, and prevent the use of generative AI services for unlawful purposes.


The GAI Measures apply to all organisations and individuals that provide generative AI services in China.

Generative AI services are defined as services that use AI to generate a wide range of content including text, images, audio, or video content.

Key Provisions for Providers:

  • Registration: Register with the relevant authorities.
  • Content Review: Review the content generated using their services to ensure it is lawful.
  • User Controls: Provide users with controls to manage the content that is generated for them.
  • Data Security: Protect the security of personal data and other sensitive data.
  • Prohibited Activities: The GAI Measures prohibit the use of generative AI services for various activities, including for creating content that is obscene, violent, or discriminatory.


The Deep Synthesis Provisions relate to the GAI Measures and provide that generative AI content must be properly labelled to avoid the risks of deepfake technologies.

They apply to generative AI providers and users of deep synthesis technology. 

The provisions define deep synthesis technology as that which employs deep learning, virtual reality, and other synthetic algorithms to produce various content (text, images, audio, video, virtual scenes, and other network information).

They impose various risk assessment, risk management labelling and disclosure obligations on the providers and users of deep synthesis technology, which uses mixed datasets and algorithms to produce synthetic content.



The EU has no AI-specific laws in force yet. However, Article 22 of the GDPR (automated decision-making) is relevant to AI data processing and provides that:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The European Union’s Artificial Intelligence Act (AI Act) is a proposed regulation at the final trilogue negotiation stage; it is not expected to be adopted until 2024:

iapp – Contentious areas in the EU AI Act trilogues

The EU AI Act includes a set of rules on the development, deployment, and use of AI systems. The AI Act takes a risk-based approach and defines four categories of risk:

Unacceptable risk: AI systems that pose an unacceptable risk will be banned. This includes AI systems that are used for social scoring or that can manipulate people’s behaviour.

High risk: AI systems that pose a high risk will be subject to strict requirements; for example, they must:

  • undergo a conformity assessment before being placed on the market
  • be registered in a public database
  • be subject to human oversight, with a human able to intervene and override the AI system’s decisions


Limited risk: AI systems that pose a limited risk will be subject to less stringent requirements, such as:

  • providing information to users about the AI system’s capabilities and limitations
  • taking measures to mitigate the risks posed by the AI system


Minimal risk: AI systems that pose a minimal risk will not be subject to any specific requirements.

The AI Act also includes a number of other provisions, such as a requirement for AI systems to be:

  • transparent and accountable
  • non-discriminatory
  • environmentally sustainable
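The Act's risk-based approach lends itself to a simple tier-lookup. The sketch below is hypothetical: the use-case lists and obligations paraphrase the four tiers described above for illustration, and are not the Act's legal definitions or its actual Annex lists.

```python
# Illustrative sketch of the AI Act's four risk tiers; the example use cases
# and obligations are simplified paraphrases, not the Act's legal text.
RISK_TIERS = {
    "unacceptable": {
        "examples": {"social scoring", "behavioural manipulation"},
        "obligations": ["prohibited"],
    },
    "high": {
        "examples": {"medical diagnosis", "law enforcement", "transport safety"},
        "obligations": [
            "conformity assessment before market entry",
            "registration in a public database",
            "human oversight with power to override decisions",
        ],
    },
    "limited": {
        "examples": {"chatbot", "content recommendation"},
        "obligations": [
            "inform users of capabilities and limitations",
            "mitigate identified risks",
        ],
    },
    "minimal": {"examples": set(), "obligations": []},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is minimal risk."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier
    return "minimal"

print(classify("social scoring"))   # unacceptable
print(classify("spam filtering"))   # minimal
```

Note the default: any system not captured by a higher tier falls into the minimal-risk category, which under the proposal carries no specific requirements.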


The UK has no AI-specific laws yet.

The UK also has automated decision-making restrictions under Article 22 of the GDPR (see EU).

New general AI legislation is also expected in the next year.

The UK government is also developing sector-specific guidance for AI applications in areas such as healthcare, intellectual property and big data, transport, and financial services. This guidance will provide more detailed and tailored advice for organisations developing and using AI in these sectors.



In the US, there are two major pieces of federal legislation:

  • The National AI Initiative Act of 2020 (NAIIA) is a US federal law that became law in 2021. It aims to promote the development and responsible use of AI and provides the architecture for US excellence in AI. 
  • The AI In Government Act promotes the responsible development and use of AI in the US federal government.


In addition, a number of existing laws can be applied to AI.

There are also several proposed laws that would specifically regulate AI issues, including:

  • The development and use of AI systems
  • The transparency and accountability of AI systems
  • The safety and security of AI systems
  • The ethical implications of AI

The NAIIA establishes a National Artificial Intelligence Research and Development Strategic Plan, developed by the National Science and Technology Council (NSTC). A National AI Advisory Committee (NAIAC) has also been set up to advise the President on AI policy and strategy.

The NAIAC’s goal is to advise the President on the intersection of AI and innovation, competition, societal issues, the economy, law, international relations, and other areas that can and will be impacted by AI in the near and long term.

The NAIIA funds several AI programs and workstreams, including:

  • National AI Research Institutes, whose focus is on:
    • developing AI data sets and testbeds;
    • researching the social, economic, health, scientific, and national security implications of AI;
    • broadening participation in AI research and training through outreach.
  • AI Innovation Hub program
  • AI Workforce Development program
  • National AI Ethics Board
  • AI Risk Assessment Framework
  • AI Transparency and Accountability Framework


The AI in Government Act establishes a number of new requirements and programs, including:

  • A requirement for all federal agencies to develop AI governance plans
  • A new AI R&D program at the National Science Foundation (NSF)
  • A new AI workforce development program at the Office of Personnel Management (OPM)
  • A new AI ethics board
  • A new AI risk assessment framework
  • A new AI transparency and accountability framework

The AI in Government Act is designed to help the US maintain its leadership in AI while also ensuring that AI is used responsibly and ethically. 

Some of the key goals of the AI in Government Act are:

  • To accelerate the development and use of safe and beneficial AI technologies
  • To ensure that the US government maintains its leadership in AI research and development
  • To promote the responsible and ethical use of AI
  • To prepare the US workforce for the AI economy

AI – Law, Compliance and Ethics Playlist


Leading AI Knowledge Hubs


FAQ: Frequently Asked Questions about AI & Law


Who owns AI artwork?

Check the terms of your licence. However, if you pay for an AI service, you will usually own the work. For example, Midjourney gives ownership of AI-generated art to paid subscribers, but free users do not have ownership rights in their work.

Copyright arises automatically in your creative work, assuming it is sufficiently original and creative.

However, at the moment the US Copyright Office will not let you register AI-generated work for additional copyright protection unless the use of AI is minimal. We expect this to change soon, but in the meantime your ability to sue for infringement in the USA is limited.

In the UK, no registration is required. Creative people using such tools (i.e. inputting chosen words or seeds) should be able to benefit from copyright protection for their AI-created works if they are sufficiently original (i.e. a minimal amount of creative work is involved). This is a technology-neutral test, unlike the current US test, which requires that any use of AI be minimal.

Under EU law (according to Directive 93/98 and Directive 2006/116), only human creations are protected, which can also include those for which the person employs a technical aid, such as a camera. “In the case of a photo, this means that the photographer utilises available formative freedom and thus gives it originality”.

Eva-Maria Painer v Standard VerlagsGmbH and Others.

In fact, EU law expressly provides that photographs do not need to meet any additional originality threshold. It remains to be seen whether a specific requirement will be introduced for AI-created works but in the meantime, under EU law, works made by humans using AI technologies can meet the threshold for copyright protection insofar as they are original and have some human authorship.

Under US law, entirely automated AI works (art, photography, music) cannot be registered for copyright protection. In the UK, the developer of the AI program can be the ‘author’ in such cases for copyright purposes, and no registration is required.

In the UK, the creator of an AI tool may be able to benefit from copyright protection for 50 years from the year of creation for any autonomously created work.

In the EU (according to Directive 93/98 and Directive 2006/116), only human creations are protected.

Yes, whether you have registered copyright or unregistered copyright, if you have the licence rights to the work then you can use it commercially. Check the licence the AI tool grants you and which rights it reserves.

This is a very complex area of law. It depends on the amount of use you make of other work, whether the other work or style is protected by law and whether the work you make is considered derivative.

First, check the licence of the work you are using. Is it in the public domain, or under a Creative Commons licence with restrictions? Often, Creative Commons-licensed work will require that you pass on the same licence for work you create using it. Sometimes commercial use is permitted and other times it is not.

We suggest you read our detailed article on Copyright & AI Art to start with, and seek advice if needed.

You can also ask Bard or Co-Pilot 🙂

Yes. If you use AI tools to make derivative works using copyright material (i.e. work that is not in the public domain), you can be sued for infringement by the copyright holder.

If you are a developer and your tools are considered to encourage, support or enable copyright infringement, then you could face proceedings for complicity in, or facilitation of, breach of copyright law.


What legal issues should I consider when developing and licensing AI tools?

There is a range of issues related to data protection, data security, trademarks, copyright, and ethics to consider when developing or licensing AI. Data protection and security are the key areas of focus for the evolution of natural-language personalised AI assistants.

One of the greatest risks arises from the leakage of personally identifiable information, or of data that, when combined with other data, can be used to identify a person. That data could then be seen by other users of the AI tools, or it could inform the responses of the AI tools.

The developers of Bard, ChatGPT and other tools are still figuring out how to build and deploy personalised digital assistants using natural language AI in a safe way. Once they do this, it will make AI much more useful and engaging for people. See: The Singularity Approaches for a brief consideration of this issue.

Additional risks relate to how selective data is used or how data is used selectively in a way that discriminates unfairly against people. There have already been concerns raised about people losing their benefits and other civil rights due to automated decision-making involving AI tools.

UK officials use AI to decide on issues from benefits to marriage licences


Is it ethical to use AI to create artwork?

Yes, humans have used technology since we first became humans and AI tools are just a new technology created by humans.

There are a range of issues related to data protection, data security, trademarks, copyright, law and ethics to consider when developing or licensing AI.

Yes, there can be – particularly if part of the work can be seen or heard in your work. The law of ‘fair use’ (or ‘fair dealing’ in the UK) can be very confusing, as there is no black-and-white test.

If you use an excerpt or part of someone else’s work then whether that is fair depends on the context, your intention and whether it causes harm to the creator or owner. Ultimately, the test should be a sensible personal one – ask yourself questions like:

Would it be OK if someone did the same with my work?

Am I willing to take it down if the artist that I admire is upset?

We should not get so caught up in what we do that we would be willing to ignore how it affects artists that we admire.

See the article Copyright, AI and Generative Art for more info.

  • Access: we need to mitigate the risks of economic discrimination.
    • Given the extraordinary competitive advantage these tools give, how will we ensure that all people are able to afford access to at least a basic level of these powerful tools?
  • Bias: The quality of output is a function of the data that goes in and what is selected for (the weighting).
    • How do we protect against the use of selective data or of data selectively, in a way that discriminates unfairly against people?  
  • Coding:
    • Who decides on the prime laws for AI and the universal framework core code-base required for ethical purposes?
    • What are the appropriate ways to mitigate the risks of autonomous algorithmic iteration?
  • Control:
    • Are we sure how AI tools work and how to maintain control over them?
    • Could they become autonomous? 
    • How will we know when algorithmic tools may be close to creating their own identities or self-coded purposes?
  • Competition (Reality check):
    • How do we ensure that AI is developed in such a way that ethical AI has a competitive advantage and can be used to protect against misuse of AI by unethical humans?
  • Impact: AI will have a major impact on low-skilled and higher-skilled jobs.
    • Where will the newly unemployed find work and ensure they have sufficient income? 
  • Privacy: AI tools could have access to significant amounts of personal data depending on how they are structured and interact with other technologies you use. This makes knowledge, consent and security the key issues to be considered.
  • Safety: Given their extraordinary power, AI tools need to be better at protecting humans than humans. This is relevant across all business sectors, in manufacturing and in armaments.


News & Insights

Hockney inspired Gibraltar

Copyright, AI and Generative Art

What are the key elements of creative work? Where is the boundary between human creativity and automated work? What is the difference between AI work and AI art?