Alan Turing Institute – Artificial intelligence
Berkman Klein Center – Vectors of AI Governance
Future of Humanity Institute – Institutionalizing ethics in AI through broader impact requirements
IEEE – Artificial Intelligence Standards Committee
IJIMAI – Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI
OECD – AI Principles overview
Partnership on AI – Guidelines for AI and Shared Prosperity
Stanford University HAI – AI Index Report 2023
UNESCO – Ethics of Artificial Intelligence
A wide range of initiatives is currently being developed around the world, notably in China, the EU, the US and the UK. The US and China have taken the lead with federal-level laws regulating (and, in the case of the US, encouraging) the development of AI.
The EU is currently voting on a new AI law, and the UK is following that process closely whilst considering its own initiatives. Recently, the UK hosted an AI Safety Summit at which many delegates signed up to the Bletchley Declaration.
Developments are also taking place at UN, OECD and G7 levels. This diversity of jurisdictional approaches – in addition to sector-specific guidance (e.g. financial services, medical devices, healthcare) – makes it challenging for organisations to navigate the compliance requirements and various AI standards when developing or implementing AI solutions.
The new G7 Code (Hiroshima Process) contains 11 agreed principles and requirements. It is a non-exhaustive list of actions that builds on the existing OECD AI Principles and is intended to help organisations capture the benefits of AI technologies while addressing their risks and challenges.
Organisations are expected to apply these actions to all stages of the AI lifecycle to cover, when and as applicable, the design, development, deployment and use of advanced AI systems:
Understandably, the codes, laws, guidance and standards focus on the principles, compliance obligations and risk frameworks that developers and organisations need to consider when using AI technologies. In many cases, the current state of play is that most AI risk and compliance frameworks are voluntary. International agreement is needed on core advanced AI requirements that are technically astute (reflecting an understanding of how AI works and is evolving), outcomes-focused, and game-theoretically structured around how real agents are likely to use AI.
All of the international standards and some of the national and regional laws share the following core principles:
China has implemented a range of AI-specific laws and is arguably the current leader in the regulation of AI.
China has implemented algorithm regulation pursuant to the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services:
The law regulates algorithmic activities in internet information services and is intended to:
Providers are required to preserve network records and to cooperate with the cybersecurity and informatization, telecommunications, public security and market regulation authorities, as well as with regulators in other sectors where security assessment and supervision are required.
The law prohibits algorithmic generation of fake news on online news services and also requires service providers to take special care to address the needs of older users and to prevent fraud.
The regulations also prohibit providers from using algorithms to unreasonably restrict other providers or engage in anti-competitive behaviour.
Purpose:
The GAI Measures regulate the development and use of generative AI services in China. The Measures aim to promote the healthy development of generative AI services, protect the safety of personal data and public interests, and prevent the use of generative AI services for unlawful purposes.
Scope:
The GAI Measures apply to all organisations and individuals that provide generative AI services in China.
Generative AI services are defined as services that use AI to generate a wide range of content, including text, images, audio and video.
Key Provisions for Providers:
Purpose:
The Deep Synthesis Provisions complement the GAI Measures and provide that generative AI content must be properly labelled to mitigate the risks of deepfake technologies.
They apply to generative AI providers and users of deep synthesis technology.
The provisions define deep synthesis technology as that which employs deep learning, virtual reality, and other synthetic algorithms to produce various content (text, images, audio, video, virtual scenes, and other network information).
They impose various risk assessment, risk management, labelling and disclosure obligations on the providers and users of deep synthesis technology, which uses mixed datasets and algorithms to produce synthetic content.
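The Provisions do not prescribe a single technical labelling format, but by way of illustration, a provider might attach a machine-readable provenance label to each piece of generated content. The following minimal Python sketch is hypothetical: the field names and the label_synthetic_content() helper are our own, not drawn from the regulations.

```python
# Illustrative sketch only (not legal advice): one way a provider might
# attach a machine-readable provenance label to generated content, broadly
# in the spirit of the Deep Synthesis Provisions' labelling obligations.
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, provider: str, model_name: str) -> dict:
    """Wrap generated content with a disclosure label and an integrity hash."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # integrity check
        "synthetic": True,                                      # explicit AI-generated flag
        "generator": {"provider": provider, "model": model_name},
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by AI.",
    }

label = label_synthetic_content(b"<generated image bytes>", "example-provider", "example-model")
print(json.dumps(label, indent=2))
```

Such metadata would complement, rather than replace, any conspicuous label shown on the content itself.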
Stanford University – China’s ‘New Generation Artificial Intelligence Development Plan’
Allen & Overy – China brings into force Regulations on the Administration of Deep Synthesis of Internet Technology
Carnegie Endowment for International Peace – What China’s Algorithm Registry Reveals about AI Governance
Carnegie Endowment for International Peace – China’s AI Regulations and How They Get Made
Deacons – Generative AI Regulations Officially Released in China
Latham & Watkins LLP – China’s New AI Regulations
National Law Review – Provisions on Administration of Algorithmic Recommendation
Oxford Martin School – ‘China’s Deepfake Regulations: navigating security, misinformation and innovation’ (Video)
No specific EU AI laws yet. However, Article 22 of the GDPR (automated decision-making) is relevant to AI data processing and provides that:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
The European Union’s Artificial Intelligence Act (AI Act) is a proposed regulation currently under review. It is at the final trilogue negotiation stage and is not expected to be adopted until 2024.
The EU AI Act includes a set of rules on the development, deployment and use of AI systems. The AI Act takes a risk-based approach and defines four categories of risk, illustrated in the sketch after this list:
Unacceptable risk: AI systems that pose an unacceptable risk will be banned. This includes AI systems that are used for social scoring or that can manipulate people’s behaviour.
High risk: AI systems that pose a high risk will be subject to strict requirements, such as:
Limited risk: AI systems that pose a limited risk will be subject to less stringent requirements, such as:
Minimal risk: AI systems that pose a minimal risk will not be subject to any specific requirements.
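To make the tiered structure concrete, the following minimal Python sketch (illustration only, not legal advice) shows how an organisation might maintain an internal register mapping its AI use cases to the Act's four tiers. The example use-case mapping and the triage() helper are hypothetical, not taken from the Act.

```python
# Illustrative triage of AI use cases against the AI Act's four risk tiers.
# The use-case register below is hypothetical and for demonstration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict requirements"
    LIMITED = "transparency requirements"
    MINIMAL = "no specific requirements"

# Hypothetical internal register mapping use cases to tiers for review.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH pending legal review if unknown."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("social scoring"))    # RiskTier.UNACCEPTABLE
print(triage("new unknown tool"))  # defaults to RiskTier.HIGH for caution
```

Defaulting unknown use cases to the high-risk tier pending legal review is a conservative design choice on our part, not a requirement of the Act.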
The AI Act also includes a number of other provisions, such as a requirement for AI systems to be:
EFPIA (Pharma industry) – Position Paper on Artificial Intelligence
EU Commission – A European approach to artificial intelligence
EU Commission Strategy – Artificial Intelligence for Europe
CFA Institute – The EU Artificial Intelligence Act and Financial Services
Carnegie Endowment for International Peace – A Letter to the EU’s Future AI Office
EY – The EU AI Act: What it means for your business
Taylor Wessing – Artificial Intelligence Act
MoFo – EU AI Act
Simmons & Simmons – EU draft AI regulation: a practical guide
Time – E.U. Takes a Step Closer to Passing the World’s Most Comprehensive AI Regulation
No specific AI laws yet.
The UK also has automated decision-making restrictions under Article 22 of the UK GDPR (see the EU section).
New general AI legislation is also expected in the next year.
The UK government is also developing sector-specific guidance for AI applications in areas such as healthcare, intellectual property and big data, transport, and financial services. This guidance will provide more detailed and tailored advice for organisations developing and using AI in these sectors.
Ashurst – AI and IP: Copyright – the wider picture and practical considerations for businesses
Herbert Smith Freehills – The IP in AI: Does copyright protect AI-generated works?
Kluwer – The UK government’s steps towards a code of practice on copyright and AI
Ramparts – Copyright, AI & Generative Art
Squire Patton Boggs – Copyright protection for AI works: UK vs US
Simmons & Simmons – Generative AI – the copyright issues.
There are two major pieces of legislation:
In addition, a number of existing laws can be applied to AI.
There are also several proposed laws that will specifically regulate AI issues including:
The act establishes a National Artificial Intelligence Research and Development Strategic Plan, developed by the National Science and Technology Council (NSTC). A National AI Advisory Committee (NAIAC) has also been set up to advise the President on AI policy and strategy.
The NAIAC’s goal is to advise the President on the intersection of AI and innovation, competition, societal issues, the economy, law, international relations, and other areas that can and will be impacted by AI in the near and long term.
The NAIIA funds several AI programs and workstreams, including:
The act establishes a number of new requirements and programs, including:
The AI in Government Act is designed to help the US maintain its leadership in AI while also ensuring that AI is used responsibly and ethically.
Some of the key goals of the AI in Government Act include:
National Artificial Intelligence Initiative
See also the White House’s ‘Blueprint for an AI Bill of Rights’
Brookings – the EU and US diverge on AI regulation
Center for Strategic and International Studies – AI Regulation is Coming – What is the Likely Outcome?
EFF – How We Think About Copyright and AI Art
EPIC – The State of State AI Laws: 2023
Ramparts – Copyright, AI & Generative Art
The New York Times – In U.S., Regulating A.I. Is in Its ‘Early Days’
Check the terms of your licence; however, if you pay for an AI service, you will usually own the work. For example, Midjourney grants ownership of AI-generated art to paid subscribers, but free users do not have ownership rights in their work.
Copyright arises automatically in your creative work, assuming it is sufficiently original and creative.
However, at the moment the US Copyright Office will not let you register AI-generated work for additional copyright protection unless the use of AI is minimal. We expect this to change soon, but in the meantime your ability to sue for infringement in the USA is limited.
In the UK, no registration is required. Creative people using such tools (i.e. inputting chosen words or seeds) should be able to benefit from copyright protection for their AI-created works if the work is sufficiently original (i.e. a minimal amount of creative work is involved). This is a technology-neutral test, unlike the US test, which currently requires that the use of AI be minimal.
Under EU law (according to Directive 93/98 and Directive 2006/116) only human creations are protected, which can also include those for which the person employs a technical aid, such as a camera. “In the case of a photo, this means that the photographer utilises available formative freedom and thus gives it originality.”
(Eva-Maria Painer v Standard VerlagsGmbH and Others)
In fact, EU law expressly provides that photographs do not need to meet any additional originality threshold. It remains to be seen whether a specific requirement will be introduced for AI-created works but in the meantime, under EU law, works made by humans using AI technologies can meet the threshold for copyright protection insofar as they are original and have some human authorship.
Under US law, entirely automated AI works (art, photography, music) cannot be registered for copyright protection. In the UK, the developer of the AI program can be the ‘author’ of such works for copyright purposes, and no registration is required.
In the UK, the creator of an AI tool may be able to benefit from copyright protection for 50 years from the year of creation for any autonomously created work.
In the EU, (according to Directive 93/98 and Directive 2006/116) only human creations are protected.
Yes, whether you have registered copyright or unregistered copyright, if you have the licence rights to the work then you can use it commercially. Check the licence the AI tool grants you and which rights it reserves.
This is a very complex area of law. It depends on the amount of use you make of other work, whether the other work or style is protected by law and whether the work you make is considered derivative.
First, check the licence of the work you are using. Is it public domain, or Creative Commons with restrictions? Often, Creative Commons-licensed work will require that you pass on the same licence for any work you create using it. Sometimes commercial use is permitted and other times it is not.
We suggest you read our detailed article on Copyright & AI Art to start with, and seek advice if needed.
You can also ask Bard or Copilot 🙂
Yes. If you use AI tools to make derivative works from copyright material (i.e. work that is not in the public domain), you can be sued for infringement by the copyright holder.
If you are a developer and your tools are considered to encourage, support or enable copyright breach, then you could face infringement proceedings for complicity in, or facilitation of, breaches of copyright law.
There are a range of issues relating to data protection, data security, trademarks, copyright and ethics to consider when developing or licensing AI. Data protection and security are the key areas of focus in evolving personalised natural language AI assistants.
One of the greatest risks arises from leakage of personally identifiable information, and of data that can be used, in combination with other data, to identify a person. That data could then be seen by other users of the AI tools, or it could inform the tools’ responses.
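By way of illustration only, one common mitigation is to scrub obvious identifiers from prompts before they leave the organisation. The minimal Python sketch below assumes a simple regex-based approach; the patterns and the redact() helper are hypothetical, and a production system would use a dedicated PII-detection library with human review.

```python
# Minimal sketch of scrubbing personally identifiable information from a
# prompt before it is sent to an external AI service. Patterns are
# deliberately simplistic and illustrative only.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
}

def redact(text: str) -> str:
    """Replace each match of every PII pattern with a placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

prompt = "Email jane.doe@example.com or call 07123456789 about my claim."
print(redact(prompt))
# -> "Email [REDACTED_EMAIL] or call [REDACTED_UK_PHONE] about my claim."
```

Pattern-based scrubbing is only a first line of defence; data that identifies a person only in combination with other data is much harder to catch automatically.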
The developers of Bard, ChatGPT and other tools are still figuring out how to build and deploy personalised digital assistants using natural language AI in a safe way. Once they do this, it will make AI much more useful and engaging for people. See: The Singularity Approaches for a brief consideration of this issue.
Additional risks relate to data being used selectively in ways that discriminate unfairly against people. Concerns have already been raised about people losing their benefits and other civil rights due to automated decision-making involving AI tools.
UK officials use AI to decide on issues from benefits to marriage licences
Yes. Humans have used technology since we first became human, and AI tools are simply a new technology created by humans.
There are a range of issues related to data protection, data security, trademarks, copyright, law and ethics to consider when developing or licensing AI.
Yes, there can be – particularly if part of the work can be seen or heard in your work. The law of ‘fair use’ (or ‘fair dealing’ in the UK) can be very confusing, as there is no black-and-white test.
If you use an excerpt or part of someone else’s work then whether that is fair depends on the context, your intention and whether it causes harm to the creator or owner. Ultimately, the test should be a sensible personal one – ask yourself questions like:
Would it be OK if someone did the same with my work?
Am I willing to take it down if the artist that I admire is upset?
We should not get so caught up in what we do that we would be willing to ignore how it affects artists that we admire.
See the article Copyright, AI and Generative Art for more info.
The Limits of Language and the Search for Understanding in Artificial Intelligence Natural Language Processing – paper exploring the limits of language and the search for understanding in AI LLMs
A short personal blog considering the emergence, risks and benefits of AI.
‘AI’ video companion piece to the article on Copyright, AI & Generative Art
What are the key elements of creative work? Where is the boundary between human creativity and automated work? What is the difference between AI work and AI art?