ARTICLE
9 August 2024

The EU AI Act: Implications For UK Businesses And The Future Of Artificial Intelligence ("AI") – Part 2

Rosenblatt

Contributor

Rosenblatt was established in the City of London in 1989 and is a trading division of RBG Legal Services Limited, part of RBG Holdings plc (formerly Rosenblatt Group plc). In 2018 we listed on the London Stock Exchange’s AIM market. Central to every relationship that we build is a firm commitment to our clients’ success.


In Part 1, we explained the scope, key principles, and penalties under the new EU AI Act (the "Act"), the most ambitious regulatory framework so far devised for artificial intelligence. In Part 2, we consider the obligations imposed on developers and users of AI systems and reflect on key legal and practical obstacles in the Act's way.

Q&A

Q: What are the obligations for high-risk AI systems?

A: They differ between developers (Providers) and users (Deployers) of high-risk systems. See below:

Providers1 must:

  • Ensure systems comply with technical requirements, including risk management; quality management; data governance; technical documentation; and automatic record-keeping3.
  • Design systems to ensure their operation: (i) is transparent to users; (ii) allows effective human oversight; and (iii) achieves an appropriate level of accuracy, robustness, and cybersecurity6.
  • Register systems in an EU database before placing them on the market or into service in the EU9, and register systems for critical infrastructure at national level10.
  • Prior to placing the system on the market or putting it into service in the EU13: subject the system to a 'conformity assessment' procedure; draw up an EU declaration of conformity; affix the 'CE' marking to the system, its packaging, and/or its documentation; and affix the Provider's name, trade mark, and contact address likewise.
  • Ensure the system complies with accessibility requirements for persons with disabilities15.

Deployers2 must:

  • Use the system in accordance with the Provider's instructions. Monitor its operation and inform the Provider, Distributor/Importer and the relevant authority if they have reason to think it poses a risk at a national level, or if they have identified a serious incident4.
  • Retain the system's automatically generated logs for a period appropriate to its intended purpose – at least six months, subject to other EU/domestic obligations (e.g. data protection)5. (A sketch of a simple retention job appears after this list.)
  • Arrange oversight by a person with the necessary competence, training and authority7.
  • Ensure input data is relevant and representative, insofar as the Deployer controls this8.
  • Inform individuals that they plan to use a high-risk AI system to make decisions11, and inform affected workers if they intend to use a high-risk AI system in the workplace12.
  • Prior to making use of the system: carry out a data protection impact assessment, and carry out a fundamental rights impact assessment if required – this applies if the Deployer is a public body or is providing public services14.
  • If a decision results in legal or otherwise significant effects, and if requested, provide a clear and meaningful explanation of the system's role in the decision-making (see further below)16.
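
By way of illustration only, the log-retention obligation could be automated with a housekeeping job along the following lines. This is a minimal Python sketch, not a compliance recipe: the log location and one-file-per-day layout are assumptions, 183 days merely approximates the six-month floor, and other obligations (e.g. data protection) may require longer retention.

# Illustrative Deployer-side log-retention job (a sketch, not a compliance
# recipe). Assumes automatically generated logs are stored one file per day
# in LOG_DIR; 183 days approximates the Act's six-month minimum.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/ai-system")  # hypothetical location
RETENTION_DAYS = 183                  # at least six months; other law may require more

def purge_expired_logs(log_dir: Path = LOG_DIR, retention_days: int = RETENTION_DAYS) -> int:
    """Delete log files older than the retention period; return the count removed."""
    cutoff = time.time() - retention_days * 24 * 60 * 60
    removed = 0
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"Removed {purge_expired_logs()} expired log files")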

Q: What rights do consumers and others affected by AI have under the Act?

A: Natural or legal persons may17:

  • Submit complaints regarding infringement of the Act to the relevant 'market surveillance authority'.
  • Request from a Deployer an explanation of any decision based on an output from a high-risk AI system which produces legal or similarly significant effects for that person, adversely impacting their health, safety or fundamental rights. The explanation must be 'clear and meaningful' and must cover the role played by the AI system in the decision-making procedure and the main elements of the decision taken.
  • Rely on whistleblower protections under the EU Whistleblowing Directive in reporting any infringements of the Act.

Q: What are the obligations for limited-risk AI systems and GPAI Models?

A: The obligations again differ between developers (Providers) and users (Deployers). They are as follows:

Providers18 must:

  • Ensure people know they are interacting with an AI system (unless this is obvious to a "reasonably well-informed, observant and circumspect person"; subject to a law enforcement exemption).
  • Ensure the system's outputs are labelled in machine-readable format as artificially generated or manipulated. This obligation is targeted at deepfakes, and is subject to exemptions for law enforcement and for AI performing an assistive function for standard editing. (A sketch of one possible labelling approach appears after this list.)

Deployers19 must:

  • Inform people if they are being subjected to an emotion recognition or biometric categorisation system (subject to a law enforcement exemption).
  • For an AI system that generates or manipulates "images, audio or video content constituting a deep fake": disclose that the content has been artificially generated or manipulated. This obligation is diluted where the content is part of "an evidently artistic, creative, satirical, fictional or analogous work or programme".
  • For an AI system that generates or manipulates "text published to inform the public on matters of public interest": make the same disclosure. This is subject to exceptions for law enforcement, or where the content has undergone human review or editorial control and a human or company has editorial responsibility for publication.
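
The Act does not prescribe a particular labelling technology, and industry practice is still forming (C2PA 'Content Credentials' is one emerging standard for embedded provenance metadata). Purely as an illustration of the idea, the sketch below writes a machine-readable provenance record alongside each generated file; the schema and field names are our own assumptions, not anything mandated by the Act.

# Illustrative only: attach a machine-readable "artificially generated" label
# to an output as a JSON sidecar file. The schema is a made-up example, not a
# format prescribed by the Act; standards such as C2PA Content Credentials
# embed comparable provenance data inside the file itself.
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(output_path: str, model_name: str) -> Path:
    """Write <output>.provenance.json declaring the content AI-generated."""
    label = {
        "artificially_generated": True,  # the core disclosure
        "generator": model_name,         # hypothetical field name
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(output_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar

# Example: write_provenance_sidecar("portrait.png", "example-image-model-v1")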

Q: What obligations apply to GPAI models generally?

A: Providers must comply with the obligations set out below; they may rely in part on compliance with codes of practice.

  • Keep up-to-date technical documentation and make it available to providers of AI systems who intend to integrate the GPAI model into their own AI systems.20
  • "Put in place a policy to comply with [EU] law on copyright" and publish a summary of the content used to train the model.21
  • Cooperate with relevant authorities22.
  • If based in a third country, appoint an 'authorised representative' within the EU, to maintain relevant technical documentation and cooperate with the AI Office and relevant authorities23.

Q: What are the obligations for systemic-risk GPAI models?

A: Providers of 'systemic-risk' GPAI models must comply with the obligations set out below24. They may rely on compliance with codes of practice to do so:

  • Evaluate the model based on "standardised protocols and tools reflecting the state of the art", to identify and mitigate systemic risks.
  • Assess and mitigate systemic risks at EU level that may arise from the development, placing on the market, or use of systemic-risk GPAI models.
  • Report serious incidents and possible corrective measures to the AI Office (and national authorities).
  • Ensure an adequate level of cybersecurity protection based on the risk of the model and its physical infrastructure.

Q: How does the Act encourage innovation?

A: National authorities are required to establish 'regulatory sandboxes': controlled environments for the development, testing and validation of innovative AI systems under real-world conditions25. SMEs and start-ups will be given priority access to these sandboxes26, and there will be derogations involving simplified compliance for 'microenterprises'27, meaning businesses that employ fewer than 10 persons and have an annual turnover and/or annual balance sheet total not exceeding €2m28.
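
To make those thresholds concrete, here is a minimal sketch of the microenterprise test (the function and parameter names are our own; the figures come from Recommendation 2003/361/EC):

# Minimal sketch of the 'microenterprise' test under Recommendation
# 2003/361/EC: fewer than 10 employees, AND an annual turnover and/or an
# annual balance sheet total not exceeding EUR 2 million.
def is_microenterprise(headcount: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    financial_ok = turnover_eur <= 2_000_000 or balance_sheet_eur <= 2_000_000
    return headcount < 10 and financial_ok

# A 7-person start-up with EUR 1.5m turnover and a EUR 3m balance sheet still
# qualifies, because only one of the two financial ceilings needs to be met.
print(is_microenterprise(7, 1_500_000, 3_000_000))  # True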

COMMENTARY

Challenge 1: Harm done by AI systems

As is clear from our survey of the Act, the EU has developed an extensive framework for the design, development, and commercialisation of AI systems. However, what happens if the operation of an AI system is not merely 'non-compliant', but actually causes serious harm? Who is liable? Some (simple) cases will involve harm obviously caused by bad design (the Provider) or reckless use (the Deployer). Other cases will involve unforeseen and/or unforeseeable factors, making it difficult to apportion liability – but it will nonetheless be important for public policy that someone be held liable. This question will only grow in importance, given the myriad harms AI may cause as it is rolled out across society, everywhere from self-driving cars to smart homes to fruit-picking machines.

AI liability is far from straightforward. Many factors complicate the causal chain, including: the complexity of AI systems, device connectivity and cybersecurity risks, the opacity of AI algorithms (the 'black box' problem), and frequent modifications to the systems, both through software updates and autonomous learning. Any potential Claimant will thus face substantial challenges in proving causation. How can a Claimant prove causation if they cannot understand how the AI system reached the decision it did? Indeed, how can a Claimant defeat the argument that the causal chain was interrupted by a failure to make safety-relevant updates? Or by changes due to the system's autonomous learning?

The Act is silent on all of this. However, the European Commission ("EC") and the European Parliament ("EP") have addressed these concerns.29 The EP has recommended strict liability for high-risk AI, fault-based liability with a reversed or modified burden of proof for other AI, and mandatory insurance for high-risk AI. The EC's draft AI Liability Directive ("ECAILD") includes a modified burden of proof, but proposes to defer consideration of strict liability and mandatory insurance until a review five years after the Directive becomes law – and the Directive itself is unlikely to become law before 2025. The lacuna therefore remains, leaving considerable uncertainty for businesses. The EU should ensure the future Directive interacts appropriately with the Act and does not duplicate the regulatory burdens on AI developers and users.

Practical take-aways:

  • At present, it is likely to be difficult for Claimants to prove an AI system has caused harm.
  • To address this, the ECAILD may introduce a modified burden of proof in claims for harm by AI systems. The EU may also introduce strict liability and mandatory insurance for high-risk AI systems, but it appears this is unlikely to happen before 2030.
  • Until the ECAILD becomes law, the position on AI liability remains uncertain for businesses.

Challenge 2: AI and copyright law

The Act requires that Providers of AI systems "put in place a policy to comply with [EU] law on copyright." This sweeps a complex and controversial issue under the rug. Generative AI raises intractable copyright law questions, which are at present unresolved and subject to a growing wave of litigation. There are three core issues, none of which has been fully addressed by the EU.

Firstly, training: generative AI is overwhelmingly trained on copyrighted data, and OpenAI has argued robustly that it would be impossible to train such AI without using copyrighted work30. The Act indicates that training on copyrighted data requires the rightholder's authorisation unless an exception applies31. Within the EU, there is a wide exception for 'text and data mining'32. However, for all commercial use, rightholders can opt out, including by technical means such as metadata or click-wrap terms and conditions. This exception may therefore wear thin. Tools have already been launched to enable creators to exclude their copyrighted work from training sets33. In the UK and the USA, the developer must instead demonstrate 'fair dealing' or 'fair use' respectively. These are discretionary and fact-sensitive concepts. Unsurprisingly, the UK and USA are awash with litigation on this issue34.
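
How a 'technical means' opt-out is expressed and honoured is still settling. One emerging convention is the draft W3C TDM Reservation Protocol, which signals a reservation via a 'tdm-reservation' HTTP header. The sketch below – an assumption about practice, not a mechanism specified in the Act or the Directive – shows a crawler checking for that signal before ingesting a page for training.

# Illustrative sketch: check for a text-and-data-mining opt-out signal before
# using a page as training data. The 'tdm-reservation' header comes from the
# draft W3C TDM Reservation Protocol - an emerging convention, not a mechanism
# specified in the AI Act or the DSM Directive themselves.
import urllib.request

def tdm_reserved(url: str) -> bool:
    """Return True if the server signals a TDM rights reservation."""
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.headers.get("tdm-reservation") == "1"

# Example: skip ingestion when tdm_reserved("https://example.com/article") is True.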

Secondly, protection: is it possible for AI outputs themselves to be copyrighted works? If so, who owns the copyright, and what is the threshold for protection? The trend in the EU (and UK35) may be towards treating an AI output as a mere 'mechanical reproduction' falling outside the scope of copyright protection. However, it is noteworthy that China has taken a different approach on this issue36.

Thirdly, infringement: what happens when AI outputs infringe the rights of copyright owners? This is likely to occur when AI outputs are either identical or very similar to existing copyrighted works. In other words, what happens when AI writes a song indistinguishable from 'Hey Jude', or a book indistinguishable from Harry Potter? Who is liable? Is it the developer, the user, or both? Generative AI developers have this liability in mind, and may seek to 'infringement-proof' their AI systems by filtering their outputs. For example, if you prompt ChatGPT to create an 'exact copy', it will say it is not allowed to do so. However, whether ChatGPT practises what it preaches on less obvious prompts is another question; it is not yet obvious that such 'infringement-proofing' is technically possible.
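
To illustrate why output filtering is harder than it sounds, here is a deliberately naive filter – our own sketch, not any developer's actual method. It flags outputs that share a long run of words with a known work, but paraphrases and close imitations, which copyright law may still catch, would pass straight through.

# Deliberately naive 'infringement filter' sketch (not any vendor's real
# method): flag an output if it shares a long run of words with a known work.
# Paraphrases and close-but-not-verbatim imitations are missed entirely,
# which is why robust infringement-proofing remains an open technical problem.
def shares_long_ngram(output: str, known_work: str, n: int = 8) -> bool:
    words = known_work.lower().split()
    known_ngrams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    out_words = output.lower().split()
    return any(
        tuple(out_words[i:i + n]) in known_ngrams
        for i in range(len(out_words) - n + 1)
    )

# Example: an exact lyric quotation is caught; a reworded imitation is not.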

Practical take-aways:

  • If training a generative AI system on copyrighted material in the UK, the developer must be prepared to demonstrate 'fair dealing' under English law.
  • The protectability of AI outputs under copyright law remains an open question.
  • Copyright infringement by the output of generative AI is a substantial risk. One way to sidestep the issue would be to completely prevent infringing outputs, but it is not yet clear to what extent that is technically possible.

Challenge 3: Finding the right experts

The technology behind AI systems is notoriously complicated. EU regulators will be required to grapple with the technicalities, for example in evaluating the appropriate levels of 'robustness and accuracy' for AI systems, and in choosing which GPAI models to designate as 'systemic-risk'. Effectively enforcing the Act will thus require input from experts who fully understand how AI works. The EU recognises this, and intends to meet the challenge by recruiting top experts to its 'scientific panel'.

However, regulators typically do not offer competitive salaries when compared to the very lucrative private sector – where equivalent salaries may be three, five or even ten times as high37. As such, regulators worldwide have been struggling to recruit. While many principled technology experts may sign up, there is a risk that regulators fail to keep up, and industry starts to lose faith. If the EU cannot find a way to meet this challenge, it will certainly struggle to become a global hub for AI regulation.

CONCLUDING THOUGHTS

The EU clearly intends for the Act to have global influence, as indicated by its boldly extraterritorial scope. However, unlike in the case of the GDPR and other pioneering laws, the EU may not have a 'first-mover' advantage here. China has already legislated on generative AI38; the US President signed an Executive Order on AI in October 202339, following a blueprint for an 'AI Bill of Rights'40 and a bipartisan framework for a US AI Act41; and the UK seems likely to produce its own 'AI Bill' soon, as announced in the King's Speech this year. In short, the EU faces a battle to lead the way on AI regulation. In circumstances where the epicentre of the technology is undoubtedly elsewhere (the USA), where markets outside the EU are growing in importance, and where AI technology is developing so rapidly, the EU's regulation, whilst extensive, may fail to secure the global influence to which it aspires.

Footnotes

1. Article 16 of the EU AI Act.

2. Article 26 of the EU AI Act.

3. Article 16(a), (c)-(e), Articles 8-12 and Articles 17-19 of the EU AI Act.

4. Article 26(1) and (5) of the EU AI Act.

5. Article 26(6) of the EU AI Act.

6. Article 16(a) and Articles 13-15 of the EU AI Act.

7. Article 26(2) of the EU AI Act.

8. Article 26(4) of the EU AI Act.

9. Articles 16(i), 49, and 71 of the EU AI Act.

10. Article 49(5) and Annex 3 point 2 of the EU AI Act.

11. Article 26(11) of the EU AI Act.

12. Article 26(7) of the EU AI Act.

13. Articles 16(f)-(h), (b), and Articles 43, 47 and 48 of the EU AI Act. The Provider must also be ready to demonstrate conformity to a national competent authority and take immediate corrective action if it becomes aware of non-compliance – see Article 16(j)-(k) and Article 20 of the Act.

14. Article 26(9) and Article 27 of the EU AI Act.

15. Article 16(l) of the EU AI Act – under existing EU law, namely Directives (EU) 2016/2102 and (EU) 2019/882.

16. Article 86 of the EU AI Act.

17. Articles 85-87 of the EU AI Act respectively.

18. Chapter IV/Article 50(1)-(2) of the EU AI Act.

19. Article 50(3)-(4) of the EU AI Act.

20. Article 53(1)(a)-(b) of the EU AI Act.

21. Article 53(1)(c)-(d) of the EU AI Act.

22. Article 53(3) of the EU AI Act.

23. Article 54 of the EU AI Act.

24. Article 55 of the EU AI Act.

25. Articles 57-61 and Article 3(55) of the EU AI Act.

26. Article 62 of the EU AI Act.

27. Article 63 of the EU AI Act.

28. Article 2 of Recommendation 2003/361/EC.

29. For the detail on EU developments, refer to the European Commission Report on Artificial Intelligence Liability of 19 February 2020, the European Parliament resolution of 20 October 2020, and the European Commission's Proposal for an AI Directive dated 28 September 2022.

30. OpenAI submission to HoL communications and digital select committee: "Because copyright today covers virtually every sort of human expression – including blogposts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today's leading AI models without using copyrighted materials".

31. Recital 105 of the EU AI Act.

32. Under Article 4 of the Directive on Copyright in the Digital Single Market, (EU) 2019/790.

33. See e.g. https://haveibeentrained.com/.

34. See e.g. Doe v GitHub, Inc. (training on GitHub data); Tremblay, Silverman and Chabon v OpenAI (training on fiction & non-fiction books); New York Times v Microsoft & OpenAI (training on newspaper articles); Andersen v Stability AI (training on artworks); RIAA & ors v Suno AI and Udio (training on sound recordings) – and many more.

35. Under s1(1)(a) Copyright, Designs and Patents Act 1988, copyright protection is available for 'original' literary, dramatic, musical or artistic works. This can include 'computer-generated' works with no human author under s9(3), in which case the copyright-holder will be "the person by whom the arrangements necessary for the creation of the work are undertaken". However, the question of 'originality' would remain material, and the application of s9(3) to AI has not yet been addressed by an English court.

36. Li v Liu, decision of the Beijing Internet Court dated 27 November 2023.

37. Compare €47,320 for a tech specialist at the European AI Office, as against a median compensation of $560,000 at OpenAI – https://www.wired.com/story/regulators-need-ai-expertise-cant-afford-it/.

38. https://www.china-briefing.com/news/how-to-interpret-chinas-first-effort-to-regulate-generative-ai-measures/

39. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

40. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

41. https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
