14 August 2024

New EU AI Act – Corporate Governance Considerations

Travers Smith LLP

Earlier this month, the EU's Artificial Intelligence Act ("AI Act")1, the world's first comprehensive regulation on artificial intelligence, came into force. The AI Act aims to ensure that AI systems are safe, transparent, traceable and environmentally friendly. In this article, we explore the corporate governance and ESG implications of this new piece of legislation, including the need for key governance and compliance strategies.

For further information on the wider background to the AI Act, including the scope of the act and its practical implications, please see our earlier briefing here.

As part of its digital strategy, the EU has sought to regulate artificial intelligence ("AI") to help ensure better conditions for the development and use of the technology, while safeguarding against potential risks.

  1. What is the AI Act and who does it apply to?
  2. What impact does the AI Act have on corporate governance?
  3. What about the Environment?
  4. What are the timings?
  5. What practical steps can be taken from a corporate governance and compliance perspective?
  6. Conclusion

1. What is the AI Act and who does it apply to?

By way of recap on what and who is in scope of the AI Act, the obligations apply to:

(i) providers (such as developers);

(ii) deployers (users);

(iii) importers;

(iv) distributors; and

(v) product manufacturers, in each case in relation to "AI systems", as well as providers of "general-purpose AI models".

The AI Act is not sector-specific and applies extraterritorially: it covers any provider placing an AI system or general-purpose AI model on the EU market, or otherwise putting one into service in the EU, regardless of whether that entity is established or located within the EU or in a third country.

The AI Act assigns applications of AI to four risk categories: "unacceptable", "high risk", "limited risk" and "low or minimal risk". Depending on the risk category, different rules will apply. AI systems that present an unacceptable risk are prohibited – this includes, amongst others, AI systems used for social scoring and AI systems that use deceptive or exploitative techniques to materially distort a person's behaviour in a manner that can cause harm.

Applications deemed high risk are subject to the most onerous compliance requirements, including (i) adequate risk assessment and mitigation systems, (ii) high-quality datasets feeding the system, to minimise risks and discriminatory outcomes, (iii) logging of activity to ensure traceability of results, and (iv) detailed documentation providing all the information on the system and its purpose that the relevant authorities need to assess its compliance.

Examples of the different risk categories include:

  • Unacceptable Risk – AI systems used for social scoring and AI systems that use deceptive or exploitative techniques to materially distort a person's behaviour in a manner that can cause harm
  • High Risk – AI used in critical infrastructure (e.g. transport), where failure could put the life and health of citizens at risk; in educational or vocational training, where it may determine access to education and the professional course of someone's life (e.g. the scoring of exams); and in safety components of products (e.g. AI applications in robot-assisted surgery)
  • Limited Risk – AI systems such as chatbots, where humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back
  • Low or Minimal Risk - applications such as AI-enabled video games or spam filters (N.B. the vast majority of AI systems currently used in the EU fall into this category)
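
By way of illustration only, the sketch below shows how an organisation might record this four-tier classification in an internal AI-system register. It is a minimal sketch in Python: the tier names mirror the Act's categories as summarised above, but the register structure, the example systems and the is_permitted check are hypothetical illustrations, not requirements of the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # The AI Act's four risk categories, as summarised above.
    UNACCEPTABLE = "unacceptable"      # prohibited outright
    HIGH = "high"                      # most onerous compliance requirements
    LIMITED = "limited"                # transparency duties (e.g. chatbots)
    LOW_OR_MINIMAL = "low_or_minimal"  # no new obligations under the Act


@dataclass
class AISystemRecord:
    # One entry in a hypothetical internal AI-system register.
    name: str
    purpose: str
    tier: RiskTier

    @property
    def is_permitted(self) -> bool:
        # Systems in the "unacceptable" tier are prohibited; all others are
        # permitted, subject to the compliance requirements of their tier.
        return self.tier is not RiskTier.UNACCEPTABLE


register = [
    AISystemRecord("exam-scoring-model", "scoring of vocational exams", RiskTier.HIGH),
    AISystemRecord("support-chatbot", "customer service chat", RiskTier.LIMITED),
    AISystemRecord("spam-filter", "filters inbound email", RiskTier.LOW_OR_MINIMAL),
]

for record in register:
    status = "permitted" if record.is_permitted else "prohibited"
    print(f"{record.name}: {record.tier.value} risk, {status}")
```

Maintaining an inventory of this kind is a natural first step towards the classification exercise described above, since the rules that apply turn on the tier assigned to each system.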

2. What impact does the AI Act have on corporate governance?

The AI Act seeks to ensure that AI technology is developed and implemented in a way that respects and protects people's fundamental rights. As such, the AI Act emphasises transparency, accountability and sound risk management systems.

In practice, this requires organisations to implement corporate governance systems that:

  • Enhance transparency: Help ensure that AI operations are transparent, including as to data sources and decision-making processes. The AI Act outlines that "transparency" in this context means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making users aware that they are communicating or interacting with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and informing affected persons of their rights.
  • Implement risk assessments and safeguarding: Organisations must classify their systems based on a product safety and risk-based approach. For example, AI systems considered to present an unacceptable level of risk involving a clear threat to people's fundamental rights will be prohibited. This includes AI systems or applications that manipulate human behaviour to bypass users' free will, such as toys using voice assistance that encourage dangerous behaviour in minors, or systems that allow "social scoring" by governments or companies.
  • Include human oversight: High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use, including with appropriate human-machine interface tools. The idea behind this is that human oversight will help prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse.
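
To make the logging and oversight themes concrete, here is a minimal, purely illustrative Python sketch of a decision log for a high-risk system. The record fields, the log_decision function and the reviewer sign-off are hypothetical: the Act prescribes outcomes (traceability, transparency, human oversight), not this particular implementation.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Hypothetical traceability log for a high-risk AI system: each automated
# output is recorded together with its inputs and the human who reviewed it,
# reflecting the Act's themes of logging, transparency and human oversight.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")


def log_decision(system_id: str, inputs: dict, output: str,
                 reviewer: Optional[str]) -> None:
    # Append one auditable record; reviewer is None until a human signs off.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,
    }
    logger.info(json.dumps(record))


# Example: a hypothetical exam-scoring decision, signed off by a named reviewer.
log_decision(
    system_id="exam-scoring-model",
    inputs={"candidate_id": "c-0142", "paper": "2024-06"},
    output="grade: B",
    reviewer="j.smith",
)
```

The design point is simply that each automated result is attributable, reconstructable and linked to a named human overseer, which is the substance of the governance requirements outlined above.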

3. What about the Environment?

The AI Act recognises the ability of AI technologies to support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education, energy efficiency, environmental monitoring, the conservation of biodiversity and climate change mitigation. At the same time, depending on the circumstances regarding its application, the AI Act states that AI may generate risks and cause harm to public interests and fundamental rights, including the environment.

As the AI Act advanced through the European Parliament, new provisions on this issue were included. The version adopted by Parliament introduced the principle of developing and using AI systems in a sustainable and environmentally friendly manner. The Parliament's version also established that fundamental rights impact assessments for high-risk AI systems should include a measurement of the "reasonably foreseeable" adverse impacts on the environment of putting the system into use. However, after much negotiation, these environmental provisions were excluded from the final version, leading some to argue that the AI Act's approach to environmental issues is a missed opportunity to make the AI industry more environmentally sustainable2.

Article 40 of the AI Act now requires standardisation bodies to create standardised reporting and documentation procedures to ensure the efficient use of resources by certain AI systems. In theory, these procedures would help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models. Whilst such reporting standards are a crucial first step towards basic transparency about some ecological impacts of AI, including energy usage, they do not cover other potential environmental harms arising along the AI production process, such as impacts on water, ecosystems and minerals. Moreover, relying on standardisation bodies to produce such environmental standards may prove time-consuming and inefficient in practice.

4. What are the timings?

Most obligations under the AI Act apply from 2 August 2026 (including the core corporate governance obligations), but different obligations come into play at different stages.

  • 2 February 2025: The prohibitions on "unacceptable risk" AI systems apply, as does the AI literacy obligation.
  • 2 August 2025: Rules for new GPAI models and systems.
  • 2 August 2026: (Most of) the high-risk framework and the transparency obligations apply, noting that:
    • Pre-existing high-risk systems intended for public authority use will have until 2 August 2030 to comply; and
    • High-risk AI systems under Annex I of the AI Act (products subject to product safety legislation) that are put on the market on or after 2 August 2026 will have until 2 August 2027 to comply.
  • 2 August 2027: High-risk obligations in respect of Annex I systems (products subject to product safety legislation).

Specifically from a corporate governance perspective, providers of high-risk AI systems will be required, from 2 August 2026, to implement the relevant rules on areas including human oversight, data quality and governance, transparency, accuracy and cybersecurity. Users of high-risk AI systems will have fewer, but still considerable, obligations from the same date.
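
As a purely illustrative aid to planning, the short Python sketch below encodes the timeline above as a simple lookup, so that a compliance team could check which milestones have passed as at a given date. The MILESTONES table merely restates the dates listed above; the function and its output format are hypothetical.

```python
from datetime import date

# Key application dates from the timeline above (illustrative only; the
# obligations that apply depend on the specific system and role in question).
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI; AI literacy obligation"),
    (date(2025, 8, 2), "Rules for new general-purpose AI (GPAI) models"),
    (date(2026, 8, 2), "Most of the high-risk framework and transparency obligations"),
    (date(2027, 8, 2), "High-risk obligations for Annex I systems"),
    (date(2030, 8, 2), "Deadline for pre-existing high-risk systems used by public authorities"),
]


def obligations_in_force(as_of: date) -> list[str]:
    # Return the milestones whose application date has already passed.
    return [label for start, label in MILESTONES if start <= as_of]


for label in obligations_in_force(date(2026, 9, 1)):
    print("-", label)
```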

5. What practical steps can be taken from a corporate governance and compliance perspective?

In practice, in addition to reviewing and assessing AI products against the requirements of the AI Act, from a corporate governance perspective, businesses would be well placed to:

  1. Establish AI Governance Committees: Consider setting up dedicated committees charged with ensuring AI systems comply with legal and ethical standards and overseeing AI risk assessments.
  2. Develop specific "AI Ethics" Policies: Either as separate, standalone policies or as part of an organisation's broader Code of Ethics, outlining the development and deployment of AI systems and reflecting the AI Act's obligations.
  3. Promote Training: To help ensure that employees and others are aware of the implications of the AI Act and AI's risks on the organisation.

In contrast to the EU's classification system, the UK currently takes a "lighter touch" approach to the regulation of AI ethics, for example through guidance for businesses that introduces the concept of "AI ethics" and provides a high-level overview of the ethical principles needed for the responsible delivery of an AI project3.

6. Conclusion

The AI Act has potentially significant implications from a corporate governance perspective, requiring in-scope businesses to reassess and strengthen their risk assessment practices, governance structures and broader ethical policies. By proactively engaging with and addressing these key risk areas from an early stage, businesses will be able to position themselves as leaders in "ethical" AI deployment that is trustworthy, safe and supportive of socially and environmentally beneficial outcomes.

"AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy. The [AI Act] puts people first and ensures that everyone's rights are preserved."

Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age

Footnotes

1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

2. https://eu.boell.org/en/2024/04/08/eu-ai-act-missed-opportunity

3. https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
