Council Of Europe's First Legally Binding International Treaty On AI To Be Signed In September

Crowell & Moring LLP

Amid the continued exponential rise and adoption of artificial intelligence (AI) systems, the Council of Europe set a unique precedent earlier this year by adopting the first-of-its-kind legally binding international AI framework. Aimed at ensuring respect for human rights, the rule of law, and democracy in the use of AI systems, the framework strikes an important balance: it addresses risks throughout the lifecycle of an AI system without hampering innovation.

The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that AI upholds people's rights. It is a response to the need for an international legal standard supported by states in different continents, which share the same values to harness the benefits of AI, while mitigating the risks. With this new treaty, we aim to ensure responsible use of AI that respects human rights, the rule of law, and democracy.

— Marija Pejčinović Burić, Secretary General, Council of Europe

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law was adopted on 17 May 2024 during the annual ministerial meeting of the Council of Europe's Committee of Ministers in Strasbourg. The Framework Convention provides a common legal framework open for adoption at the global level and requires signatories to adopt or maintain appropriate domestic legislative, administrative, or other measures to ensure that activities within the lifecycle of public sector AI systems are fully consistent with human rights, democracy, and the rule of law. Notably, domestic adoption must also extend to private sector activities carried out on behalf of a public authority, such as those under government contracts or tenders. The Framework Convention will be opened for signature by the negotiating Parties (European and non-European countries) on 5 September 2024 during the Conference of Ministers of Justice in Vilnius, Lithuania. Global reception of the treaty has been largely positive, with it being lauded as a commendable multilateral achievement and one that could bridge a growing global digital divide.

Multi-stakeholder Negotiation Process

Negotiations on the Framework Convention took place over a span of two years and were led by an intergovernmental body, the Committee on Artificial Intelligence (CAI). The CAI has also released a comprehensive Explanatory Report to better explain the thought process behind different provisions of the Framework Convention.

The process saw participation from the Ministers of Foreign Affairs of 46 Council of Europe member states (which include all 27 EU members), along with 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the U.S., and Uruguay). A further 68 "observers" joined from the private sector, civil society, and academia, in part to ensure that the final outcome would strike an appropriate balance between safety and innovation. Notable international organizations involved in the effort included the Organisation for Security and Co-operation in Europe (OSCE); the Organisation for Economic Co-operation and Development (OECD); the United Nations Educational, Scientific and Cultural Organisation (UNESCO); and, from the EU, the European Union Agency for Fundamental Rights (FRA) and the European Data Protection Supervisor (EDPS).

Principles for Adoption

The Framework Convention defines an "artificial intelligence system" as a "machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments". It does not confine its application to any particular form of AI.

It sets forth seven key AI principles for implementation by signatories, but in a manner appropriate to their domestic legal systems, in addition to the general obligations of protecting human rights and the integrity of democratic processes and the rule of law. No doubt a reflection of the multi-stakeholder process, the principles' core themes align with many proposed by other global entities regarding the safe and responsible use of AI.

The Framework Convention allows for national security exemptions: Parties need not apply the treaty to activities protecting national security, provided those activities comply with international law and respect democratic institutions and processes.

Human Dignity and Individual Autonomy

According to the CAI's Explanatory Report, AI systems' activities should not lead to the dehumanization of individuals, undermine their agency or reduce them to mere data points, or anthropomorphize the AI system in a way that interferes with human dignity. Individual autonomy should be upheld at all times, i.e., individuals should have control over the use and impact of AI technologies in their lives without diminishing their agency or autonomy.

Transparency and Oversight

Parties should adopt or maintain measures to ensure adequate transparency and oversight requirements throughout the lifecycle of the AI system. The means for ensuring this transparency may include, as appropriate, recording key considerations such as data provenance, training methodologies, and validity of data sources; documentation and visibility on training, testing, and validation data used; and risk mitigation efforts. Such activities will help keep decision-making processes and general operation of AI systems understandable and accessible.

A critical component of ensuring transparency is enabling users to recognize AI-generated content with ease. Some techniques include content labelling and watermarking. Additionally, given the complexity of AI systems, the CAI's Explanatory Report encourages Parties to incorporate reliable and effective oversight mechanisms, including human oversight, within the lifecycle of AI systems.

Accountability and Responsibility

Parties should institutionalize mechanisms to oblige individuals, organisations, or entities responsible for AI system activities to be answerable for any adverse impacts on human rights, democracy, or the rule of law. These mechanisms may include judicial and administrative measures; civil, criminal, and other liability regimes; and, in the public sector, administrative and other procedures so that decisions can be contested.

Overall, this principle emphasizes the need for clear lines of responsibility, which provide the ability to trace actions and decisions back to specific entities and individuals in a way that accounts for the diversity of relevant actors and their roles throughout the lifecycle of an AI system.

Equality and Non-discrimination

Parties should adopt relevant regulatory, governance, technical, or other solutions to ensure that activities within the lifecycle of an AI system respect equality, including gender equality, and the prohibition of all discrimination, as provided under applicable international and domestic law.

The CAI's Explanatory Report expands on this, noting that Parties should incorporate mechanisms that address the different ways discrimination and bias can intentionally or inadvertently be incorporated into AI systems throughout their lifecycle.

Currently documented areas for bias development in AI systems include:

  • Potential bias of the algorithm's developers;
  • Potential bias built into the model upon which the systems are built;
  • Potential biases inherent in the training data sets used;
  • Biases introduced when such systems are implemented in real world settings; and
  • Automation or confirmation bias.

Privacy and Personal Data Protection

Without endorsing any particular regulatory measure in a given jurisdiction, this principle requires Parties to protect privacy rights and personal data of individuals in relation to AI systems through effective guarantees and safeguards. Per the CAI's Explanatory Report, at its core, privacy rights of individuals must include at least the following (partially overlapping) elements:

  • Protected interest in limiting access to an individual's life experiences and engagements;
  • Protected interest in secrecy of certain personal matters;
  • Degree of control over personal information and data; and
  • Protection of personhood (individuality or identity, dignity, and individual autonomy).

Reliability

Parties should establish pathways for reliability assurance and trust in the outputs of AI systems by addressing key aspects of their functioning. According to the CAI's Explanatory Report, these include robustness, safety, security, accuracy, and performance, as well as functional prerequisites such as data quality and accuracy, data integrity, data security, and cybersecurity. Because this principle is linked to the earlier principles on transparency and accountability, proactive transparency and documentation protocols can help ensure appropriate, end-to-end accountability.

Safe Innovation

Parties should seek to promote and foster innovation in line with human rights, democracy, and the rule of law. This provision recognizes that failure to create an environment in which responsible innovation can flourish risks stifling such innovation. Suggested pathways to stimulate responsible innovation from the CAI's Explanatory Report include creation of regulatory sandboxes and special regulatory guidance or no-action letters to clarify how regulators will approach the design, development, or use of AI systems in novel contexts.

Remedies & Safeguards

Parties must ensure that accessible and effective remedies are available for violations of human rights resulting from AI system activities. Exceptions, limitations, or derogations from such obligations are permitted, however, in the interest of public order, security, and other important public interests.

The CAI's Explanatory Report prescribes that, where AI systems substantially inform or make decisions impacting human rights, safeguards should ensure appropriate human oversight, including ex ante or ex post human review of the decision. As relevant, oversight measures should subject AI systems to built-in operational constraints that cannot be overridden by the system itself and are responsive to the human operator. Additionally, in situations of direct human interaction with AI systems, such persons should be duly notified that they are interacting with an AI system and not a human.

Our Take

The Council of Europe's Framework Convention is an important step forward in the global dialogue on AI governance, as well as a notable example of consensus by diverse stakeholders on a rapidly evolving technological issue. It may well serve as a model for other regions and international bodies as the AI policy landscape matures. The Framework Convention text provides a balanced and feasible regulatory approach, thanks to input from the entire spectrum of relevant stakeholders, from governments and experts to industry and civil society.

Nevertheless, whether the Framework Convention can ultimately achieve its goals depends entirely on the signing Parties' commitment to upholding and implementing its principles domestically. That caveat does not diminish the importance of the effort to involve and align both Council of Europe member states and a number of non-member states. Such harmonization of AI governance approaches will become increasingly critical if we are to optimally leverage this global technology in the face of its rapid advancement. If nothing else, the Framework Convention reinforces that there are some areas of AI regulation where harmonization may not be so far-fetched.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
