Following extensive consultations, the European Commission’s High-Level Expert Group on AI released ethics guidelines on the use of artificial intelligence. Three broad principles emerged from those guidelines, suggesting that trustworthy AI should be:

  1. Lawful — respecting all applicable laws and regulations
  2. Ethical — respecting ethical principles and values
  3. Robust — both from a technical perspective and taking into account its social environment

To meet those criteria, the guidelines establish seven specific requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability, including auditability. These interrelated requirements provide guiding principles on both ethical and operational decision-making.

How operations can keep up with AI

These principles are intertwined. Sound governance is the “glue” that binds the guidelines’ operational requirements. While human agency and oversight of AI decisions are essential, they are possible only through proper governance, which in turn depends on technical robustness at the outset.

A. Empowering decision-making

According to the guidelines, AI systems should be used to empower and assist human beings, and should not operate in a vacuum. The guidelines recommend using “human-in-the-loop, human-on-the-loop, and human-in-command approaches,” respectively referred to as HITL, HOTL and HIC in the guidelines.

These refer to different degrees of oversight. HITL, according to the guidelines, “refers to the capability for human intervention in every decision cycle of the system.” In many AI applications, HITL is neither possible nor necessarily desirable. HOTL means the human is involved in the design cycle and monitors the system once it is in use. HIC means the human oversees the AI system as a whole, including its broader impact on economic, societal, legal and ethical issues. This oversight includes decisions on whether to use AI at all and when to override a decision made by the system. The guidelines posit that “the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”

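As a rough illustration of how these degrees of oversight might be operationalized, consider the Python sketch below. The `Oversight` enum, the `route_decision` function and the 0.90 confidence threshold are all hypothetical, not drawn from the guidelines; the point is only that the oversight mode determines when a human must be brought into the decision cycle.

```python
from enum import Enum

class Oversight(Enum):
    HITL = "human-in-the-loop"   # a person approves every decision cycle
    HOTL = "human-on-the-loop"   # a person monitors and may intervene
    HIC = "human-in-command"     # a person governs whether and how the AI is used

def route_decision(prediction, confidence, mode, review_queue):
    """Dispatch an AI prediction according to the oversight mode in force."""
    if mode is Oversight.HITL:
        # Every decision waits for explicit human sign-off.
        review_queue.append(prediction)
        return None  # pending human approval
    if mode is Oversight.HOTL and confidence < 0.90:
        # The system acts autonomously, but low-confidence cases are
        # flagged so the monitoring human can step in.
        review_queue.append(prediction)
    # Under HIC, the human decides out of band whether the AI runs at
    # all and retains the power to override any result after the fact.
    return prediction
```
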
The guidelines appear to favor the more comprehensive HIC model of oversight because it accounts for factors beyond those immediately connected to the task at hand. HIC is also the most flexible model. But where direct human oversight is limited, the guidelines implicitly call for stronger governance and stricter testing.

B. Ensuring technical robustness and safety

The guidelines recommend a four-pronged approach to technical robustness and safety:

  1. Resilience to attack and security. This means protecting the data, the model and the underlying infrastructure against hacking.
  2. Fallback plan and general safety. Safety measures should be commensurate with the risk posed by the AI and the AI’s capabilities. A proper fallback either switches the system from a statistical to a rule-based procedure or requires human intervention before the system continues its action (see the sketch after this list).
  3. Accuracy. Here, too, accuracy should be commensurate with the importance of the decisions in which the AI is involved. While recognizing that perfect accuracy might be unattainable, the guidelines recommend identifying likely error rates and implementing appropriate safeguards.
  4. Reliability and reproducibility. The AI must respond to inputs in a predictable way in order to prevent unintended harm.
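
A minimal sketch of how requirements 2 and 3 might interact in practice: when the statistical model’s confidence falls below an illustrative floor, the system falls back to a deterministic rule-based procedure instead of acting on an unreliable prediction. The `model.predict` interface, the `rule_based_fallback` helper and the 0.8 floor are assumptions for illustration only.

```python
def classify_with_fallback(features, model, rule_based_fallback, confidence_floor=0.8):
    """Return a decision plus the procedure that produced it.

    Falls back from the statistical model to a deterministic rule-based
    procedure whenever the model's confidence drops below the floor.
    """
    label, confidence = model.predict(features)  # hypothetical model interface
    if confidence >= confidence_floor:
        return label, "statistical"
    # Safeguard commensurate with risk: below the floor, switch to rules
    # (a human-intervention queue would be an equally valid fallback).
    return rule_based_fallback(features), "rule-based"
```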

These technical requirements further highlight that AI, although complex, should not be perceived as mysterious or unmanageable. Its risks are predictable, measurable and manageable, and proper safety measures are just as important here as in all other aspects of a business’s operations.

C. Maintaining privacy and data governance

The guidelines approach governance from the perspective of maintaining privacy and data protection, as well as the quality and integrity of data. Access to data should follow well-defined protocols that determine who may access it and under what circumstances.
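
A well-defined access protocol of this kind can be as simple as an explicit table of who may read what, and for what declared purpose. The sketch below is a hypothetical illustration; the roles, data categories and purposes are invented for the example.

```python
# Hypothetical protocol: which roles may access which data categories,
# and under what circumstances (here, a declared purpose).
ACCESS_PROTOCOL = {
    ("data_scientist", "training_data"): "model development",
    ("auditor", "decision_logs"): "compliance review",
}

def may_access(role: str, category: str, purpose: str) -> bool:
    """Grant access only when the (role, category) pair appears in the
    protocol and the stated purpose matches the one on record."""
    return ACCESS_PROTOCOL.get((role, category)) == purpose
```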

As we recently discussed, proper governance can mitigate more risks than those immediately connected to privacy and data protection. One such risk is cybersecurity: when regulators review registrants’ files for compliance with cybersecurity requirements, they look for policies and procedures that ensure those requirements are met. In other words, the process is just as important as the results.

D. Promoting transparency and accountability through governance

Although the EU’s guidelines on AI aren’t compulsory, they are written in a style that emphasizes principles-based regulation, where those principles focus on process. The guidelines list transparency and accountability as distinct requirements, but these are fundamentally process-oriented requirements because they can be achieved only through good governance.

For transparency, the guidelines recommend identifying the datasets and processes that lead to the AI’s decision or, as applicable, the reasons why an AI decision was erroneous. This is related to the principle of “explainability,” which pertains to describing both the technical processes of the AI system and the related human decisions.

As for accountability, the guidelines specify that this requirement complements the others and “is closely linked to the principle of fairness.” While this principle does not require full disclosure of intellectual property, it does require an assessment of algorithms as well as data and design processes, along with an evaluation of those elements by internal and external auditors. This means that companies using AI systems should be prepared to explain to regulators (and data owners) in some detail how the AI works: what specific data is input, how those inputs are processed and analyzed by the software, and what checks and audits exist on the system, in terms of both computerized and human oversight.
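
One way to make such explanations possible is to record, for every AI decision, the inputs, the model version that produced it and any human review, in an append-only log that auditors can inspect. The sketch below is a hypothetical illustration; the field names and log format are not prescribed by the guidelines.

```python
import datetime
import json

def record_decision(inputs, model_version, output, human_reviewer=None):
    """Append one AI decision to an auditable, append-only log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,                  # what specific data was input
        "model_version": model_version,    # which process produced the decision
        "output": output,
        "human_reviewer": human_reviewer,  # human oversight, if any
    }
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```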

Ethical principles make AI decisions more ‘human’

The guidelines’ other requirements provide overarching ethical principles to guide both decision-making and oversight, introducing the “human” aspect of decisions aided by AI. AI decisions should not discriminate, should not be biased and should support the well-being of the data population.

Users should ensure that the AI datasets aren’t biased against certain groups or people, because this could potentially exacerbate prejudice and marginalization — key concerns seemingly underpinning these EU guidelines.
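
A first-pass check of this kind can be as simple as comparing the rate of favorable outcomes across groups in the training data. The following sketch is illustrative only; the record keys are assumptions, and a material gap between groups is a signal to investigate data collection and labeling, not a complete fairness audit.

```python
from collections import Counter

def favorable_rate_by_group(records, group_key="group", label_key="label"):
    """Compare the rate of favorable outcomes (label == 1) across groups."""
    totals, favorable = Counter(), Counter()
    for record in records:
        totals[record[group_key]] += 1
        favorable[record[group_key]] += record[label_key] == 1
    return {group: favorable[group] / totals[group] for group in totals}

# A favorable-outcome rate of, say, 0.83 for one group and 0.42 for
# another is a prompt to review how the data was collected and labeled.
```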

AI should be designed with accessibility in mind, particularly when being used directly by a consumer or an individual with a nontechnical background. The AI should be easy to use regardless of a user’s age, gender, abilities or other characteristics. And it should, at a minimum, be understandable by all constituents. To that end, the guidelines recommend consulting key stakeholders and users for their input in product design.

Finally, the guidelines recommend designing AI so it is sustainable and environmentally friendly, with a beneficial social impact.

AI: a mystery no more

The guidelines presage increasing regulatory interest in the impact of artificial intelligence both on security and societal well-being. This increasing scrutiny can work to demystify the technology, however complex, and assist in anticipating and controlling errors and software misuse.

The use of the word “requirement” in the guidelines suggests that, as the technology develops and becomes better understood by both users and regulators, the guidelines may well become more detailed and prescriptive, and may even be codified. Companies whose AI tools take data about EU residents as inputs would be well served to focus on these guidelines now and move toward the transparency, human oversight and ethical practices they encourage.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.