AI technology is to be developed in a way that places people at its centre.

Artificial Intelligence refers to a scientific and technological discipline which aims to display and mimic intelligent human behaviour. This is done through algorithms which perceive the environment in which they are immersed, collect and interpret data derived from that environment, reason on the best course of action and then act accordingly, with the aim of producing rational, high-performing results.

AI can benefit society in a number of ways, from automated vehicles, smart weather forecasting and natural disaster prediction, to healthcare, where algorithms may be used to process information, understand the risk factors behind certain diseases and identify patterns more efficiently than humans can.

The High-Level Expert Group on Artificial Intelligence (HLEG), an independent group set up by the European Commission in June 2018 as part of the EC's AI strategy, issued guidelines on April 8, 2019 with the aim of promoting trustworthy AI.

These are intended to offer guidance which can be applied on a voluntary basis by AI developers, organisations which use AI within their business processes, and users; they are not intended to be binding measures.

Trustworthy AI is defined in terms of three main components which should be met throughout the system's life cycle: it should be lawful, complying with all applicable rules and regulations; it should be ethical, upholding ethical principles and values; and it should be robust, able to avoid or mitigate risk and unintentional harm.

The guidelines further identify seven key requirements which AI systems should satisfy in order to be trustworthy.

AI systems should support human autonomy and decision-making in a manner which respects and upholds fundamental human rights. Where an AI system can negatively affect fundamental rights, a fundamental rights impact assessment should be carried out prior to the system's development and deployment, including an evaluation of how the identified risks can be mitigated. Users of AI systems should be provided with the knowledge and understanding necessary to comprehend and interact with them. Central to this principle is the user's right not to be subject to a decision based solely on automated processing where the results produced by the system may give rise to legal effects on the user.

An appropriate level of human oversight and control should be implemented to ensure that an AI system does not undermine human autonomy, and to establish suitable levels of human involvement in the system's decisions.

An essential principle towards achieving trustworthy AI is ensuring that the algorithms used within AI systems are technically robust and developed so that they reliably behave as intended. AI software should also incorporate measures to protect it against cyber-attacks and vulnerabilities, so as to avoid systems behaving in unintended ways or being shut down altogether.

The principle of prevention of harm is also closely linked to the principles of privacy protection and data governance. An AI system must guarantee the privacy and data protection of its users throughout its existence; this is necessary for individuals to trust that the data being fed into the system will not be used to harm or discriminate against them.

The processes and procedures relating to how data is collected, stored and used within the AI system, together with explanations of the algorithms used in the decision-making process, should be documented to allow for traceability and transparency, in turn facilitating the auditability of decisions made by the AI system.

AI systems should be developed in a manner which allows all people to use the AI products or services, regardless of age, capabilities or characteristics.

Sustainability and ecological responsibility towards the environment should also be considered throughout the AI system's life cycle. Its development, deployment and use should be assessed with this principle in mind, paying particular attention to resource use and energy consumption.

Finally, mechanisms should be put in place to ensure accountability and responsibility for an AI system and its outcomes, both before and after deployment.

AI technology is to be developed in a way that places people at its centre, while ensuring that AI systems adhere to applicable laws and regulations. AI systems should be implemented in a manner which does not bring about unintended harm but rather empowers citizens. Guaranteeing respect for fundamental rights and values will facilitate acceptance by the public.

Malta has recognised the potential of AI systems, both commercially and for society. In March 2019, the Government launched a national AI strategy which set out its vision for AI adoption and established an AI taskforce, composed of experts in the field, with the objective of developing use cases of how AI can be deployed in Malta and applied by the Government to provide more efficient services to the public. In doing so, Malta has the potential to build leadership in the adoption of innovative AI use cases.

This article was first published in the Sunday Times of Malta (Tech & Sigma Supplement), 26 May 2019.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.