FINMA's Supervisory Expectations Regarding the Use of Artificial Intelligence by Supervised Entities

MME

Contributor


FINMA's Annual Report 2023 and Risk Monitor 2023 set out its expectations for the use of AI in the financial markets, highlighting the challenges it sees and providing initial guidance on how to address them.

The increasing use of artificial intelligence (AI) in the financial markets has not gone unnoticed by FINMA, which stated in its Annual Report 2023 that it has set out its expectations in this regard with reference to its Risk Monitor 2023. In addition, FINMA stated that it has started to conduct on-site inspections and supervisory exchanges with financial institutions using AI. Overall, the challenges posed by AI are not considered to be fundamentally new but can be addressed within the existing frameworks.

Executive Summary

FINMA has identified four areas in which the use of AI can pose challenges: governance and responsibility, robustness and reliability, transparency and explicability, and non-discrimination. Each challenge has its own implications within the AI lifecycle for financial institutions that decide to implement AI in their processes. Attention should be paid to implementing sufficient risk management processes, depending on the intended use case and the associated risks, to issuing appropriate internal guidelines, and to providing appropriate information about the AI solutions used in the client documentation.

This means that although the use of AI is on the rise, it is not (yet) possible to use it for fully autonomous decision-making. However, while human involvement remains necessary to ensure sufficient oversight and accountability, AI-based solutions will likely lead to a substantial increase in efficiency, scalability and productivity, as the humans involved can focus more on the supervisory function (e.g., diligently assessing investment advice received from an AI solution).

Oversight by the accountable person requires in-depth knowledge of the respective matter in order to be able to critically assess and challenge AI-based results.

As only time will tell which AI tool is best suited to which task, the choice of tools and the area of application will be crucial in determining the role of AI in the financial industry.

Challenges for the Use of AI in Financial Markets

In its Risk Monitor 2023, FINMA identified fundamental challenges for the use of AI in the financial industry in four areas: firstly, governance and responsibility; secondly, robustness and reliability; thirdly, transparency and explicability; and finally, non-discrimination.

1. Governance and Responsibility

When decisions are based on the results of AI applications, there is an increased risk of errors going unnoticed and of a lack of accountability and responsibility. AI models often present their results with such apparent confidence that they create a false sense of security. If decisions are made on the basis of such results without questioning their validity, errors may go undetected. When AI is used in complex, organisation-wide decision-making processes, there is the additional risk that no one feels responsible or is held accountable.

2. Robustness and Reliability

Risks in the use of AI can also arise from poor-quality training data, incorrect optimisation and IT security vulnerabilities, all of which can lead to unreliable or even incorrect results. When AI models are trained, they generalise the statistical information contained in the training data. To avoid unreliable results, the training data must be representative, without significant outliers, and complete, or in other words, of "good quality". Good-quality data is therefore a prerequisite for accurate results.
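By way of illustration only (this is not drawn from FINMA guidance), the following minimal Python sketch shows the kind of pre-training data quality checks such an expectation could translate into in practice; the dataset, the "client_segment" column and the thresholds are hypothetical choices.

```python
# Illustrative pre-training data quality checks (hypothetical columns and thresholds).
import pandas as pd

def check_training_data(df: pd.DataFrame) -> dict:
    """Run simple completeness, outlier and representativeness checks."""
    report = {}

    # Completeness: share of missing values per column.
    report["missing_share"] = df.isna().mean().to_dict()

    # Outliers: count numeric values outside 1.5 * IQR per column.
    outliers = {}
    for col in df.select_dtypes("number").columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        outliers[col] = int(mask.sum())
    report["outlier_count"] = outliers

    # Representativeness: distribution of a (hypothetical) client segment column.
    if "client_segment" in df.columns:
        report["segment_share"] = (
            df["client_segment"].value_counts(normalize=True).to_dict()
        )

    return report
```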

Moreover, there is the danger that AI models become less accurate over time due to mistakes in the self-optimisation process. In addition, the use of large amounts of data increases IT security risks where it leads to increased outsourcing and cloud use.
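As a hedged illustration of how such gradual degradation might be detected, the sketch below compares recent accuracy against a validation baseline and flags the model for review; the metric, function names and tolerance are hypothetical, not values prescribed by FINMA.

```python
# Illustrative accuracy-drift check: compare recent performance with a
# validation baseline and flag the model for review when it degrades.
from statistics import mean

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Share of predictions that match the observed labels."""
    return mean(1.0 if p == y else 0.0 for p, y in zip(predictions, labels))

def drift_alert(recent_preds: list[int], recent_labels: list[int],
                baseline_accuracy: float, tolerance: float = 0.05) -> bool:
    """Return True when recent accuracy falls materially below the baseline."""
    return accuracy(recent_preds, recent_labels) < baseline_accuracy - tolerance
```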

3. Transparency and Explicability

In AI applications, it is not possible to isolate the impact of individual parameters on the model's output, making it difficult to explain or verify its results. AI models generalise the statistical information in data sets and calculate the most likely value. As such, their output is based on probabilities rather than causal relationships. With complex models, it becomes increasingly difficult to explain, reproduce or control the output. In addition, clients are unable to fully assess the risks of AI if they are not adequately informed about its use.

4. Non-discrimination

The use of AI applications to make predictions about individuals can lead to unintended and unjustified discrimination if the predictions are based on unrepresentative personal data. This risk is particularly acute for groups of people who have historically been underrepresented in such data, who may face unintended bias and discrimination as a result.
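Purely as an illustrative sketch (the group labels, data structure and the 0.8 ratio are hypothetical and not derived from FINMA guidance), a simple check for unequal outcomes across client groups could look like this:

```python
# Illustrative check for unequal outcomes across client groups: compares
# approval rates per group against the overall rate and flags large gaps.
from collections import defaultdict

def approval_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, approved) pairs; returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(records: list[tuple[str, bool]],
                     min_ratio: float = 0.8) -> list[str]:
    """Flag groups whose approval rate is well below the overall rate."""
    rates = approval_rates(records)
    overall = sum(ok for _, ok in records) / len(records)
    return [g for g, r in rates.items() if overall > 0 and r / overall < min_ratio]
```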

FINMA's Supervisory Expectations Applied to the Use Cycle of AI Models

The use of AI models in the financial industry can be divided into three phases, each with its own challenges that need to be addressed. First, the AI model has to be developed according to the needs of the financial institution. The second stage is to deploy it, and the third stage is to address any issues that arise during deployment, remedy them where possible and adjust the model accordingly. Each stage has different implications for addressing the four previously identified challenges.

1. Development

The first step is to develop the AI model according to its use case in the organisation. This includes defining the scope and objective of the model, designing it accordingly, training it and finally testing it. From a regulatory perspective, this phase should define the responsible parties and the risk management processes, which must always take into account the specific environment and use case of the model and the associated risks. Reference can be made, for example, to the abstract requirements laid down in FINMA's Circular 2023/1 on operational risks and resilience for banks.

The board of directors must regularly approve strategies on how to deal with AI. Internally recognised standards, competencies and responsibilities must be defined. The development and test environments of the AI models must be separated from the rest of the IT infrastructure and must consider the underlying responsibilities. The AI models themselves must be tested and validated according to the risk associated with their use case. The design of the model should consider mandatory legal requirements to ensure that it does not violate any domestic or, where applicable, foreign legislation. If external infrastructure is used, such as cloud services, the increased risk associated therewith must be addressed with appropriate measures. Overall, this first stage involves a forward-looking, ex-ante consideration of potential risks and the definition of appropriate processes to address them.

2. Deployment and Monitoring

In the second stage, the developed AI model is deployed. Once deployed, the model needs to be continuously monitored and evaluated according to the risk management processes defined in the development phase. Attention should be paid to the accuracy, robustness and reliability of the results. Systematic collection and analysis of the model's output should be put in place so that results can be tracked and errors corrected. The responsible person must actively review the results and not just "rubber stamp" any output; this is a prerequisite if the AI model is to be used in an automated decision-making process. When reviewing the results, the responsible person must ensure that they are sufficiently explainable. As such, the AI model can never make a decision on its own but can only support a decision-making process. This is important because AI models are based on probabilities, not causality, whereas decisions should be explainable on the basis of causal reasoning.
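To make the human-in-the-loop requirement concrete, the following is a minimal sketch, assuming a hypothetical setup in which every AI recommendation is logged and must be actively reviewed before it can feed into a decision; the class and function names (Recommendation, review_and_decide) are illustrative only.

```python
# Illustrative human-in-the-loop gate: AI output is logged and must be
# explicitly approved or rejected by a human reviewer before it takes effect.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_monitoring")

@dataclass
class Recommendation:
    client_id: str
    proposal: str
    rationale: str            # explanation recorded so the result stays reviewable
    reviewed: bool = False
    approved: bool = False

def review_and_decide(rec: Recommendation, reviewer_approves: bool) -> Recommendation:
    """A human reviewer actively approves or rejects each recommendation."""
    rec.reviewed = True
    rec.approved = reviewer_approves
    log.info("client=%s proposal=%s reviewed=%s approved=%s",
             rec.client_id, rec.proposal, rec.reviewed, rec.approved)
    return rec
```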

During deployment, care should be taken to inform clients about the use of AI. As AI models are often very complex, providing the right information can be a challenge. On the one hand, the information needs to be detailed enough for the use of the model to be understood. On the other hand, clients often lack knowledge about how AI works and may be overwhelmed by too much detail. A balance must therefore be struck between providing enough detail and not defeating the purpose of the information. It is not only external transparency that matters, however, but also internal transparency about the use, processes, responsibilities, risks and limitations within the financial institution.

3. Adjustment and Remedies

The third stage involves continuous adjustment of the model according to the feedback collected during its deployment. This stage involves a backward-looking, ex-post perspective to remedy risks that have materialized. Remedies can be either internal to the system, by modifying the model, or external, if damage has already occurred and internal adjustment is not sufficient.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

