Unveiling The Influence Of AI On Financial Services

MK Fintech Partners

Contributor

MK Fintech Partners Ltd. is affiliated with the prestigious Michael Kyprianou Group, a leading international legal and advisory entity. Renowned for its diverse legal services, the group has become one of Cyprus' largest law firms, with offices in Nicosia, Limassol, Malta, Ukraine, the United Arab Emirates, and the UK.

The relentless influence of Artificial Intelligence (AI) on financial services has changed the sector profoundly. Although still in its infancy, AI has rapidly embedded itself in the core operations of financial services, to the point of no return. Growing AI adoption in areas such as asset management, algorithmic trading, credit underwriting, and blockchain-based financial services is facilitated by the abundance of available data and improved computing capacity. Nevertheless, the heavy reliance on AI in the financial sector has inadvertently created several risks that require careful consideration.

EU Legislative Instruments: Dealing with AI


AI technologies have prevailed across various industries; that much is clear. Legislatively, however, as is often the case, we are still playing catch-up. The European Union (EU) has nonetheless been quick to recognise the need for a comprehensive regulatory framework, and to act upon it. We are now legislatively addressing AI's potential whilst also mitigating its risks.

THE PRODUCT LIABILITY DIRECTIVE AND THE AI LIABILITY DIRECTIVE

The European Commission (EC) produced an evaluation report in 2018, which made apparent that the EU Product Liability Directive (85/374/EEC) (PLD) inadequately addressed whether AI software could qualify as a product, together with issues pertaining to potential human harm arising from specific aspects of AI technology. For this reason, in September 2022, the Commission published two proposals for liability rules: the AI Liability Directive (AILD) and a revised PLD, designed to complement each other.

THE PRODUCT LIABILITY DIRECTIVE (COM/2022/495)

The revised PLD brings significant changes to the existing product liability framework by expanding its scope. The new legislation incorporates new criteria for assessing product defects and introduces provisions concerning presumptions of defectiveness and causation. It primarily focuses on strict liability, encompassing physical goods as well as software, which includes AI. As such, it imposes liability on manufacturers and other entities operating within the supply chain, such as remanufacturers and businesses that significantly alter products, provided that certain conditions are satisfied. The PLD further extends its scope to defective products causing physical injury, property damage, and data loss.

AI LIABILITY DIRECTIVE (COM/2022/496)

The AILD pertains to non-contractual civil law claims for damage resulting from an AI system under fault-based liability frameworks. The AILD lays down common rules for the disclosure of evidence on high-risk AI systems, enabling claimants to substantiate fault-based civil law claims for damages. Such evidence may include information about the AI's functioning, its decision-making logic, the classification or grouping of affected individuals, or its decision-making process itself. The AILD's scope encompasses manufacturers, as well as other professional (providers) and non-professional (consumers) users. It also aims to address potential violations of fundamental rights and primary financial losses.

The overarching objectives of the AILD include harmonising legal frameworks to reduce uncertainties, bridging liability gaps between AI system providers and users, and streamlining the compensation process for injured parties. To achieve these goals, the proposed AILD seeks to ease the burden of proof for claimants and address information gaps by granting them access to information about high-risk AI systems to demonstrate faults.

THE AI ACT

The AI Act (COM 2021/206) forms part of the EU Digital Strategy. Initially proposed by the EC in April 2021, it was passed on March 13, 2024, with publication expected in May 2024. The legislation is the world's first comprehensive legal framework for AI, tailored to address the ethical and regulatory challenges stemming from the widespread adoption of AI technologies. With a keen awareness of ethical and societal considerations, the AI Act aims to establish a robust regulatory framework that promotes innovation, safeguards the ethical dimensions of AI-driven systems, and ensures effective enforcement mechanisms.

The new legislation's cornerstone lies in its risk-based approach, recognising that not all AI systems are equal in terms of their societal impact. As such, the AI Act provides a classification system dividing AI systems into three categories: (i) minimal risk, (ii) limited risk, and (iii) high and unacceptable risk. Among the AI systems categorised as posing unacceptable risk are those employing subliminal or intentionally manipulative techniques. Notably, the AI Act extends its regulatory reach to include the financial sector within the ambit of high-risk systems, subjecting such systems to more stringent deployment requirements.

Systems falling under the scope of limited risk are required to comply with minimal transparency requirements, allowing users to make informed decisions. Meanwhile, AI applications categorised as minimal risk, such as spam filters or AI-enhanced video games, are proposed to be regulated primarily through voluntary codes of conduct, as outlined by the Commission.

THE DIGITAL MARKETS ACT AND THE DIGITAL SERVICES ACT

Both the Digital Markets Act (Regulation (EU) 2022/1925) (DMA) and the Digital Services Act (Regulation (EU) 2022/2065) (DSA) are integral components of the EU Digital Services package, which introduced significant alterations to the regulation of online platforms. Although not specifically referring to AI, the DSA and the DMA play an important role in its regulation.

THE DIGITAL SERVICES ACT

The algorithmic and accountability requirements outlined in the DSA complement other EU AI regulatory initiatives, such as the AI Act and the AI Liability Directive. The Recitals of the AI Act denote that compliance with the AI Act shall enable very large online platforms to comply with their broader risk assessment and mitigation obligations under Articles 34 and 35 of the DSA. Additionally, the Recitals of the AI Act suggest that authorities designated under the DSA would serve as enforcement bodies for implementing the AI system provisions established within the AI Act.

THE DIGITAL MARKETS ACT

The DMA provides comprehensive guidelines on how gatekeepers and their competitors should engage with AI. Specifically, the DMA sets out new rules for fair ranking, which forms the cornerstone of gatekeepers' AI-driven business models. The DMA is particularly concerned with gatekeepers leveraging data from one area of activity in another, especially when these same undertakings exercise control over entire ecosystems that are structurally challenging for existing or new market operators to contest, as noted in Recital 3 of the DMA. Articles 5(2), 6(2), and 6(10) of the DMA are precisely tailored to address these concerns.

Furthermore, the DMA explicitly restricts gatekeepers' use of data, including its collection and use for AI training purposes. It also establishes access rights for business users, enabling them to develop high-performance AI models themselves. It is further worth noting that the DMA imposes information obligations to mitigate information imbalances between gatekeepers and their business users, particularly in the realm of advertising.

How is AI utilised in the financial sector?

ENHANCING CUSTOMER EXPERIENCE

The introduction of AI in the banking sector has undoubtedly enhanced customer experience. AI-powered chatbots and virtual assistants are at the forefront of this transformation, providing instant responses to queries and streamlining issue resolution.

AI also enables financial institutions to analyse customer data, client preferences and satisfaction, and user behaviour faster, and to offer personalised services and products that meet consumer needs more accurately. AI algorithms can also proactively offer relevant services or products by analysing transaction histories, spending patterns, and life events.

Credit scoring tools, based on Machine Learning (ML), accelerate lending decisions while reducing risk. AI allows for an analysis of qualitative factors such as spending behaviour and payment willingness, facilitating faster borrower segmentation. It is worth noting that AI-based models and big data are increasingly being used by banks and Financial Technology (FinTech) companies to assess the creditworthiness of prospective borrowers and make underwriting decisions. This approach broadens access to credit by evaluating repayment capacity and extends benefits to underserved borrowers without a traditional credit history.
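The kind of ML-based credit scoring described above can be sketched in a few lines. This is an illustrative toy model on synthetic data: the features (spending ratio, on-time payment rate), the label-generating rule, and all thresholds are hypothetical, not drawn from any real lender's underwriting practice.

```python
# Illustrative sketch: ML-based credit scoring on synthetic applicant data.
# Features, labels, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
# Hypothetical qualitative-style features: spending behaviour and payment discipline
spending_ratio = rng.uniform(0.1, 1.0, n)   # share of income spent monthly
on_time_rate = rng.uniform(0.0, 1.0, n)     # fraction of bills paid on time
X = np.column_stack([spending_ratio, on_time_rate])

# Synthetic ground truth: default is likelier with heavy spending and poor payment habits
default = (0.8 * spending_ratio - 0.9 * on_time_rate + rng.normal(0, 0.1, n)) > 0

model = LogisticRegression().fit(X, default)

# Score a new applicant: modest spending, strong payment record -> low default risk
prob_default = model.predict_proba([[0.2, 0.95]])[0, 1]
print(f"Estimated default probability: {prob_default:.2f}")
```

In practice such models segment borrowers by predicted repayment capacity rather than by credit history alone, which is what allows underwriting for applicants without a traditional credit file.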

FRAUD DETECTION AND SECURITY

AI greatly enhances security measures within financial institutions by swiftly detecting patterns of suspicious activity through real-time data analysis. In risk management, AI algorithms play a pivotal role by promptly responding to security breaches, safeguarding both customer assets and the institution's integrity, while also evaluating and mitigating various risks, ranging from credit defaults to market volatility. These algorithms surpass the limitations of traditional models reliant on historical data and predetermined rules.

Likewise, biometric authentication methods (e.g. facial recognition and fingerprint scanning) mitigate risks associated with unauthorised access and identity theft. The timely detection of fraud is essential in combatting the growing problem of payment card fraud, covering activities such as identity verification, anti-money laundering (AML), countering the financing of terrorism (CFT), and fraud detection. AI systems proactively address emerging security risks by analysing historical data and identifying anomalies. In turn, this bolsters customer trust, enhances efficiency, and enables institutions to better assess risks whilst maintaining a competitive edge.
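The anomaly detection mentioned above can be illustrated with an isolation forest, one common technique for flagging suspicious transactions. The transaction amounts, fraud pattern, and contamination rate below are synthetic and hypothetical; a production system would use many more features than amount alone.

```python
# Illustrative sketch: flagging anomalous transactions with an Isolation Forest.
# All amounts and the contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Typical card transactions cluster around small amounts...
normal = rng.normal(loc=50, scale=15, size=(980, 1))
# ...while a handful of fraudulent ones sit far outside that range.
fraud = rng.uniform(low=2000, high=5000, size=(20, 1))
amounts = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.02, random_state=0).fit(amounts)
labels = detector.predict(amounts)  # -1 = anomaly, 1 = normal

flagged = amounts[labels == -1]
print(f"Flagged {len(flagged)} transactions for review")
```

Unlike rule-based systems built on predetermined thresholds, such models learn what "normal" looks like from the data itself, which is why they can surface novel fraud patterns in real time.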

OPERATIONAL EFFICIENCY, CAPITAL, AND MARKET IMPACT

AI enables financial institutions to operate with flexibility and responsiveness in today's dynamic and competitive environment. Because AI is built on mathematical optimisation, it also accelerates capital optimisation. In addition, AI systems are extensively used in the financial sector to forecast macroeconomic and financial variables, monitor business conditions, meet customer needs, and ensure payment capacity. In terms of portfolio management, AI could also be used to identify new signals on price movements and optimise data utilisation, surpassing current models.

REGULATORY COMPLIANCE

AI's impact on regulatory compliance is most evident in automating compliance checks and monitoring transactions for suspicious activities. Algorithms analyse enormous datasets in real time to detect patterns indicative of potential compliance breaches. This helps ensure systems stay up to date with the latest legal requirements, while also reducing the risk of penalties and reputational damage.

Moreover, AI-powered automation tools compile and organise data, ensuring accurate reports are submitted on time. This helps reduce the administrative burden on compliance teams and minimises errors arising from manual entry.
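A minimal sketch of such an automated compliance check is a transaction screen that flags both over-threshold transfers and "structuring" (splitting a large sum into several sub-threshold transfers). The €10,000 figure mirrors common AML reporting thresholds, but the exact rules, window, and alert format here are assumptions for illustration.

```python
# Illustrative sketch: rule-based transaction screening for compliance monitoring.
# The threshold and structuring rule are hypothetical simplifications.
from collections import defaultdict

REPORTING_THRESHOLD = 10_000  # single-transaction reporting trigger (assumed)

def screen(transactions):
    """Flag transactions at or above the threshold, plus accounts whose
    total of sub-threshold transfers exceeds it (possible structuring)."""
    alerts = []
    sub_threshold_totals = defaultdict(float)
    for account, amount in transactions:
        if amount >= REPORTING_THRESHOLD:
            alerts.append((account, "over_threshold", amount))
        else:
            sub_threshold_totals[account] += amount
    for account, total in sub_threshold_totals.items():
        if total >= REPORTING_THRESHOLD:
            alerts.append((account, "possible_structuring", total))
    return alerts

# Account B splits 18,500 into two transfers just under the threshold
txns = [("A", 12_500), ("B", 9_500), ("B", 9_000), ("C", 300)]
print(screen(txns))
```

Real monitoring systems layer ML-based anomaly scores on top of deterministic rules like these, but the rule layer is what keeps reports auditable for regulators.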

ALGORITHMIC TRADING

AI finds multifaceted applications in trading, offering both trading strategy suggestions and powering automated systems for predictive analysis and autonomous trade execution. These AI-enabled systems effectively manage risk and streamline order flow among brokers, enhancing operational efficiency. ML can serve as a basis for risk modelling, assisting in the identification of high-risk trading accounts that may warrant intervention.
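The risk-modelling use case above, identifying trading accounts that may warrant intervention, can be sketched as a simple outlier screen on realised P&L volatility. The account data, volatility metric, and z-score cutoff are all hypothetical choices for illustration.

```python
# Illustrative sketch: scoring trading accounts by P&L volatility and
# flagging outliers for review. The cutoff of 1.5 is hypothetical.
import numpy as np

rng = np.random.default_rng(7)
# Daily P&L series for five hypothetical accounts; "acct_4" trades far more aggressively.
pnl = {f"acct_{i}": rng.normal(0, 100, 250) for i in range(4)}
pnl["acct_4"] = rng.normal(0, 1500, 250)

vols = {name: series.std() for name, series in pnl.items()}
mean_vol = np.mean(list(vols.values()))
std_vol = np.std(list(vols.values()))

# Flag accounts whose volatility sits far above the cross-account average.
high_risk = [name for name, v in vols.items() if (v - mean_vol) / std_vol > 1.5]
print(high_risk)
```

A production risk model would feed richer features (leverage, drawdown, order patterns) into a trained classifier, but the workflow is the same: score every account, then route the outliers to human review.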

INTEGRATION OF AI IN BLOCKCHAIN-BASED FINANCIAL PRODUCTS

AI can be deployed in blockchain technologies to mitigate security vulnerabilities and protect the network. This can aid users in identifying irregular activities potentially linked to theft or scams.

Risks Associated with the use of AI in the Financial Industry


AI systems in financial services raise several concerns, including data privacy and bias issues arising from incomplete or unrepresentative data, which could lead to unethical practices and financial exclusion. Although robust AI performance is crucial for maintaining public trust and financial stability, challenges persist; notable examples are the explainability of AI models and cybersecurity risks.

In fact, explainability in ML models is challenging due to their complexity, lack of interpretability, and susceptibility to manipulation. The absence of an explanation provided directly to the end-user is the main reason ML models are considered black boxes. It should also be noted that AI presents cybersecurity challenges: it can potentially generate phishing messages, impersonate individuals, and facilitate identity theft and fraud, while also being susceptible to data poisoning, jailbreaking, and input attacks.

Conclusion


While AI has brought unprecedented convenience and efficiency to financial services, we must also be wary of the risks it brings. These developments call for improved oversight and monitoring frameworks, and for active engagement with stakeholders to identify possible risks and take remedial regulatory action.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
