ARTICLE
29 August 2024

HKMA Enhanced Consumer Protection In The Use Of GenAI By Authorized Institutions In Hong Kong

Mayer Brown

The Hong Kong Monetary Authority (HKMA) has imposed additional principles on the use of big data analytics and artificial intelligence (BDAI) and specifically, generative artificial intelligence (GenAI) by authorized institutions.

These principles supplement the existing BDAI Guiding Principles1 (Existing BDAI Guiding Principles) and were set out in a Circular issued on 19 August 2024 to all authorized institutions.

The additional principles aim to ensure that authorized institutions put in place appropriate safeguards for consumer protection when they deploy GenAI in customer-facing applications.

Potential customer-facing applications include customer chatbots, customised product and service development and delivery, targeted sales and marketing, and robo-advisors in wealth management and insurance.

GenAI is a form of BDAI that can generate new content, such as text, images, audio, video, code or other media, based on vast amounts of data. Accordingly, the HKMA expects all authorized institutions to apply and extend the Existing BDAI Guiding Principles to the use of GenAI, and to continue to adopt a risk-based approach commensurate with the risks associated with using GenAI.

While GenAI shares similar risk dimensions with BDAI, its use of complex models gives rise to additional risks, including lack of explainability and hallucination, namely generating output that appears realistic but is factually incorrect, incomplete, missing important information or irrelevant to the context. These risks could have an even more significant impact on customers.

The HKMA recognises the potential of BDAI and GenAI for product-feature optimisation and customer segmentation down to the individual level, enabling authorized institutions to design and promote products that match specific products with specific customers.

Beyond realising business potential and opportunities, the HKMA also encourages authorized institutions to explore the use of BDAI and GenAI to enhance consumer protection. Examples given by the HKMA include identifying vulnerable customers, or other customers who may need more protection, information or explanation to better understand product features and risks; and sending fraud alerts to customers conducting transactions with potentially higher risks.

The Circular sets out the following additional principles under the four major areas of the Existing BDAI Guiding Principles:

1. Governance and Accountability

The board and senior management of authorized institutions should remain accountable for all GenAI-driven decisions and processes. They should thoroughly consider the potential impact of GenAI applications on customers through an appropriate committee under the authorized institution's governance, oversight and accountability framework.

They should ensure, among other things, that:

  1. The scope of customer-facing GenAI applications is clearly defined to avoid usage in unintended areas;
  2. Proper policies and procedures on the responsible use of GenAI in customer-facing applications are developed, and related control measures are implemented; and
  3. Proper validation of the GenAI models is put in place, in particular during the early stage of deploying customer-facing GenAI applications. Authorized institutions should also adopt a "human-in-the-loop" approach, with a human retaining control in the decision-making process to ensure the accuracy of model-generated output.

2. Fairness

Authorized institutions should ensure that GenAI models produce objective, consistent, ethical and fair outcomes for customers.

For example:

  1. The model-generated outputs should not lead to unfair bias or disadvantage against any customers or groups of customers. This can be achieved through different approaches, such as anonymising certain categories of data; deploying datasets that are comprehensive and fairly representative of the population; and making adjustments to remove bias during the validation and review process (e.g. by adopting a "human-in-the-loop" approach).
  2. During the early stage of deployment, customers are provided, as far as practicable, with the option to opt out of using GenAI and to request human intervention in GenAI-generated decisions. Where an "opt-out" option cannot be provided, authorized institutions should provide channels for customers to request a review of GenAI-generated decisions.

3. Transparency and Disclosure

Authorized institutions should provide an appropriate level of transparency to customers regarding their GenAI applications through proper, accurate and understandable disclosure.

To meet this requirement, they should disclose the use of GenAI to customers and, among other things, communicate the use and purpose of the GenAI models, as well as the limitations of such models, to enhance customer understanding.

4. Data Privacy and Protection

Authorized institutions should implement effective protection measures to safeguard customer data. In particular, if personal data are collected and processed by GenAI applications, authorized institutions should comply with the Personal Data (Privacy) Ordinance (Cap. 486).

They should also pay due regard to relevant recommendations and good practices issued by the Office of the Privacy Commissioner for Personal Data related to GenAI, including the "Guidance on the Ethical Development and Use of Artificial Intelligence" published on 18 August 20212 (PCPD Guidance Note), and the "Artificial Intelligence: Model Personal Data Protection Framework" published on 11 June 2024 (PCPD Model Framework)3.

The PCPD Guidance Note recommends three fundamental data stewardship values and seven ethical principles that organisations should consider when developing and using artificial intelligence, namely:

  1. Data Stewardship Values
    1. Be Respectful
    2. Be Beneficial
    3. Be Fair
  2. Ethical Principles
    1. Accountability
    2. Human Oversight
    3. Transparency and Interpretability
    4. Data Privacy
    5. Fairness
    6. Beneficial AI
    7. Reliability, Robustness and Security

The PCPD Model Framework is an extension of the PCPD Guidance Note, offering a comprehensive checklist for integrating artificial intelligence tools into operations. It reflects the expectations of the PCPD and outlines the investigative approach that would be taken in the event of a data breach linked to the use of artificial intelligence. Please see our Legal Update on the PCPD Model Framework.4

Link to the Circular:

Consumer Protection in respect of Use of Generative Artificial Intelligence (hkma.gov.hk)

Footnotes

1 See: Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions (hkma.gov.hk)

2 See: https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf

3 See: https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf

4 See: Hong Kong PCPD Issues Model Personal Data Protection AI Framework

Visit us at mayerbrown.com

Mayer Brown is a global services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown (a Hong Kong partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) and non-legal service providers, which provide consultancy services (collectively, the "Mayer Brown Practices"). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC ("PKWN") is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Details of the individual Mayer Brown Practices and PKWN can be found in the Legal Notices section of our website. "Mayer Brown" and the Mayer Brown logo are the trademarks of Mayer Brown.

© Copyright 2024. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.
