The Personal Data Protection Commission ("PDPC") has on 18 July 2023 published a consultation paper proposing new advisory guidelines to clarify how the Personal Data Protection Act 2012 ("PDPA") applies to the collection and use of personal data to develop and deploy artificial intelligence ("AI") systems, such as those used to make recommendations or decisions. This Client Update outlines the key points set out by the PDPC in the proposed advisory guidelines.

1. Introduction

On the back of the rapid development of the AI sector, the PDPC is consulting on proposed new advisory guidelines to address how the PDPA would apply where organisations use personal data in the development and deployment of their AI systems. The advisory guidelines would first address the data protection concerns arising at the initial development stage of AI systems, before turning to the subsequent stage where AI systems are already deployed to collect and use personal data. Finally, the advisory guidelines would also address the role of service providers that help organisations set up their own AI systems.

What follows below is an overview of the key elements of the new advisory guidelines.

2. Using personal data to develop, test and monitor AI systems

a. Business Improvement and Research Exception

Although organisations would typically seek express consent for the use of personal data, they may also rely on the Business Improvement[1] or Research[2] exceptions. Firstly, where the Business Improvement exception is relied upon to use personal data, the advisory guidelines would set out various purposes for which personal data can be used in the context of AI systems, such as:

  1. allowing for more relevant social media content to be offered to users;
  2. allowing for automatic assignment of jobs to platform workers;
  3. allowing for potential job candidates to be matched to corresponding job vacancies;
  4. testing AI systems to assess their accuracy;
  5. ensuring that AI systems are not biased against traits like race or religion;
  6. ensuring that privacy-enhancing measures do not compromise the accuracy of AI systems; or
  7. otherwise using AI systems to provide new product features and functionalities.

In assessing whether such purposes can be achieved without the use of personal data, and whether the use of personal data under the Business Improvement exception would be appropriate, the advisory guidelines would state that organisations should consider:

  1. whether the use of personal data contributes towards improving the effectiveness or quality of the AI systems and their output;
  2. whether it is technically possible and/or cost-effective to develop, test or monitor the AI systems without using personal data;
  3. common industry practices or standards on how to develop, test or monitor such AI systems; and/or
  4. whether such use will contribute to the effectiveness or improved quality of new product features and functionalities that help organisations innovate, improve competitiveness, become more efficient or effective, and enhance consumer choice, experience, and usability.

Secondly, where the Research exception is relied upon to use personal data, the advisory guidelines would state that organisations should consider:

  1. whether the AI system will improve understanding and development of science and engineering;
  2. whether the AI system can increase innovation in products or services to improve quality of life;
  3. whether the use of personal data helps develop more effective methods to improve the quality or performance of the AI system; and/or
  4. whether the use of personal data helps develop industry practices or standards for the development and deployment of AI systems.

Finally, where the Research exception is relied upon to disclose personal data, the advisory guidelines would state that organisations should assess whether it would be impracticable to seek the individual's express consent for disclosing personal data to another company for the purposes of conducting joint research and development of new AI systems.

b. Protection and Accountability Obligations

The advisory guidelines would also provide that organisations should adopt data minimisation measures as a matter of good practice. When developing AI systems, the personal data used should be limited to what is needed, for example by filtering datasets by time period, market segment or customer segment, so as to reduce the risk of data leakage (see the illustrative sketch below).
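
By way of illustration only, a minimal Python sketch of how such dataset filters might be applied is set out below. The file name, column names and thresholds are all invented for this example; it is a sketch of one possible approach, not a method prescribed by the advisory guidelines:

    import pandas as pd

    # Hypothetical raw dataset of customer interactions; the file and
    # column names below are invented for illustration only.
    raw = pd.read_csv("customer_interactions.csv")

    # Limit by time period: keep only the most recent twelve months.
    cutoff = pd.Timestamp.now() - pd.DateOffset(months=12)
    filtered = raw[pd.to_datetime(raw["event_date"]) >= cutoff]

    # Limit by customer segment: keep only the segment the model targets.
    filtered = filtered[filtered["segment"] == "retail"]

    # Limit by volume: sample only as many records as the task requires.
    filtered = filtered.sample(n=min(len(filtered), 50_000), random_state=42)

    # Limit by fields: drop direct identifiers not needed for training.
    filtered = filtered.drop(columns=["name", "email", "phone"])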

If possible, organisations should also anonymise personal data, especially where doing so would not unduly compromise the development of AI systems. In assessing whether their anonymisation measures are sufficiently robust against re-identification, organisations should consider the following (an illustrative sketch of common anonymisation techniques appears after this list):

  1. whether the chosen anonymisation method is reversible;
  2. the extent of disclosure of the dataset and its intended recipients (e.g., internal closed-group sharing vs. cross-company sharing);
  3. whether a motivated individual can likely find means to re-identify the anonymised dataset using either publicly available information or information the organisation already has in its possession; and
  4. the extent of controls the organisation has put in place, including within the AI system, to prevent re-identification of the anonymised data.
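
By way of illustration only, the following Python sketch shows two common anonymisation techniques that speak to the considerations above: keyed hashing (pseudonymisation) of direct identifiers, which is not reversible without the secret key, and generalisation of exact values into bands, which reduces linkage to publicly available information. The data fields and key handling are invented for this example:

    import hashlib
    import hmac

    # Secret key held separately from the dataset; destroying the key
    # makes the pseudonymisation below effectively irreversible.
    SECRET_KEY = b"replace-with-a-securely-stored-key"

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier with a keyed HMAC-SHA256 hash."""
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    def generalise_age(age: int) -> str:
        """Coarsen an exact age into a ten-year band."""
        low = (age // 10) * 10
        return f"{low}-{low + 9}"

    record = {"user_id": "alice@example.com", "age": 34, "viewing_hours": 12.5}
    anonymised = {
        "user_id": pseudonymise(record["user_id"]),
        "age_band": generalise_age(record["age"]),
        "viewing_hours": record["viewing_hours"],
    }
    print(anonymised)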

Where it is not possible to adopt the above measures, organisations would have to adhere to the Protection and Accountability Obligations[3] under the PDPA. In particular, they should put in place appropriate data security and protection measures, and update their policies on the use of personal data to develop AI systems, for instance by setting out the circumstances in which de-identified data, as opposed to raw, identifiable personal data, is to be used.

3. Collecting and using personal data in deployed AI systems

a. Consent and Notification Obligations

Turning now to the deployment of AI systems within products and services, the advisory guidelines would reiterate that organisations must be cognisant of the Consent and Notification Obligations[4] so that individuals can provide meaningful consent. To achieve this, the advisory guidelines would recommend that sufficient information be provided, including:

  1. the function of the product that requires collection and processing of personal data (e.g., movie recommendations);
  2. a general description of types of personal data that will be collected and processed (e.g., users' movie viewing history);
  3. an explanation of how the processing of personal data collected is relevant to the product feature (e.g., users' movie viewing history will be analysed to make movie recommendations); and
  4. specific types of personal data that are more likely to influence the product feature (e.g., whether the user finished the movie, or whether the user watched the movie multiple times).

The advisory guidelines would also suggest the use of notification pop-ups to display the most relevant information more prominently, while written policies (as discussed further below) can be used to provide further details. Where there is a need to provide less detail to protect proprietary information or the security of the AI system, organisations should also document the reasons for such decisions internally as a matter of good practice.

b. Accountability Obligation

The advisory guidelines would also emphasise the Accountability Obligation, reminding organisations that they must have in place written policies setting out, in sufficient detail and in a transparent manner, the practices and safeguards relating to their use of AI systems. Such practices and safeguards could include:

  1. measures taken to achieve fairness and reasonableness during development and testing, such as assessing for bias, ensuring the quality of training data, or ensuring that results are repeatable and reproducible;
  2. safeguards and technical measures taken to protect personal data, such as data minimisation or anonymisation before deployment, or steps taken to ensure the security of personal data and AI systems before and after deployment; or
  3. implementation of proper accountability or human oversight mechanisms, such as steps taken in the event that the AI system encounters adversarial or unexpected input.

As regards the level of detail expected, organisations should include such information to the extent that a reasonable person would consider appropriate, and where doing so would not compromise security, safety or commercial confidentiality.

4. Supporting organisations in implementing AI solutions

Finally, the advisory guidelines would also make clear that service providers who assist their customers in creating such AI systems would be considered data intermediaries under the PDPA. Such service providers should keep track of the data used as training data through data mapping and labelling, and should document how that training data has been transformed during data preparation, for instance by recording the lineage from source data to processed data (a minimal sketch of such a lineage record follows below).
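
By way of illustration only, a minimal Python sketch of what such a lineage record might look like is set out below. The field names and values are invented for this example and are not drawn from the advisory guidelines:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LineageRecord:
        """One entry documenting how a source dataset was transformed
        into training data (field names are illustrative only)."""
        source_dataset: str      # where the raw data came from
        transformation: str      # what was done during data preparation
        output_dataset: str      # the processed artefact used for training
        labels_applied: list[str] = field(default_factory=list)
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    lineage = [
        LineageRecord(
            source_dataset="crm_exports/2023-06.csv",
            transformation="filtered to retail segment; identifiers pseudonymised",
            output_dataset="training/retail_2023_06.parquet",
            labels_applied=["churn_risk"],
        ),
    ]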

Additionally, the service providers should help their customers comply with data protection obligations using their technical expertise. For instance, they could help to identify information that customers would need to comply with such obligations, and design their AI systems to make it easy for customers to extract such information, which could be used for instance to develop notifications or written policies addressed to end users.

5. Final comments

The release of the proposed advisory guidelines follows various other initiatives to develop AI regulation in Singapore, such as the Model AI Governance Framework and AI Verify, which help organisations assess their AI practices against key AI ethics and governance principles. The proposed advisory guidelines will help contextualise the PDPA within the nascent but fast-growing AI sector, and supplement existing publications and guidelines so that organisations can align their use of AI systems with established best practices.

A copy of the consultation paper may be obtained from the PDPC's website.

Footnotes

1. The Business Improvement exception is set out under Part 5 of the First Schedule and Division 2 of Part 2 of the Second Schedule to the PDPA and enables organisations to use personal data collected in accordance with the PDPA, where the use of the personal data falls within the scope of any business improvement purposes.

2. The Research exception is set out under Division 3 of Part 2 of the Second Schedule to the PDPA and enables organisations to use personal data for a research purpose, subject to certain conditions.

3. The Protection Obligation can be found under section 24 of the PDPA, while the Accountability Obligation can be found under Part 3 of the PDPA.

4. The Consent Obligation can be found under section 13 of the PDPA, while the Notification Obligation can be found under section 20 of the PDPA.

Originally published August 04, 2023

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.