Published in The Journal of Robotics, Artificial Intelligence & Law, March-April 2019

(This is part 2 of a two-part series. To read part 1, click here.)

In my last column, I explored the first two components that I recommend clients address in their public-facing AI policies: disclosure of AI that interacts with customers and disclosure of the decisions AI makes. In light of California's bot disclosure law (the "California Bot Bill"), which became law last fall, I advised businesses that rely on AI-based customer service to include a statement in their AI policies disclosing the existence of any chatbots and explaining the requirements of the new law.

In light of the EU's General Data Protection Regulation (the "GDPR"), I advised that organizations conduct a two-part self-analysis of their business practices: (1) determine whether they rely on AI for profiling or automated decision-making, as the GDPR defines those terms; and (2) isolate those decisions and classify them as either (a) decisions that produce legal effects concerning data subjects or similarly significantly affect them, or (b) decisions that neither produce legal effects concerning data subjects nor similarly significantly affect them. I then recommended drafting your AI policy to reflect how your answers demonstrate compliance with Article 22(1) of the GDPR, which states that each "data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

In this second part, I review the final two components that should be standard considerations when preparing an AI policy: disclosure of the types of data the AI relies on and disclosure of how the AI reaches its decisions.

To read the full article, please click here.
