CFPB Warns Of Risks Related To AI Chatbots In Banking

Sheppard Mullin Richter & Hampton


Sheppard Mullin is a full service Global 100 firm with over 1,000 attorneys in 16 offices located in the United States, Europe and Asia. Since 1927, companies have turned to Sheppard Mullin to handle corporate and technology matters, high stakes litigation and complex financial transactions. In the US, the firm’s clients include more than half of the Fortune 100.


On June 6, the CFPB released a new report related to the adoption of chatbots by financial institutions, including those with advanced technology such as generative chatbots and others marketed as "artificial intelligence." "In 2022, over 98 million users (approximately 37% of the U.S. population) engaged with a bank's chatbot. This number is projected to grow to 110.9 million users by 2026." According to the CFPB, "financial institutions have begun experimenting with generative machine learning and other underlying technologies such as neural networks and natural language processing to automatically create chat responses using text and voices." Chatbots are intended, in part, to help institutions reduce the costs of customer service agents.

The CFPB cautions that while chatbots may be useful for answering basic questions, their effectiveness is more limited as the questions become more complex. The report also warns that financial institutions may risk violating federal consumer protection law when deploying chatbot technology. In particular, the CFPB's analysis suggests that:

  • Noncompliance with federal consumer financial protection laws. Financial institutions run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect their privacy and data.
  • Diminished customer service and trust. When consumers require assistance from their financial institution, the circumstances could be dire and urgent. Instead of finding help, consumers can face repetitive loops of unhelpful jargon. Consumers also can struggle to get the response they need, including an inability to access a human customer service representative. Overall, their chatbot interactions can diminish their confidence and trust in their financial institutions.
  • Harm to consumers. When chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm. Inaccurate information could lead consumers to select the wrong product or service for their needs, or result in the assessment of fees or other penalties if consumers rely on it when making payments.

According to the CFPB, as the use of chatbots has increased, so have consumer complaints, including complaints concerning difficulties related to: (i) obtaining dispute resolution; (ii) obtaining accurate or sufficient information; (iii) obtaining meaningful customer service; (iv) obtaining intervention from human customer service representatives; and (v) keeping personal information safe. The CFPB warned that financial institutions should avoid using chatbots as their primary customer service delivery channel when it is reasonably clear that the chatbot is unable to meet customer needs.

Putting It Into Practice: This report should serve as a reminder that financial institutions and other market participants using chatbots and other generative AI tools need to develop policies to ensure their use complies with federal consumer protection and other laws. While only time will tell whether the CFPB will elevate what appear to be customer service issues into alleged violations of law, the CFPB pronounced last April, in its policy statement on abusive acts or practices, that the prohibition on abusive conduct "would cover abusive uses of AI technologies to, for instance, obscure important features of a product or service or leverage gaps in consumer understanding" (see our prior blog post on this policy statement here). The use of AI, and generative AI in particular, implicates a number of legal issues. Companies leveraging these technologies need to adopt, and continue to update, policies on their use. If you have questions about why you need these policies and what they should include, confer with an attorney who focuses on these issues.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
