The rapid adoption of artificial intelligence (AI) in banking and financial services has prompted increased regulatory scrutiny and new legal challenges. This paper examines recent developments and offers guidance for legal risk managers navigating this evolving landscape.
Recent Legal and Regulatory Developments
1. Executive Order on AI. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) issued in October 2023 recognized that "[r]esponsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure," while also recognizing that "irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security." Executive Order, Sec. 1. Relative to financial services, the Executive Order centered on consumer protection, data bias, financial models, and AI governance issues:
- Consumer Protection. The Executive Order provided that the "Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI." Executive Order, Sec. 2(e). The Executive Order recognized that "[s]uch protections are especially important in critical fields like . . . financial services . . . where mistakes by or misuse of AI could . . . cost consumers or small businesses, or jeopardize safety or rights." Id.
- Cybersecurity. The Executive Order provides for certain action to manage AI in critical infrastructure and in cybersecurity: "The Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks." Executive Order, Sec. 4.3(a)(ii).
- Bias and Discrimination. The Executive Order provides for certain action to strengthen AI and civil rights in the broader economy.
- The Executive Order encourages the director of the Federal Housing Finance Agency and the director of the Consumer Financial Protection Bureau "to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and: (i) evaluate their underwriting models for bias or disparities affecting protected groups; and (ii) evaluate automated collateral-valuation and appraisal processes in ways that minimize bias." Executive Order, Sec. 7.3(b).
- The Executive Order also takes measures "to combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing and in other real estate-related transactions." Executive Order, Sec. 7.3(c). It provides that "the Secretary of Housing and Urban Development shall, and the Director of the Consumer Financial Protection Bureau is encouraged to, issue additional guidance: (i) addressing the use of tenant screening systems in ways that may violate the Fair Housing Act (Public Law 90-284), the Fair Credit Reporting Act (Public Law 91-508), or other relevant Federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes in violation of Federal law; and (ii) addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010 (title X of Public Law 111-203), and the Equal Credit Opportunity Act (Public Law 93-495) apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery, as well as on best practices to avoid violations of Federal law." Executive Order, Sec. 7.3(c).
- Financial Stability. The Executive Order also encourages independent regulatory agencies "to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities' ability to explain their use of AI models." Executive Order, Sec. 8.
2. US Federal Regulatory Guidance. While the United States lacks comprehensive AI legislation, federal regulators have issued guidance:
- Joint Agency Statements on AI Bias. Several federal agencies issued a joint statement on enforcement efforts against discrimination and bias in automated systems. The statement by the Federal Trade Commission (FTC), Department of Justice (DOJ), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) confirms that "[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices" and that these agencies will enforce "civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections" in the AI context. The statement focuses on concerns surrounding datasets, model opacity, and model design and provides examples to confirm the agencies' intent to exercise their enforcement authority. The CFPB, responsible for enforcing federal consumer financial laws and protecting consumers in the financial marketplace from unfair, discriminatory, and deceptive acts, published a circular confirming that "federal consumer financial laws and adverse action requirements apply" regardless of the technology being used. The circular also made clear that the fact that the technology used to make a credit decision is complex, opaque, or new is "not a defense for violating these laws." The CFPB has signaled its interest in automated decision-making and valuation models, among other risks, and has issued guidance on credit denials using AI, chatbots in banking, and AI-based home appraisals. The DOJ's Civil Rights Division enforces federal laws prohibiting discrimination across education, the criminal justice system, employment, housing, lending, and voting, among other areas. The division filed a statement of interest in federal court explaining that the Fair Housing Act applies to AI and algorithm-based tenant screening services. The FTC enforces laws and regulations to protect consumers from deceptive or unfair business practices and methods of competition. The FTC issued a report evaluating the use and impact of AI in combating online harms that addresses concerns about AI inaccuracy, bias, and discriminatory design. The FTC also warned market participants that it may violate the FTC Act to use AI that has discriminatory impacts, to make misleading or unsubstantiated claims about AI, or to deploy AI without risk mitigation. The FTC has required companies to destroy algorithms and other products that were trained on data that should not have been collected.
- Securities and Exchange Commission (SEC) Enforcement. The SEC has focused on addressing investor risk from conflicts of interest that may arise through broker-dealer and investment adviser use of AI predictive data analytics to interact with investors. SEC Chair Gary Gensler has expressly warned investment advisers and broker-dealers against "AI washing." The SEC has also announced charges against investment advisers for making false and misleading statements about their use of AI.
- Treasury Report on AI Cybersecurity Risk in Financial Services. The US Department of the Treasury issued a report titled Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, which provided: "Regulators have emphasized that it is important that financial institutions and critical infrastructure organizations manage the use of AI in a safe, sound, and fair manner, in accordance with applicable laws and regulations, including those related to consumer and investor protection. Controls and oversight over the use of AI should be commensurate with the risk of the business processes supported by AI. Regulators have noted that it is important for financial institutions to identify, measure, monitor, and manage risks arising from the use of AI, as they would for the use of any other technology. Advances in technology do not render existing risk management and compliance requirements or expectations inapplicable. Various existing laws, regulations, and supervisory guidance are applicable to financial institutions' use of AI. Although existing laws, regulations, and supervisory guidance may not expressly address AI, the principles contained therein can help promote safe, sound, and fair implementation of AI."
- FTC Enforcement. The FTC has intensified its commitment to enforcement actions in the AI space by creating the FTC Office of Technology and broadly authorizing compulsory process for investigations of AI-related products and services. The commission, through an omnibus resolution, streamlined its ability to issue civil investigative demands relating to AI. The FTC has also taken aim at generative AI and unfair competition, generative AI and deception, the use of AI facial recognition, and the use of biometric information technologies.
3. Legislation. On May 15, 2024, a bipartisan US Senate AI working group released its Roadmap for Artificial Intelligence Policy in the United States Senate, which recommends an approach for Senate committees to address sector-specific AI policy. The US Senate majority leader's SAFE Innovation Framework proposal offers a comprehensive AI policy framework with four primary guardrails: identifying algorithm trainers and intended audiences, disclosing data sources, explaining the response generation methodology, and establishing ethical boundaries. Other bills aim to set standards for foundation models, address Communications Decency Act Section 230 immunity related to generative AI, and require transparency for AI-generated content.
4. International Standards and the EU AI Act. The European Union's (EU) AI Act establishes a comprehensive regulatory framework for AI systems. Financial institutions operating in the EU should prepare for risk-based categorization of AI systems, strict requirements for high-risk AI applications, and transparency and human oversight mandates. Even financial institutions not operating in the EU can look to the EU AI Act for guidance that may inform US regulatory and legal frameworks. The development of more comprehensive international AI standards is ongoing and includes NIST's AI Risk Management Framework 1.0 (AI RMF), a foundational resource for organizations managing AI-related risks and compliance. The AI RMF provides a structure for identifying, assessing, and mitigating AI risks throughout an AI system's life cycle, together with key principles, methodologies, and practices for developing effective AI governance strategies. The AI RMF Generative AI Profile (NIST AI 600-1) can also aid organizations in identifying and managing risks posed by generative AI.
Key Legal Risks
1. Bias and Discrimination. AI systems may perpetuate or amplify biases, leading to potential violations of antidiscrimination laws.
2. Privacy and Data Protection. AI's data-intensive nature raises concerns about compliance with data protection regulations.
3. Transparency and Explainability. The "black box" nature of some AI algorithms may conflict with regulatory requirements for transparent decision-making.
4. Liability and Accountability. Determining responsibility for AI-driven decisions poses challenges in areas like lending, trading, and risk management.
5. Intellectual Property (IP). AI-generated content and inventions raise complex IP questions.
Practical Tips for Legal Risk Managers
1. Implement a Robust AI Governance Framework.
- Establish clear policies and procedures for AI development, deployment, and monitoring.
- Define roles and responsibilities for AI oversight.
- Conduct regular AI risk assessments (a simple risk-tiering sketch follows this list).
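The intake step of such a framework can be as simple as a consistent risk-tiering rule applied to every proposed AI use case. The Python sketch below is a minimal illustration; the factors, weights, and tier thresholds are assumptions an institution would define for itself, not a regulatory standard.

```python
# Minimal sketch of a risk-tiering rule for an AI use-case intake process.
# The factors and thresholds are illustrative assumptions, not a standard.
def risk_tier(consumer_facing: bool, makes_credit_decisions: bool,
              uses_personal_data: bool) -> str:
    # Weight credit decisioning most heavily, reflecting fair lending exposure.
    score = 2 * makes_credit_decisions + consumer_facing + uses_personal_data
    if score >= 3:
        return "high"    # e.g., automated underwriting or adverse action
    return "medium" if score >= 1 else "low"

# Example: a consumer-facing underwriting model lands in the high tier.
print(risk_tier(consumer_facing=True, makes_credit_decisions=True,
                uses_personal_data=True))  # -> high
```

Higher tiers would then trigger deeper review, documentation, and monitoring obligations under the governance policies described above.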
2. Enhance Model Risk Management.
- Extend existing model risk management practices to AI systems.
- Implement rigorous testing and validation processes for AI models.
- Maintain comprehensive documentation of AI model development and performance (see the inventory-record sketch after this list).
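One way to extend model risk management to AI systems is a structured model inventory with scheduled revalidation. The record below is a hedged sketch: the field names and the 365-day revalidation interval are hypothetical choices, not drawn from any regulatory schema.

```python
# Illustrative model-inventory record; field names and the revalidation
# window are assumptions, not a regulatory schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    model_id: str
    owner: str                   # accountable business owner
    use_case: str                # e.g., "consumer credit underwriting"
    risk_tier: str               # institution-defined: high / medium / low
    training_data_summary: str   # provenance and known limitations
    last_validated: date
    validation_findings: list[str] = field(default_factory=list)

    def validation_overdue(self, today: date, max_age_days: int = 365) -> bool:
        # Flag models whose independent validation has lapsed.
        return (today - self.last_validated).days > max_age_days

record = ModelRecord(
    model_id="uw-2024-07",
    owner="Consumer Lending",
    use_case="consumer credit underwriting",
    risk_tier="high",
    training_data_summary="2019-2023 application data; excludes prohibited bases",
    last_validated=date(2024, 1, 15),
)
print(record.validation_overdue(today=date(2025, 6, 1)))  # -> True
```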
3. Prioritize Fairness and Bias Mitigation.
- Conduct thorough bias testing of AI systems, particularly in high-risk areas like lending and employment (an adverse-impact screening sketch follows this list).
- Implement ongoing monitoring for potential discriminatory outcomes.
- Develop remediation plans for addressing identified biases.
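As a concrete starting point for the bias testing described above, many practitioners screen decision outcomes with an adverse-impact ratio such as the four-fifths rule. The sketch below is illustrative only: the groups and data are invented, and the 0.8 threshold is a screening heuristic, not a legal standard for fair lending compliance.

```python
# Hedged sketch: adverse-impact screening of approval decisions by group.
# Groups, data, and the 0.8 threshold are illustrative, not a legal test.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def adverse_impact_ratios(decisions, reference_group):
    """Each group's approval rate relative to the reference group's."""
    rates = approval_rates(decisions)
    return {g: r / rates[reference_group] for g, r in rates.items()}

# Invented sample: group B is approved markedly less often than group A.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
for group, ratio in adverse_impact_ratios(sample, "A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"group {group}: ratio {ratio:.2f} ({flag})")
```

A ratio below the heuristic threshold would feed the remediation plans noted above rather than settle any legal question on its own.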
4. Ensure Transparency and Explainability.
- Invest in explainable AI techniques and tools.
- Develop clear communication protocols for AI-driven decisions (a reason-code sketch follows this list).
- Maintain human oversight and intervention capabilities for critical AI systems.
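One common technique for communicating AI-driven credit decisions is ranking the features that pushed an applicant's score down and reporting the top ones as reason codes. The sketch below uses an invented linear model: the feature names, weights, and baselines are hypothetical, and real adverse action notices require validated, legally reviewed logic.

```python
# Hedged sketch: reason codes from a linear scoring model. All feature
# names, weights, and baseline values here are invented for illustration.
WEIGHTS = {"utilization": -2.0, "delinquencies": -1.5, "tenure_years": 0.5}
BASELINE = {"utilization": 0.3, "delinquencies": 0.0, "tenure_years": 5.0}

def reason_codes(applicant, top_n=2):
    # Contribution of each feature relative to a baseline applicant.
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINE[name])
        for name in WEIGHTS
    }
    # Rank the features that pulled the score down, most harmful first.
    negative = sorted((kv for kv in contributions.items() if kv[1] < 0),
                      key=lambda kv: kv[1])
    return [name for name, _ in negative[:top_n]]

applicant = {"utilization": 0.9, "delinquencies": 2.0, "tenure_years": 1.0}
print(reason_codes(applicant))  # -> ['delinquencies', 'tenure_years']
```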
5. Strengthen Data Governance.
- Implement robust data quality and integrity measures.
- Ensure compliance with data protection regulations in AI training and deployment.
- Establish clear data retention and deletion policies for AI systems (a retention-sweep sketch follows this list).
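Retention and deletion policies are easier to enforce when each AI dataset carries machine-readable metadata that a periodic sweep can check. The sketch below is a minimal illustration; the dataset names, categories, and retention periods are placeholders an institution would set under its own policy and applicable law.

```python
# Hedged sketch: retention sweep over AI dataset metadata. Names,
# categories, and retention periods are illustrative placeholders.
from datetime import date, timedelta

RETENTION = {
    "training_data": timedelta(days=5 * 365),
    "inference_logs": timedelta(days=2 * 365),
}

datasets = [
    {"name": "apps_2017", "category": "training_data",
     "collected": date(2017, 6, 1)},
    {"name": "chat_logs_q1", "category": "inference_logs",
     "collected": date(2024, 2, 1)},
]

def overdue_for_deletion(items, today):
    # Items held past the retention period for their category.
    return [d for d in items
            if today - d["collected"] > RETENTION[d["category"]]]

for d in overdue_for_deletion(datasets, date(2025, 1, 1)):
    print(f"schedule deletion review: {d['name']}")  # -> apps_2017
```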
6. Stay Informed and Engaged.
- Monitor evolving AI regulations and industry standards.
- Participate in industry working groups and regulatory consultations.
- Invest in AI literacy training for legal and compliance teams.
7. Collaborate Across Functions.
- Foster close collaboration between legal, compliance, IT, and business units.
- Ensure legal and ethical considerations are integrated into AI development processes.
- Conduct cross-functional AI risk assessments.
Conclusion
As AI continues to transform financial services, legal risk managers must proactively address the emerging challenges. By staying informed of regulatory developments, implementing robust governance frameworks, and fostering a culture of responsible AI use, financial institutions can harness the benefits of AI while mitigating legal and reputational risks.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.