Federal Court Decision Identifies Employer Liability For Use Of Automated Employment Decision-Making Tools

Carter Ledyard & Milburn

We have previously written on New York City's law requiring audits for employers and employment agencies (together, "Employers") who use automated employment decision-making tools ("AEDTs") to assist in making employment decisions. A recent ruling by a federal court in California, Derek Mobley v. Workday, Inc. (Case No. 23-cv-00770-RFL), holds that Employers may face liability under federal anti-discrimination law for their use of AEDTs. If other courts follow suit, this development could affect Employers across the country.

Derek Mobley v. Workday, Inc. Case: A Turning Point?

Mobley alleged that he is African American, over 40 years of age, and disabled, and that he unsuccessfully applied to over 100 positions using Workday's online platform. Workday (NASDAQ: WDAY) provides cloud-based financial management and human capital management software. Mobley alleged that Workday's AEDTs, which implement artificial intelligence and machine learning, discriminated against him based on race, age, and disability. For example, Mobley alleged that Workday's algorithmic decision-making tools "rely on biased training data and information obtained from pymetrics and personality tests, on which applicants with mental health and cognitive disorders perform more poorly." (Order at 13).

The court rejected the argument that automated tools should be treated differently from human decision-makers in this context, stating: "Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one." (Order at 10). Applying this principle to Mobley's allegations, the court explained: "Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being." (Order at 10).

The court distinguished disparate treatment from disparate impact discrimination for purposes of Mobley's claims. The court dismissed Mobley's disparate treatment (intentional discrimination) theory, noting that Mobley's operative complaint "lacks allegations supporting that Workday intended this outcome" of discrimination (Order at 18). Disparate impact discrimination, however, does not require intent. Rather, the court explained, Mobley had to "(1) show a significant disparate impact on a protected class or group; (2) identify the specific employment practice or selection criteria at issue; and (3) show a causal relationship between the challenged practices or criteria and the disparate impact." (Order at 13). Finding that Mobley had identified Workday's AEDTs as the specific employment practice at issue and plausibly alleged their disparate effect on protected groups, the court concluded that he had successfully pleaded disparate impact discrimination.

Implications for Employers and Vendors

This ruling has significant implications for both vendors and Employers using AEDTs. The court applied settled principles of disparate impact discrimination to AEDTs without citing any prior decisions that had done so in the AEDT context. Mobley may therefore signal increased scrutiny of, and potential liability for, Employers and vendors using AEDTs under traditional disparate impact analysis.

In addition, new theories of liability may emerge. For example, as of 2024, several U.S. states have enacted or proposed legislation to regulate various aspects of AI, including its use in employment decisions. California's proposed Artificial Intelligence Accountability Act (SB 896) would require state agencies to report on the benefits and risks of generative AI. Connecticut's privacy law gives consumers the right to opt out of profiling related to automated decision-making. And New York City's law, discussed above, already requires audits of AEDTs. These state- and local-level efforts may reflect a growing trend toward AI governance in this context.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
