Update on Mobley v. Workday AI-Related Employment Litigation
On July 12, 2024, the United States District Court for the Northern District of California issued a ruling in the closely watched Mobley v. Workday putative class action, which alleges that Workday, a human capital management platform, is directly liable for unlawful employment discrimination caused by employers’ use of Workday’s AI-powered hiring tools. The key issue before the Court was whether Workday could be directly liable under Title VII of the Civil Rights Act of 1964 and other federal civil-rights laws. While the Court dismissed the claim that Workday acted as an “employment agency,” it allowed the claim that Workday acted as an “agent” of employers to proceed to discovery. The ruling has significant implications for both AI vendors and employers using AI-powered hiring tools, potentially expanding the scope of liability under federal anti-discrimination laws.
In Mobley v. Workday, briefly discussed in our July 15, 2024, blog post, the plaintiff alleges that Workday’s AI-powered applicant-screening tools discriminate on the basis of race, age, and disability in violation of federal and state anti-discrimination laws. The putative class action has drawn significant attention because of its potential to set precedent for AI vendor liability in the hiring process. The Court initially granted Workday’s motion to dismiss the original complaint, with leave to amend. After the plaintiff filed the First Amended Complaint, Workday again moved to dismiss. At that stage, the EEOC filed an amicus brief supporting the plaintiff’s novel theories of direct AI vendor liability and urging the Court to deny the motion.
Plaintiff’s “Employment Agency” Claims Are Rejected
The Court’s decision last week rejected the theory that Workday, the AI vendor, was an “employment agency” under federal law, finding that Workday’s alleged activities did not meet the statutory definition of “procuring” employees for employers. The Court analyzed the First Amended Complaint, found no support for the conclusory allegations that Workday was the entity recruiting or soliciting candidates, and accordingly dismissed the claim that Workday was acting as an “employment agency.”
Plaintiff’s and the EEOC’s “Agent” Theory of Liability Move Forward
While the Court’s rejection of the “employment agency” theory represents a partial rejection of the liability theories advanced by the plaintiff and the EEOC, its decision to let the “agent” theory move forward means AI vendors now face a real risk of direct liability for employment discrimination claims. In declining to dismiss the “agent” theory, the Court emphasized that the First Amended Complaint “plausibly alleges that Workday’s customers delegated their traditional function of rejecting candidates or advancing them to the interview stage to Workday.” Workday argued that it merely provided a tool implementing the employers’ criteria, but the Court rejected that characterization, holding that the First Amended Complaint sufficiently alleged that “Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others.”
The Court also highlighted the allegation that Mobley received rejection emails not just outside of business hours, but almost immediately after submitting his applications. The opinion notes one allegation that “Mobley received a rejection at 1:50 a.m., less than one hour after he had submitted his application.” While the Court accepted the inference that such rapid rejection might be evidence of automation in the decision-making process, it remains to be tested in discovery whether, alternatively, the rejection was simply consistent with “implementing in a rote way” an employer’s straightforward “knockout” criteria or minimum qualifications.
In considering whether Workday was simply applying rote criteria, the Court drew a distinction between Workday’s alleged role and that of a simple spreadsheet or email tool, suggesting that the degree of automation and decision-making authority was relevant to the analysis. While the opinion accepts that spreadsheet programs and email systems do not qualify as “agents” because they have not been “delegated responsibility,” the Court distinguished those simple tools from Workday, noting that “[b]y contrast, Workday does qualify as an agent because its tools are alleged to perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages, through the use of artificial intelligence and machine learning.”
The Court’s opinion emphasized the importance of the “agency” theory in addressing potential enforcement gaps in federal and state anti-discrimination laws. The Court illustrated those gaps with a hypothetical: a software vendor intentionally creates a tool that automatically screens out applicants from historically Black colleges and universities, unbeknownst to the employers using the software. Without the agency theory, the Court opined, no party could be held liable for this intentional discrimination. By construing federal anti-discrimination laws broadly and adapting traditional agency concepts to the evolving relationship between AI service providers and employers, the Court sought, in part, to close potential loopholes in liability.
By allowing the plaintiff’s agency theory to proceed, as supported by the EEOC in its amicus brief, the ruling opens the door for a significant expansion of liability for AI vendors in the hiring process, with potential far-reaching implications for both AI service providers and for employers using those tools.
What’s Next?
In recent years, online platforms have eased and streamlined the job application process, and the number of applications employers receive has dramatically increased as a result, driving demand for technological solutions to help sort, rank, and filter applicants. AI tools are increasingly used to meet that demand, but, as with any new technology, they can give rise to novel claims. As one of the first large-scale tests of such tools in the courts, the Mobley case will undoubtedly continue to attract considerable attention from employers and practitioners alike. Regardless of the outcome, Mobley illustrates the legal risk associated with deploying AI tools in hiring, and the need for employers to be thoughtful as they implement them.
In light of the decision and the EEOC’s support of the plaintiff’s theory of liability, employers using AI-powered hiring tools should review their processes to ensure they can clearly articulate the role these tools play in their hiring decisions. They should also be prepared to demonstrate that their use of these tools does not result in a disparate impact on protected groups. Holon’s attorneys will continue to monitor this case and other developments as lawmakers, regulators, and courts grapple with the issues created by the use of AI in employment decisions.
