A recent decision in a class action lawsuit against Workday, a leading provider of cloud-based human resource management software, highlights the critical need for employers to carefully evaluate and monitor their use of artificial intelligence (AI) in recruitment. This article explores the pending Mobley v. Workday class action, its legal implications for both software vendors and employers, and considerations in navigating the complex intersection of AI, hiring practices, and anti-discrimination laws.
New Scrutiny of an Old Problem
According to a 2023 IBM survey, 42% of companies use AI screening to "improve recruiting and human resources," with another 40% considering its implementation. AI-assisted hiring software offers the potential for reward (efficiency in business operations) but carries with it the risk of legal exposure. Although AI tools promise objectivity in the hiring process, they may inadvertently perpetuate or exacerbate existing biases present in the data on which they are trained. Further, some experts believe that AI hiring tools may reject some of the most qualified job applicants.
Historical cases and studies on the use of machine learning algorithms in the employment context dating back to the 1980s reveal that AI systems may codify the human biases inherent in their training data—a challenge that persists for developers and users of AI technologies today. Recent studies suggest that large language models like OpenAI’s ChatGPT may not exhibit direct racial or gender bias, but further investigation is needed.
The impact of even subtle bias in AI recruitment tools could be significant. Hilke Schellmann, author of "The Algorithm: How AI Can Hijack Your Career and Steal Your Future," points out that "one biased human hiring manager can harm a lot of people in a year, and that's not great. But an algorithm that is used in all incoming applications at a large company… that could harm hundreds of thousands of applicants." Given the risk, legislative bodies in the US and beyond are weighing in. Both the EU AI Act and the Colorado AI Act characterize the use of AI for these employment purposes as "high-risk," requiring companies using these systems to implement risk management policies, conduct annual bias audits, and provide related notices and disclosures. New York City has implemented a similar rule at the municipal level.
Judicial Scrutiny of AI Recruitment
AI-assisted resume screening and recruitment is now facing judicial scrutiny. A class action lawsuit filed in 2023 against Workday accuses the company of discriminating against job applicants in violation of both federal and state law. The plaintiff, Derek Mobley, claimed that Workday’s applicant screening tools (which include assessments and personality tests) discriminate (or aid in employers’ discrimination) against applicants based on their race, age, and disability. The trial court recently rejected Workday’s challenge to the suit, effectively green-lighting several of the claims to go forward. The Workday lawsuit is a wake-up call, underscoring the potential legal risks not only to software developers but also to employers who rely on developers’ AI-powered recruitment tools.
The Workday Case: Allegations and Recent Developments
The Workday class action lawsuit was filed in the Northern District of California (case No. 3:23-cv-00770) in early 2023. Plaintiff Mobley alleges that he applied, via the Workday platform, for 100+ positions for which he was qualified based on his education and work experience but was rejected from every job. He claims Workday’s AI-powered hiring tools systematically discriminated against him based on his race, age, and disability (depression/anxiety). Mobley asserts that Workday qualifies as an “employment agency” or “employer” under civil rights laws and therefore can be held liable for discriminatory outcomes resulting from its AI tools. Workday moved to dismiss the complaint, arguing that the company is not covered by laws prohibiting employment discrimination.
On May 2, 2024, Judge Rita Lin permitted the Equal Employment Opportunity Commission (EEOC) to file an amicus brief on Workday’s motion to dismiss. The EEOC contended that Workday could plausibly be considered a covered entity under federal anti-discrimination laws as (1) an employment agency, due to its role in screening and referring job applicants; (2) an indirect employer, because of its control over applicants’ access to employment opportunities; and (3) an agent of employers, which delegated hiring functions to Workday. The EEOC urged that these longstanding legal theories should apply to new technologies like AI-powered hiring platforms. Although it refrained from taking a position on the accuracy of Mobley’s factual allegations, the EEOC asked the court to deny Workday’s motion to dismiss on these grounds, arguing that Mobley had sufficiently pleaded Workday’s potential liability under federal anti-discrimination laws (Title VII, the ADA, and the ADEA).
On July 12, 2024, the court issued a mixed ruling on Workday’s motion to dismiss. It allowed Mobley’s disparate impact claims under Title VII, the ADEA, and the ADA to proceed, finding that Mobley adequately alleged Workday could be liable as an “employer” under an agency theory based on its screening of job applicants. However, the court dismissed Mobley’s claims alleging intentional discrimination and those alleging Workday was an “employment agency,” without leave to amend. The court also dismissed Mobley’s state law claim but granted him leave to amend that claim (according to a later-filed stipulation of the parties, Mobley will not amend his complaint). This ruling is a significant development in the evolving legal landscape surrounding AI-powered hiring tools, potentially opening the floodgates for lawsuits based on the use of algorithmic decision-making services.
The Workday case has gained significant attention, not just for its allegations but also for the legal precedent it might set. The court's decision to allow certain claims to proceed, coupled with the EEOC's supportive stance, could inform how liability is assigned in AI-assisted hiring processes. AI vendors and employers may face heightened scrutiny regarding the use of these tools, as this age-old issue receives new attention. Software developers and employers will no doubt monitor the case closely. The outcome may have far-reaching consequences, potentially requiring AI vendors to bear more responsibility for ensuring their tools do not lead to discriminatory outcomes, while also placing greater onus on employers to monitor and test the AI tools they use in hiring processes.
Implications for Employers and AI Vendors
The Workday case is a cautionary tale for employers utilizing AI in their hiring processes and for the vendors providing these tools. It underscores the need for careful evaluation and ongoing monitoring of these tools to ensure they do not exhibit biases or lead to discriminatory outcomes.
Employers and AI vendors may wish to consider, at a minimum:
- Conducting audits of AI hiring tools to identify potential biases, including disparate impact on protected groups.
- Implementing human oversight and intervention in the hiring process to catch and correct AI-driven biases.
- Monitoring evolving legal standards and regulations regarding AI in hiring, including potential liability under agency theories.
- For AI vendors, reflecting on their level of involvement in hiring decisions and the potential for them to be considered an “employer” under anti-discrimination laws.
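To make the audit recommendation above concrete: one rough screen many practitioners start with is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group below 80% of the highest group's rate is a common indicator of potential disparate impact. The sketch below, with hypothetical group labels and numbers, shows the basic arithmetic; it is an illustration only, not a substitute for a formal bias audit or legal advice.

```python
# Minimal sketch of a four-fifths-rule disparate-impact screen.
# Group names and counts are hypothetical illustrations.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    (80% by default) of the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (advanced, total) per group
outcomes = {
    "group_a": (60, 100),  # 60% selection rate (highest)
    "group_b": (45, 100),  # 45% rate -> 0.75 of top, flagged
}
print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A screen like this only surfaces statistical disparities; interpreting a flag, choosing the right comparison groups, and deciding what remediation is required are questions for counsel and for the more rigorous audits that laws like NYC's municipal rule contemplate.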
Proactively addressing potential biases in AI hiring tools may help employers mitigate risks and ensure they are not excluding top talent due to flawed algorithms. As employers navigate this new frontier of AI-enhanced recruitment technology, maintaining a balance between innovation and legal and ethical considerations will be key to building diverse, talented workforces in the AI era.
Munck Wilson Mandala’s Artificial Intelligence and Machine Learning practice group, along with the firm’s Employment & Labor team, will closely monitor the evolving legal landscape of artificial intelligence and machine learning and its impacts on employers.