
The Invisible Gatekeeper: AI Hiring Bias Is Reaching Its Legal Breaking Point

8 min read
Emily Chen, AI Ethics Specialist & Future of Work Analyst

Over 1.1 billion job applications have passed through Workday’s AI screening system. According to court filings, the lead plaintiff in an active federal lawsuit was rejected by it more than 100 times—often within minutes of submitting, frequently in the middle of the night, with no human ever appearing to have reviewed his materials. If that number lands with a thud, it should. We have quietly outsourced one of the most consequential decisions in a person’s life—whether they get a shot at a job—to an algorithm, and the legal system is finally catching up.

This week, three simultaneous developments are making it impossible to look away.

[Image: a justice scale tipping despite apparent symmetry. Caption: AI in hiring: the balance is not what it appears.]

The Workday Lawsuit: A Ticking Clock

Mobley v. Workday, Inc. (Case No. 3:23-cv-00770-RFL, Northern District of California) is the most consequential AI hiring case to date, and it has an opt-in deadline of March 7, 2026, two days from now. That deadline matters because this is a collective action, not a standard class action: you are only part of the case if you proactively sign a consent form. Anyone aged 40 or older who applied for a job through a company using Workday’s platform on or after September 24, 2020, and who believes they were denied a fair review, has until this Saturday to visit workdaycase.com to join.

The plaintiff, Derek Mobley, a Black professional in his forties with a finance degree from Morehouse College and substantial professional experience, alleges that Workday’s AI recommendation system, which scores and ranks applicants for employers, has a disparate impact on candidates based on age, race, and disability. In May 2025, the court granted preliminary collective action certification. Judge Rita Lin ruled that Workday’s role in the hiring process is no less significant because it operates through AI rather than a human sitting in an office reviewing résumés, and that Workday could be held liable as an agent of the employers using its platform (HiredAI Editorial Team, February 17, 2026).

This “agent liability” theory is the legal earthquake hiding inside this case. Employers have long assumed that buying a certified third-party tool insulated them from liability for that tool’s discriminatory outputs. The court has now made clear: it does not.

Eightfold AI: When Candidate Scoring Meets Consumer Protection

A second lawsuit filed in January 2026 targets Eightfold AI, a recruiting platform used by Microsoft and PayPal. This case takes a different legal route: rather than discrimination claims, plaintiffs argue that Eightfold’s AI candidate scoring violates the Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act.

The core allegation is that Eightfold’s system collects data from social media profiles, location information, and internet activity that candidates never volunteered in their applications—then uses it to generate a zero-to-five score ranking a candidate’s “fit” before any human reviews them. Candidates have no opportunity to see or dispute this AI-generated report. The argument is compelling: if a credit bureau did this, it would be a clear FCRA violation. Why should an AI hiring vendor be any different?

If this legal theory succeeds, the implications extend far beyond Eightfold. Virtually every modern applicant tracking system that generates a candidate score or ranking based on aggregated data could suddenly face obligations around transparency, accuracy, and dispute resolution that mirror the consumer protection regime built around credit scoring.

The ACLU Complaint Against HireVue: Disability and Dialect

A third case, filed in March 2025 by the ACLU of Colorado, targets HireVue’s AI video interview platform. An Indigenous, deaf woman who had worked successfully at Intuit for years applied for an internal promotion. Her request for CART captioning during the AI video interview was denied. She was subsequently rejected, with feedback recommending she practice “active listening.” The complaint alleges violations of Title VII, the Americans with Disabilities Act, and Colorado’s Anti-Discrimination Act (HiredAI Editorial Team, February 17, 2026).

The case raises a question that algorithmic fairness researchers have flagged for years: do AI interview tools perform equitably across candidates with disabilities, those who use assistive communication, or those who speak dialects—including Native American English—that diverge from the linguistic patterns the model was trained on? The answer in this instance was apparently no, and a longtime employee with positive performance reviews paid for it.

The EU AI Act’s August 2026 Deadline Is Now in Sight

While these American lawsuits accelerate, a regulatory wall is approaching from Europe. Under the EU AI Act, August 2, 2026, is the full compliance date for high-risk AI systems, and hiring AI sits squarely in that category. Tools that shortlist CVs, rank candidates, score interviews, or make employment recommendations are classified as high-risk, meaning providers and the employers who deploy them must have completed a suite of obligations by that date, including:

  • CE marking and registration in the EU database before deployment
  • Human oversight structures with documented review processes for AI-assisted decisions
  • Candidate and worker notification when AI meaningfully shapes hiring or evaluation outcomes
  • Data Protection Impact Assessments and ongoing bias monitoring
  • Post-market monitoring with incident reporting obligations

Fines for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher, for banned practices (such as the emotion-recognition interview tools that were already prohibited as of February 2025), and up to €15 million or 3% for other breaches of high-risk system rules, as confirmed by HireTruffle’s EU AI Act hiring compliance guide (Accessed March 5, 2026).
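To make those caps concrete, here is a minimal sketch of the fine arithmetic in Python, assuming the “whichever is higher” rule applies to both tiers; the function name and turnover figure are illustrative, not legal advice:

    def max_fine_eur(turnover_eur: float, banned_practice: bool) -> float:
        # Banned practices (e.g. emotion-recognition interviews): €35M or 7%.
        # Other breaches of high-risk system rules: €15M or 3%.
        flat_cap, pct = (35_000_000, 0.07) if banned_practice else (15_000_000, 0.03)
        # The applicable ceiling is whichever figure is higher.
        return max(flat_cap, pct * turnover_eur)

    # A vendor with €2 billion in global annual turnover deploying a banned
    # practice faces exposure of max(€35M, 7% of €2B) = €140 million.
    print(f"€{max_fine_eur(2_000_000_000, banned_practice=True):,.0f}")

Note the crossover: for any firm with global turnover above €500 million, the percentage cap, not the flat cap, is the binding number in both tiers.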

Critically, the EU AI Act’s reach is not limited to European companies. If your AI tool’s output is used to make hiring decisions affecting workers in the EU—even if your company is headquartered in San Francisco and the candidate is a remote hire in Berlin—you are in scope, as explained by IRIS Global’s EU AI Act HR Compliance Guide (Accessed March 5, 2026).

What This Actually Means for the People on the Other Side of the Algorithm

I want to step back from the legal mechanics for a moment, because it is easy to get lost in opt-in deadlines and compliance checklists and forget what is actually at stake here.

Algorithmic hiring tools are not neutral. They are trained on historical data that reflects historical workforce patterns—which is to say, historical exclusions. When a model learns from incumbent employees, it learns to prefer candidates who look like incumbents: typically younger, typically white, typically male in many technical domains, typically without disabilities. It then scores and ranks thousands of applicants against that implicit template, often before a human being has read a single word of a single résumé.

The candidate who spent 45 minutes tailoring her cover letter has no idea her application was scored in 90 seconds at 2 a.m. and placed near the bottom of a ranked list. The candidate who was rejected instantly doesn’t know whether it was his race, his age, his disability disclosure, or simply that his carefully worded LinkedIn profile doesn’t match the patterns in a training dataset he has never seen and can never inspect.

That information asymmetry is the core ethical problem—and it’s why I believe the legal developments of this week are genuinely significant, not just as litigation risk, but as a forcing function for the kind of transparency that should have been required from the start.

What Employers and Job Seekers Should Do Now

For employers using AI hiring tools:

The “my vendor handles compliance” assumption is now formally dead. Audit every AI tool in your recruiting stack—résumé screeners, video interview analyzers, skills assessments—and ask your vendors hard questions about CE marking timelines, bias testing methodology, and candidate notification workflows. Disable any feature that uses emotion detection, voice tone analysis, or biometric trait inference immediately; these have been prohibited under the EU AI Act since February 2025. Build human review checkpoints and document them. Your legal exposure, under both U.S. case law and EU regulation, is real.
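For concreteness, here is a minimal sketch in Python of the kind of adverse-impact check you might ask a vendor to demonstrate, based on the EEOC’s four-fifths rule; the age groups and counts below are hypothetical:

    # Hypothetical screening outcomes by age group from an AI résumé screener.
    groups = {
        "under_40": {"applicants": 1000, "advanced": 250},
        "40_plus": {"applicants": 1000, "advanced": 150},
    }

    # Selection rate = share of each group's applicants the tool advanced.
    rates = {g: d["advanced"] / d["applicants"] for g, d in groups.items()}
    best = max(rates.values())

    # EEOC four-fifths rule: a group's selection rate below 80% of the
    # highest group's rate is treated as evidence of adverse impact.
    for group, rate in rates.items():
        ratio = rate / best
        flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.0%}, ratio vs. best={ratio:.2f} -> {flag}")

In this hypothetical, the 40-plus group’s 15% selection rate is only 60% of the under-40 group’s 25% rate, well below the four-fifths threshold, and exactly the kind of statistical showing a disparate impact claim like Mobley’s is built on.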

For job seekers:

If you are 40 or older and applied through Workday’s platform since September 2020, visit workdaycase.com before March 7, 2026. If you have been rejected through AI interview tools and believe the process was inaccessible or discriminatory, document everything (screenshots, timestamps, rejection language) and consult an employment attorney. State and local laws in California, New York City, Illinois, and Colorado already give you specific rights related to automated decision-making in hiring.


The past few years built a mythology around AI recruiting: that it would eliminate human bias, that it would be more objective, more scalable, more fair. The lawsuits landing right now are a direct refutation of that mythology. Algorithms are not neutral. They carry the biases of the data they were built on and the incentives of the organizations that trained them. The question was never whether AI hiring tools could be biased—researchers answered that years ago. The question was always when the legal system would catch up.

That moment appears to be now.



