Your AI Hiring Tool Is a Legal Time Bomb
5 Hard Truths From The Landmark Mobley v. Workday Lawsuit
At 1:50 AM on a Tuesday, Derek Mobley received yet another job rejection. It was his 100th. Each rejection came from a different employer, but they all used the same AI-powered screening system: Workday. People didn’t reject Mobley; a system erased him.
His experience is a warning shot for the 99% of Fortune 500 companies that now use automated hiring tools. Mobley’s lawsuit, a landmark class-action suit valued at over $365 million, has exposed a massive, overlooked liability for any business that deploys algorithms as silent gatekeepers. The case isn’t a glitch; it’s a systemic governance failure. Here are five essential, often surprising lessons this case offers to any leader using automated hiring systems.
1. Your Vendor Can’t Protect You: The Myth of Neutral Tech
The single most important legal precedent from the Mobley v. Workday case is that AI vendors are considered legal agents of the employer. This means your organization is directly and fully liable for any discrimination committed by the AI. The long-standing defense of blaming the vendor or hiding behind a “black box” is no longer viable.
The practical implication of the court’s ruling is unambiguous:
“AI vendors are not neutral tech providers. They are your legal agents.”
This was not the work of a single “villain” programmer. Catastrophic AI failures stem from a web of rational decisions by well-meaning people across your organization—talent leaders trying to hire faster, engineers building predictive models, and executives trusting a vendor’s marketing claims. Ultimately, that trust is no substitute for rigorous, independent governance. If their algorithm discriminates, your organization will be held accountable.
2. AI Doesn’t Just Discriminate—It Hyper-Scales It
AI learns from our past, which is full of discrimination. When we deploy AI without governance, we don’t eliminate bias. We automate it. We scale it.
The true danger of AI isn’t just that it’s unfair but that it operates at a scale and speed humans cannot match. The Mobley case illustrates this with staggering clarity: the system in question rejected 1.1 billion applications between September 24, 2020, and 2025. A single biased parameter in the code transformed what might have been individual errors in human-led hiring into a massive class-action lawsuit. This is how a tool meant to increase efficiency can become a talent brand-killer and invite reputational collapse.
3. The Lawsuit Will Have Your Name on It
The risk posed by a discriminatory AI system is not just organizational—it’s personal. The Mobley case shows that courts, regulators, and the media will identify the specific executives who approved and deployed the system as the responsible parties.
You may have believed you were innovating. Now you’re justifying to the CEO, the board, and a federal judge why your state-of-the-art system rejected millions of qualified applicants. For a CHRO or VP of Talent, good intentions are not a defense. You are judged by outcomes.
“You don’t receive credit for good intentions. You are held responsible for the bad results.”
4. “Bias-Free” Is a Marketing Slogan, Not a Legal Defense
The issues exposed in the Mobley case are mirrored in numerous other real-world lawsuits. In case after case, employers deployed tools marketed as fair, only to face costly litigation when that trust proved misplaced.
iTutorGroup: In the first federal AI discrimination settlement, the company paid $365,000 after its AI automatically rejected all female applicants over age 55 and all male applicants over age 60.
Amazon: An internal recruiting tool was famously discontinued after it was found to penalize resumes containing the word “women’s” and to favor male-associated language.
CVS/HireVue: A lawsuit involving an AI that analyzed facial expressions to generate “employability scores” was settled privately amid concerns that it functioned as an illegal pre-employment lie detector test.
Harper v. Sirius XM: In a pending case seeking class-action status, a Black applicant alleges that an AI screening system uses factors such as employment gaps and geography as illegal proxies for race discrimination.
The common thread is clear: every tool was deployed as a black box, and every employer learned too late that a vendor’s promise is not a legal shield.
5. Plausible Deniability Is Officially Dead
The conversation about AI bias has fundamentally shifted. Five years ago, it was a theoretical discussion. Today, it is an established legal reality, with a growing list of court dockets and settlement checks. The era of claiming ignorance about how your hiring systems work is over.
“The Mobley lawsuit closed the door to plausible deniability.”
Leaders drift into catastrophic failure by relying on a set of now-dangerous assumptions: “Our vendor is different.” “This is too technical for HR.” “We haven’t been sued yet.” Your AI is making legally binding decisions right now. Delaying oversight is a direct acceptance of risk.
From Awareness to Action
Moving from passive risk to active governance requires asking hard questions now, before they are asked of you in a deposition.
When was the last time anyone in your organization questioned whether an AI system was producing fair outcomes? If no one is asking, the system is not governed.
Ask your hiring managers: “Have you ever overridden an AI recommendation?” If the answer is consistently no, you don’t have human oversight—you have compliance theater.
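Fairness questions like these can be made concrete. Under the EEOC’s long-standing four-fifths rule of thumb, a selection rate for any group below 80% of the highest group’s rate is treated as prima facie evidence of adverse impact. A minimal sketch of that check follows; the group labels and counts are hypothetical illustration data, not figures from any case discussed above:

```python
# Hedged sketch: the EEOC four-fifths (80%) adverse-impact check.
# Group names and applicant counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest-selected group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

if __name__ == "__main__":
    screened = {
        "group_a": (90, 300),   # 30% advanced past the AI screener
        "group_b": (30, 200),   # 15% advanced
    }
    # group_b's ratio is 0.15 / 0.30 = 0.5, well under the 0.8 threshold
    print(adverse_impact(screened))
```

A check this simple is not a legal defense by itself, but running it regularly on screener outputs is the difference between governed oversight and compliance theater.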
The 1:50 AM Question
The era of passive trust in AI vendors is over. Active, human-led governance is no longer optional; it is a non-negotiable business imperative. You must evolve from vendor dependence to governance readiness.
This brings us back to that 1:50 AM rejection. How many automated rejections has your system issued this year?
If you don’t know the answer, that’s your first risk.
About AI Governance for HR CoLab Workspace
Margaret Spence, author of When AI Breaks the Law, helps HR and talent leaders operationalize AI governance across hiring, performance, and promotion. This CoLab workspace delivers daily frameworks to bridge the gap between compliance documentation and ethical AI principles, the same gap that produced the $365M Mobley lawsuit. You’ll build governance infrastructure that reduces legal, reputational, and EU AI Act compliance exposure before AI-driven talent decisions scale bias into discrimination lawsuits.
Join us live every Wednesday for our Let’s Build Governance Together series. Our live programs are a premium edition of this CoLab.
Get the SimpliFocus Readiness Assessment Free
Join the waiting list for “When AI Breaks the Law” and get early access to the SimpliFocus AI Governance Readiness Assessment. Find out in 10 minutes where your governance gaps are before they become lawsuits.




