Your AI innovation in HR is now your primary legal vulnerability. The era of passive adoption has officially ended.
For years, HR leaders watched as AI tools popped onto their desks. We were told these systems would make hiring faster, selection more objective, and workflows more efficient. We chased the “shiny red ball” of productivity.
But as we moved through our recent webinar, “AI, HR, and AI Governance,” one thing became clear: 2026 is the year HR gets sued because of AI.
The legal landscape is shifting. Courts no longer accept “I didn’t know how it worked” as a valid defense. If an algorithm touches human potential, the responsibility rests with you—not IT, and certainly not the vendor.
The $365 Million Wake-Up Call
We spent a significant portion of our session discussing the landmark case of Mobley v. Workday. Derek Mobley, a qualified software engineer, applied for 100 jobs using the Workday platform. He received 100 rejections.
One rejection arrived at 1:50 AM.
When a rejection hits your inbox in the middle of the night, you know no human being reviewed your application. That case has now been certified as a class action affecting 1.1 billion applicants, with an exposure of $365 million.
The court drew a sharp line: AI vendors are the employer’s legal agents. You cannot outsource your liability to a software provider.
Video Timestamps
0:00 Introduction and Speaker Background
2:50 The Biggest Challenges in AI Governance
5:30 Why HR Needs Its Own AI Governance
8:15 The 365 Million Dollar Wakeup Call
11:00 Mobley v Workday and Class Action Risks
14:45 EEOC v iTutorGroup and Automated Ageism
18:20 Facial Analysis and ADA Violations in Hiring
21:55 Amazon Performance Algorithms and Systemic Bias
25:30 Why AI Chatbot Conversations are Discoverable in Court
30:10 The Eightfold Case and Fair Credit Reporting Act
34:40 Understanding the Black Box Problem
39:15 Global AI Regulations and the EU AI Act
43:50 Deployer Accountability and Legal Liability
47:25 Insurance Risks and Employment Practices Exclusions
51:00 Security Alert North Korean Operatives and Deepfakes
55:40 The Five Step AI Governance Process for HR
59:10 Implementing a Kill Switch and Human Oversight
1:01:00 Final Resources and Call to Action
Unpacking the Black Box
Most AI currently used in HR operates inside a black box. We have no visibility into how the machine gets from “data in” to “decisions out.”
Our webinar took a deep dive into several high-stakes cases that every HR leader must know:
EEOC v. iTutorGroup: An AI screening program automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. The result? A federal settlement.
CVS v. HireVue: Facial and video analysis was used to generate an “employability score.” Assessing suitability from physiological data in this way violated the ADA.
Amazon Performance Algorithms: Systems designed to track productivity systematically excluded pregnant workers and women who asked for accommodating schedules. Amazon eventually had to shut the system down.
The Discovery Trap: Your Phone Is Now Evidence
We addressed a new, urgent risk: the use of AI on personal devices.
The recent ruling in U.S. v. Heppner (and Warner v. Gilbargo) changed the game for internal investigations. If a manager uses Gemini, Claude, or ChatGPT on their personal phone to draft a disciplinary write-up for an employee, that conversation is discoverable.
Internal discussions on non-enterprise AI about terminations or investigations are not privileged.
HR 2030: Who You Must Become
To manage AI in the workplace, HR must move beyond process management and into systems thinking. We need to architect “epi-systemic safety.”
I’ve introduced a five-step AI Governance Process for HR, starting with these three core takeaways:
Named Accountability: Every AI system in your organization must have a human owner. If you cannot write one name on a piece of paper for a specific tool, that system is ungoverned.
Independent Verification: Stop taking marketing brochures as gospel. You must ask vendors: What data was used to train the algorithm? Can I see the adverse impact stress test? (A sketch of what such a test computes follows this list.)
The Kill Switch: You need a protocol to pause or “kill” a system the moment bias is detected. Automation without a leash is a liability.
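For readers who want a concrete sense of what an adverse impact stress test examines, here is a minimal sketch in Python of the widely used four-fifths rule, which compares selection rates across groups. The column names, sample data, and 0.8 threshold below are illustrative assumptions, not the methodology of any specific vendor or of the webinar.

```python
# Minimal sketch of a four-fifths-rule adverse impact check.
# Column names ("group", "selected"), the sample data, and the 0.8 threshold
# are illustrative assumptions, not any vendor's actual report format.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str = "group",
                          selected_col: str = "selected") -> pd.Series:
    """Each group's selection rate divided by the most-selected group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per group
    return rates / rates.max()

# Hypothetical screening outcomes: 1 = advanced by the tool, 0 = rejected.
applicants = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

ratios = adverse_impact_ratios(applicants)
print(ratios)                      # A: 1.00, B: 0.67 in this made-up sample
flagged = ratios[ratios < 0.8]     # the four-fifths rule of thumb
print("Groups flagging potential adverse impact:", list(flagged.index))
```

If a vendor cannot produce numbers like these on its own validation data, stratified by the protected classes relevant to your jurisdiction, treat that as the red flag it is.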
Moving Forward
Governance is not a policy in a handbook. It is a daily operational requirement.
We can no longer be passive consumers of tools. We must demand proof of fairness. We must train our recruiters and managers to spot red flags. Most importantly, we must keep the “Human” in Human Resources.
AI will create efficiency. It might even handle terminations. But the governance requirement will never go away.
Reflection Question for Your Week: How many 1:50 AM rejections has your system issued this year? If you don’t know the answer, that is your first risk.