Is Your AI Breaking the Law? Why CHROs Can No Longer Outsource AI Governance
The First AI Governance Book Written for Talent Leaders
Charting a New Path: Introducing “When AI Breaks the Law”
For HR leaders navigating this new reality, there is now a definitive guide. “When AI Breaks the Law: AI Governance for Talent Leaders” by Margaret Spence is the essential playbook for establishing durable, defensible, and ethical AI governance. The book will be available in stores on April 20th. Join the When AI Breaks the Law release list → (Sign Up Here)
The era of trusting AI vendors with your organization’s legal fate is over. The landmark case of Mobley v. Workday is a definitive wake-up call for every senior human resources leader. Derek Mobley, a qualified African American applicant over 40, allegedly applied for more than 100 jobs that used Workday’s AI screening tools and was rejected each time. In one instance, he submitted an application at 12:55 a.m. and received an automated rejection at 1:50 a.m.—a clear signal that no human was involved in the decision.
The court’s ruling sent shockwaves through the industry: AI vendors can be considered “legal agents of the employer.” This critical decision means that accountability for discriminatory outcomes cannot be outsourced through a contract or delegated to a “black box” algorithm. This isn’t just a compliance headache; it’s a balance-sheet catastrophe in the making. One CHRO in our research estimated the potential liability from a single talent system at $365 million, an uninsured risk. The age of consequence for HR has begun, and “we trusted the vendor” is no longer a defense.
The Anatomy of an AI Failure: Why Good Intentions Create Systemic Risk
The Mobley case was not an isolated incident caused by a single flawed algorithm. It was the result of a systemic failure—a complex web of decisions by well-intentioned people across HR, IT, Legal, and vendor organizations. Traditional risk management, which seeks to identify a single point of failure, is dangerously inadequate for governing AI systems where responsibility is so widely distributed.
Recurring failure patterns are woven into how most organizations deploy AI technology. These patterns—such as the “Speed Trap,” where the pressure to deploy outpaces the ability to test, and the “Vendor Shuffle,” where unverified vendor claims are treated as due diligence—interconnect and amplify one another, creating hidden systemic risk. Your current approach to writing AI handbook policies likely misses this bigger picture, leaving your organization exposed to the same kind of catastrophic failure outlined in the Mobley lawsuit.
The Leadership Gap: Are Your Teams Prepared to Govern?
With accountability now firmly in your hands, the critical question is whether your team can lead AI governance. True readiness requires a delicate balance of two distinct competencies: AI Fluency, the ability to understand the technology, and Governance Capability, the ability to manage its risks.
Unfortunately, many leadership teams are dangerously unbalanced. At one end of the spectrum is the tech-savvy “Risky Innovator,” who champions new tools without understanding their legal or ethical consequences. At the other end is the compliance-focused “Policy-Driven Steward,” who is strong on policy but lacks the technical comfort to evaluate whether an AI system meets those standards. An unbalanced team is unprepared and vulnerable to the very risks you are now responsible for mitigating.
An Expert Guide for an Unprecedented Challenge
Margaret Spence is a human resources and risk management strategist with over 35 years of experience, having managed more than $495 million in workplace litigation claims as a risk professional. Her front-row view of how well-intentioned systems create catastrophic risk shaped her core belief that “AI governance is not red tape—it’s leadership.” Spence is the expert HR leaders need to translate complex legal challenges into clear operational imperatives.
From Theory to Actionable Frameworks
This book goes beyond abstract principles to provide the operational tools needed to build a robust governance program. It presents practical frameworks to address the systemic failures and leadership gaps outlined above, including:
The SimpliFocus AI Governance System™️: A comprehensive five-step implementation plan to build a durable governance architecture.
Your First Step Towards Accountable AI
In this new age of consequences, ignorance of AI risk is no longer a defense against liability. True leadership requires proactive, informed steps to protect your organization from legal, financial, and reputational harm. Reading this book is the single most important first step to understanding the new rules of engagement and building a governance framework that protects your organization, your people, and your professional legacy.
Get the book release + companion tools.
Sign up to receive the When AI Breaks the Law launch announcement, bonus resources, and next steps for implementing AI governance in HR.
About AI Governance for HR CoLab Workspace
Margaret Spence, author of When AI Breaks the Law, helps HR and talent leaders operationalize AI governance across hiring, performance, and promotion. This CoLab workspace delivers daily frameworks to bridge the gap between compliance documentation and ethical AI principles—the gap where the $365M Mobley lawsuit occurred. You’ll build governance infrastructure that reduces legal, reputational, and EU AI Act compliance exposure before AI-driven talent decisions scale bias into discrimination lawsuits.
Join us live every Wednesday for our Let’s Build Governance Together Series. Our live programs are a premium edition of this CoLab.