AI Ethics · August 7, 2025 · 10 min read

AI Ethics Isn't Optional. It's the Foundation of Responsible Innovation.

Hiring is one of the highest-stakes places we deploy AI. A model doesn't just "optimise workflow" here. It can determine who gets income, stability, healthcare, and a future.

That is why Mobley v. Workday matters.

The case centres on allegations that AI-enabled hiring workflows disproportionately rejected older applicants, alongside broader concerns around other protected characteristics. What makes it consequential is not only the discrimination claim itself, but the evolving legal logic around vendor accountability, especially when AI systems shape outcomes at scale.

And the case has progressed in ways that should make every HR leader, product owner, and AI vendor pause:

  • A US federal court allowed key claims to proceed under a theory that a software provider can be treated as an "agent" in the hiring process—opening the door to liability beyond the employer alone.
  • In 2025, the court granted conditional certification for an age-discrimination collective action, signalling the dispute may extend beyond one individual and toward a wider class of impacted applicants.
  • The court also moved the case toward greater transparency about which employers enabled certain AI features (including mechanisms to manage disclosure), reinforcing that "we didn't build it, we just used it" and "we built it, they used it" are both weak defences in an AI-mediated chain of decisions.

This isn't only a legal story. It's a governance story.

What AI ethics actually demands in hiring

AI ethics is often framed as values—fairness, transparency, accountability. That's true, but incomplete.

In practice, AI ethics is a discipline of design choices and operational controls that reduce harm before people experience it. In hiring, that means building systems that can answer three questions without hand-waving:

1) Fairness: "Who does this system disadvantage, and why?"

A tool can be "blind" to age and still disadvantage older applicants through proxies (experience length, graduation year patterns, job-title trajectories, gaps in employment, etc.). Disparate impact is not a theoretical risk; it's a predictable failure mode when historical data reflects historical inequality.
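To make that concrete, here is a minimal sketch of how a team might screen for it, using the common four-fifths-rule heuristic: compare each group's selection rate with the most-selected group's rate and flag ratios below 0.8. The column names, age bands, and threshold are illustrative assumptions, not a reference to any particular tool.

```python
# Minimal sketch: adverse impact ratio ("four-fifths rule") by age band.
# Column names ("age_band", "advanced") are illustrative assumptions.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "age_band",
                          outcome_col: str = "advanced") -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical data: 30% of under-40s advance, 19% of 40+ applicants do.
applicants = pd.DataFrame({
    "age_band": ["under_40"] * 200 + ["40_plus"] * 200,
    "advanced": [1] * 60 + [0] * 140 + [1] * 38 + [0] * 162,
})
ratios = adverse_impact_ratios(applicants)
print(ratios[ratios < 0.8])  # groups showing potential adverse impact
```

A ratio below 0.8 is a screening signal, not a legal finding: it tells you where to go looking for proxies, not whether discrimination has occurred.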

2) Transparency: "Can we explain what happened in plain English?"

If a candidate is rejected in minutes, with no understandable reason and no route to challenge, you've created an accountability vacuum. In the UK/EU context, rules on automated decision-making and profiling (including Article 22 restrictions on solely automated decisions with legal or similarly significant effects) raise the bar further.

3) Accountability: "Who owns the outcome end-to-end?"

Hiring AI is rarely a single model. It's a workflow: parsing, scoring, ranking, filtering, recruiter UI defaults, and human review behaviour shaped by system suggestions. Accountability must cover the whole socio-technical system, not just model metrics.
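One way to make that end-to-end ownership tangible is to record a structured trace for every candidate at every stage, so any outcome can be reconstructed later, including where a human accepted or overrode a system suggestion. The sketch below is an illustrative data structure under assumed stage names and fields; it does not describe any vendor's actual schema.

```python
# Illustrative sketch of an end-to-end decision trace for one candidate.
# Stage names and fields are assumptions, not any vendor's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StageRecord:
    stage: str               # e.g. "parse", "score", "rank", "filter", "human_review"
    actor: str               # which component or person acted
    inputs_hash: str         # fingerprint of the data the stage saw
    output: str              # what the stage decided or produced
    config_version: str      # feature flags / model version in force
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class CandidateTrace:
    candidate_id: str
    stages: list[StageRecord] = field(default_factory=list)

    def record(self, entry: StageRecord) -> None:
        """Append one stage's record so the full chain can be replayed."""
        self.stages.append(entry)
```

With a trace like this, a rejection can be explained stage by stage rather than attributed to "the model" in the abstract.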

The uncomfortable truth: ethics can't wait for regulation

Regulation is catching up, but it is fragmented and uneven:

  • New York City Local Law 144 requires bias audits and notices for automated employment decision tools (AEDTs), with enforcement that began in July 2023.
  • US enforcement is increasingly shifting to state and local activity, while courts test how existing civil rights frameworks apply to algorithmic decisions.
  • In the EU, employment-related AI use is treated as high-risk under the AI Act framework (with phased implementation), reinforcing that hiring is not a casual sandbox for experimentation.

But even the best regulation will always lag real deployments. Responsible innovation requires organisations to behave as if the standard already exists—because the harm is already possible.

A practical ethics checklist for hiring AI (what "responsible" looks like)

If you're building, buying, or deploying hiring AI, these are the questions that separate performative ethics from operational ethics:

A. Prove you can measure harm

  • Can you run adverse impact analysis across protected groups (and intersections) before deployment and continuously after?
  • Are your metrics tied to real decision points (shortlisting, rejection, ranking), not just offline model performance? (See the sketch below.)
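As a minimal sketch of what "tied to real decision points" can look like, the function below computes selection rates for intersections of protected groups at a single decision point; running it per stage (shortlisting, rejection, ranking) gives funnel-level visibility rather than one offline metric. The column and stage names are hypothetical.

```python
# Illustrative sketch: fairness metrics at an actual decision point,
# broken down by intersections of groups. Column names are assumptions.
import pandas as pd

def intersectional_rates(df: pd.DataFrame,
                         group_cols: list[str],
                         decision_col: str) -> pd.Series:
    """Selection rate at one decision point for each intersection of groups."""
    return df.groupby(group_cols)[decision_col].mean()

# Hypothetical usage, per funnel stage:
# intersectional_rates(candidates, ["age_band", "sex"], "shortlisted")
# intersectional_rates(candidates, ["age_band", "sex"], "interviewed")
```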

B. Design for challenge and recourse

  • Do candidates know when AI is used and what it is doing in the process? (Notice is becoming a baseline expectation.)
  • Can a candidate request human review, correction of input data, or a meaningful explanation—without needing a lawyer?

C. Govern the vendor relationship like a high-risk partnership

  • Are bias audits, documentation, and monitoring rights written into procurement and renewals?
  • Can you trace which features were activated, how they were configured, and who approved them?

D. Don't ignore disability risk

Automated screening can discriminate against disabled candidates in ways teams don't anticipate (for example, assessments that penalise certain speech patterns, response times, or gaps in work history). US agencies have explicitly warned about disability discrimination risks in algorithmic hiring.

The takeaway

The lesson from Mobley v. Workday is not "don't use AI in hiring."

It's this:

If AI influences employment outcomes, it must be treated as a high-stakes system—audited, monitored, explainable, and governed across the full chain of responsibility.

Because the real risk isn't that AI will be biased. The real risk is that we will scale it before we build the controls that fairness requires.

Topics

AI Ethics · Responsible AI · HR Tech · Future of Work · AI Bias · DEI · Hiring

Need guidance on AI governance?

If you're navigating AI ethics, governance challenges, or regulatory compliance, we can help clarify priorities and next steps.

Book a Readiness Consultation