PODCAST 8: Legal Risks and Responsibilities: Navigating Workday's AI and Data Challenges

In this episode, we delve into the substantial legal risks faced by Workday, Inc. regarding AI-driven discrimination, data breaches, and system failures.

https://monica.im/ai-podcast/share?id=0b980d9e-a37d-4362-be51-ff687cb032b8

With increasing regulatory scrutiny, employers using Workday’s services must understand the liabilities they may inherit, from bias in hiring practices to data breaches and software outages. We discuss the implications of these challenges and the importance of proactive measures to mitigate legal repercussions for both Workday and its clients.

As regulations around AI and data privacy continue to tighten, it’ll be an ongoing challenge for companies to stay compliant. The message here is clear: take proactive steps now, or risk getting trapped in a legal labyrinth later on.

Employers truly need to scrutinize the fine print of their contracts with Workday. Often, they’ll uncover clauses that limit Workday's responsibility, so the onus falls on them for compliance. It’s like purchasing a car with a warranty that doesn’t cover engine faults. You might end up bearing the costs, and, well, it’s not pretty, right?

Data breaches represent another critical area of risk. Imagine if one of Workday's systems gets hacked, and sensitive employee data leaks out. It’s like leaving the front door of your house wide open! Identity theft can follow, leading to lawsuits against Workday—and their clients could find themselves enmeshed in that chaos too. The financial fallout could hit like a sledgehammer.

Source document: Workdays Growing Liabilities and Customer Scapegoats.pdf

https://www.sec.gov/ixviewer/ix.html?doc=/Archives/edgar/data/0001327811/000132781124000242/wday-20241031.htm#ic43cfeadf79e46479373826377060886_316

Based on Workday’s 2024 SEC filing and supported by ongoing academic and legal scrutiny, the following is a summary of critical risks associated with Workday’s enterprise systems—including AI hiring algorithms, data brokerage integration, and algorithmic profiling—with significant implications for:

  • Customers and employers using Workday’s systems

  • Job applicants

  • Employees

  • Investors and regulators

⚠️ Critical Risks in Workday’s Enterprise AI and Data Practices

  1. Shadow Profiles & Data Brokerage Violations

Issue: Workday systems integrate third-party data—often from data brokers—without transparent user consent, raising major privacy and compliance risks.

  • AI models may rely on resume scraping, social media signals (e.g., LinkedIn), and predictive scoring that create unverified candidate profiles (a toy illustration follows the quote below).

  • This raises potential violations of the Fair Credit Reporting Act (FCRA), GDPR, CCPA, and EEOC standards, especially where decisions are automated.

“These systems… may include personal data… drawn from public and commercial sources, and are subject to privacy regulation.”

— Workday 2024 SEC Filing, p. 54
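
To make the “shadow profile” concern concrete, here is a deliberately toy Python sketch of how broker data can be merged onto an application without the candidate’s knowledge. Every source, field, and value below is invented for illustration; none of it is drawn from Workday’s actual pipeline.

```python
# Toy illustration of how a "shadow profile" can form. Every source,
# field, and value here is invented; none of this reflects Workday's
# actual data pipeline.

application = {
    "name": "J. Doe",
    "email": "jdoe@example.com",
    "stated_skills": ["accounting"],
}

# A hypothetical third-party broker feed, keyed by email address.
broker_feed = {
    "jdoe@example.com": {
        "inferred_age_band": "55+",
        "social_signals": ["infrequent poster"],
    }
}

def enrich(profile: dict, feed: dict) -> dict:
    """Silently merge broker data into the application record."""
    extra = feed.get(profile["email"], {})
    # Nothing marks which fields the candidate actually provided.
    return {**profile, **extra, "verified": False}

shadow = enrich(application, broker_feed)
print(shadow)  # downstream scoring cannot tell stated facts from broker guesses
```

The point of the sketch is the merge step: once broker guesses land in the same record as stated facts, nothing downstream distinguishes them, and the candidate never consented to either the collection or the inference.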

  2. Undisclosed Use of Experimental AI

Workday admits to embedding large language models (LLMs) and experimental AI systems into its talent and finance suite.

  • These systems are not subject to clear regulation and may be used to predict hiring outcomes.

  • Developers are actively experimenting, despite “unclear lines of legal responsibility.”

“The line between developers and deployers of these technologies… remains largely unclear.”

— Workday 2024 SEC Filing

  3. Legal Exposure for Employers

Employers who use Workday may bear liability for discriminatory outcomes, even if unintentional.

  • A pending federal discrimination lawsuit claims that Workday’s systems result in adverse impacts based on age, race, and disability.

  • Employers named as co-defendants could face reputational and financial damage.

“We already are defending against a lawsuit alleging that our products and services enable discrimination…”

— Workday 2024 SEC Filing

  4. System Failures and Data Leaks

  • Workday reported a November 2023 incident where PDF documents were sent to unintended recipients within organizations.

  • Such breaches erode trust and expose firms to data protection liability under EU/US privacy laws.

“In November 2023, we discovered… PDF documents being sent to unintended recipients.”

— Workday 2024 SEC Filing

  5. Unpredictable Algorithmic Decision-Making

  • AI and scoring tools may assign candidate values based on factors unrelated to skills or performance (e.g., formatting, keywords, prior roles).

  • With over 625 billion data points and 55,000 AI-generated “skills” (per vendor documentation), this becomes a black-box profiling system, as the sketch below illustrates.
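
As an illustration of how non-skill features can dominate such a ranking, here is a deliberately naive keyword scorer. The weights and resume text are invented stand-ins; nothing here reflects Workday’s actual models.

```python
# Deliberately naive keyword scorer -- an invented stand-in, not
# Workday's model. Points accrue per matching token, so wording and
# repetition, not ability, drive the ranking.

KEYWORD_WEIGHTS = {"python": 3.0, "agile": 2.0, "synergy": 1.5}

def score_resume(text: str) -> float:
    """Sum keyword weights over every token in the resume text."""
    return sum(KEYWORD_WEIGHTS.get(tok, 0.0) for tok in text.lower().split())

# Two candidates with equivalent skills, different vocabulary:
buzzword_heavy = "Led agile agile teams and delivered synergy in python projects"
plain_spoken = "Built and maintained python services in production for six years"

print(score_resume(buzzword_heavy))  # 8.5 -- repetition inflates the score
print(score_resume(plain_spoken))    # 3.0 -- same core skill, lower rank
```

Real systems are far more elaborate, but the failure mode is the same: the score rewards surface features of the text, and the candidate has no way to see which ones.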

  6. Noncompliance with AI and Data Protection Laws

Workday faces increasing pressure to comply with:

  • EU AI Act

  • U.S. state privacy laws (CCPA, CPRA)

  • Global data residency regulations (India, China, EU)

Yet the company warns that:

“Failure to comply… could lead to monetary penalties of up to 4% of worldwide revenue.”

— Workday 2024 SEC Filing

Implications

For Employers:

  • You may be held legally liable for the decisions made by AI systems you license.

  • Workday does not guarantee full compliance with existing or emerging data protection laws.

For Job Seekers & Employees:

  • Your application may be ranked or rejected by an algorithm based on incomplete or incorrect third-party data.

  • You have no clear rights to review or correct your Workday profile if scored unfavorably.

For Investors:

  • Any adverse litigation, data breach, or regulatory action could result in stock devaluation, SEC exposure, or class-action lawsuits.

Strategic Recommendations and Action Plan

1. For Institutions & Employers Using Workday

Audit Your AI Systems Now:

  • Conduct independent bias audits on all Workday AI modules (e.g., Skills Cloud, Candidate Scoring); a minimal audit sketch follows this list.

  • Demand full disclosure on data sources, algorithm logic, and third-party data usage.

  • Create candidate appeal pathways and ensure human oversight of any automated rejection.
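
One concrete starting point for a bias audit is the EEOC’s four-fifths (80%) rule: compare each group’s selection rate to the highest group’s rate and flag ratios below 0.8. Below is a minimal sketch, assuming you can export one outcome record per screened applicant; the group labels and data are placeholders.

```python
from collections import Counter

# Minimal disparate-impact check using the EEOC four-fifths rule.
# Assumes you can export one (group, advanced) record per screened
# applicant; the group labels and outcomes below are placeholders.

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, advanced = Counter(), Counter()
for group, passed in outcomes:
    totals[group] += 1
    advanced[group] += passed  # bool counts as 0/1

rates = {g: advanced[g] / totals[g] for g in totals}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio is not legal proof of discrimination, but it is exactly the kind of signal an independent auditor should be able to compute from exported screening data.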

Review Contracts and Liability Exposure:

  • Examine Workday’s indemnification clauses. Many vendors shift AI liability to clients.

  • Determine whether privacy terms meet state/federal legal thresholds (CCPA, GDPR, FCRA).

  • Build internal risk registers for AI model use that include regulatory exposure and ethics assessments (a minimal entry sketch follows).
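
As a starting point, a register entry might capture what the sketch below records; the schema is an assumption, not a standard, and should be adapted to your own compliance program.

```python
from dataclasses import dataclass, field

# Hypothetical risk-register entry for one licensed AI module. The
# schema is illustrative, not a standard; adapt fields to your own
# compliance program.

@dataclass
class AIModelRiskEntry:
    module: str                   # e.g., "Candidate Scoring"
    vendor: str
    decisions_automated: bool     # rejects or ranks without human review?
    data_sources: list[str]      # known inputs, incl. third-party brokers
    regulations: list[str]       # laws the module may implicate
    last_bias_audit: str | None  # ISO date of most recent audit, if any
    open_issues: list[str] = field(default_factory=list)

entry = AIModelRiskEntry(
    module="Candidate Scoring",
    vendor="Workday",
    decisions_automated=True,
    data_sources=["resume text", "third-party broker feeds"],
    regulations=["FCRA", "GDPR", "CCPA", "EU AI Act"],
    last_bias_audit=None,  # a missing audit is itself a register finding
    open_issues=["no candidate appeal pathway documented"],
)
print(entry)
```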


2. For Policy Makers and Regulators

Expand FTC and EEOC Oversight into AI Hiring Vendors:

  • Enforce Section 5 of the FTC Act (unfair or deceptive practices) against vendors that fail to disclose data brokerage and shadow profiling.

  • Use EEOC guidance to audit systems producing disparate impact in hiring, even when operated by third parties.

Treat AI Candidate Scoring as Consumer Reporting:

  • Apply FCRA standards to AI employment risk scores: transparency, accuracy, ability to dispute, and consent.

  • Require pre-adverse action notice if AI-generated insights are used to reject candidates; a sketch of such a gate follows this list.
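
Operationally, that requirement could be enforced as a gate in the applicant-tracking workflow: no AI-driven rejection becomes final until the candidate has received notice and the score used, a dispute window has passed, and a human has signed off. The sketch below is illustrative only; the function names, fields, and five-day window are assumptions, not statutory requirements.

```python
from datetime import date, timedelta

# Hypothetical gate on an AI-driven rejection, in the spirit of FCRA
# pre-adverse-action practice. Function names, fields, and the 5-day
# window are illustrative assumptions, not statutory requirements.

DISPUTE_WINDOW_DAYS = 5

def begin_adverse_action(candidate_email: str, score_report: dict) -> dict:
    """Open a pre-adverse-action case instead of rejecting outright."""
    return {
        "candidate": candidate_email,
        "report_disclosed": score_report,  # transparency: share the score used
        "dispute_deadline": date.today() + timedelta(days=DISPUTE_WINDOW_DAYS),
        "human_reviewed": False,
        "final": False,
    }

def finalize_rejection(case: dict) -> bool:
    """Finalize only after the dispute window closes and a human signs off."""
    if date.today() <= case["dispute_deadline"]:
        return False  # candidate may still dispute the underlying data
    if not case["human_reviewed"]:
        return False  # an automated decision alone cannot finalize
    case["final"] = True
    return True
```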


3. For Job Seekers and Advocates

Know Your Rights:

  • Under the FCRA and GDPR, you may have the right to:

    • Know what data was used to evaluate you.

    • Challenge inaccurate or discriminatory scoring.

    • Request human review.

Take Action Through Advocacy:


Conclusion: Toward Fair, Legal, and Transparent AI

Workday’s 2024 SEC filing is one of the clearest public admissions that today’s AI-driven employment systems are legally uncertain, ethically questionable, and increasingly risky for employers, job seekers, and investors alike.

Despite its public messaging about fairness and innovation, Workday acknowledges:

  • It cannot ensure non-discrimination.

  • It is defending civil rights lawsuits.

  • Its AI practices may violate global data laws.

  • It may suffer reputational and financial losses if regulators or courts act.

We cannot continue to let algorithms make life-changing decisions in secret. Whether you’re a university, a recruiter, or a job applicant—now is the time to demand transparency, accountability, and consent in all AI hiring systems.

Supporting Research

  • Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & Technology.

  • Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. fairmlbook.org

  • Kim, P.T. (2017). Data-driven discrimination at work. William & Mary Law Review, 58(3), 857.

  • Angwin, J. et al. (2016). Machine bias. ProPublica.

  • Wexler, R. (2019). Life, liberty, and trade secrets. Stanford Law Review.