Abstract: This article examines the internal and external implications of Workday's AI-powered systems, focusing on shadow profiling, algorithmic hiring, and performance assessments. Drawing on platform documentation, FTC and EEOC guidance, and academic literature, we explore how Workday's integration of large language models (LLMs), data brokerage APIs (e.g., Lightcast.io), and AI systems such as Skills Cloud and People Analytics has altered both the hiring pipeline and the internal dynamics of performance evaluation. The article argues for legal reforms and regulatory transparency measures to counteract discriminatory outcomes, opaque decision-making, and data misuse.
I. Introduction
As large-scale data collection and artificial intelligence increasingly infiltrate employment decisions, technology platforms such as Workday have emerged as powerful mediators between individuals and labor market access. Workday's suite of products, including Skills Cloud, People Analytics, HiredScore, and Career Hub, constructs comprehensive profiles of applicants and employees. These profiles are enriched through data-broker integrations and unverified datasets scraped from online activity, employment history, and inferred behavioral metrics. This article explores the resulting implications for autonomy, fairness, and the right to participate in the workforce.
II. Shadow Profiling and Pre-Hire AI Surveillance
Workday and its partners (e.g., Lightcast.io) have created infrastructures that harvest shadow data to build predictive profiles long before applicants formally engage with an employer. These profiles include inferred skills, behavioral tendencies, educational history, employment risk factors, and even geographic and demographic predictors.
Data Sources: Social media, job boards, scraped public resumes, learning platforms, and proprietary consumer data sets (Lightcast.io claims to maintain over 1 billion job and skill data points).
Shadow Profiles: Unverified data, collected without consent, is assembled to forecast an applicant's "fit," employability, or cultural alignment. Applicants are typically unaware that these profiles exist or that they are being used as a gatekeeping mechanism.
Talent Agencies and Gatekeeping: Increasingly, staffing agencies and corporate recruiters purchase or exchange predictive scores that forecast a candidate's "likelihood of success," reinforcing elite filters and bypassing public vetting mechanisms (a simplified sketch of this profiling pipeline follows this list).
Legal Risk: The Federal Trade Commission (FTC) has warned against "opaque and unaccountable" AI models under Section 5 of the FTC Act. The Equal Employment Opportunity Commission (EEOC) has launched investigations into algorithmic discrimination and AI-driven blacklisting (EEOC, 2021).
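To make the mechanics concrete, the following minimal sketch shows how unverified records from multiple sources could be folded into a single shadow profile with inferred attributes. It is an illustration under assumed field names and a fabricated inference rule, not Workday's or Lightcast's actual code.

```python
# Hypothetical sketch of shadow-profile assembly from brokered and scraped
# records. Field names, sources, and the inference rule are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ShadowProfile:
    email: str
    sources: list[str] = field(default_factory=list)
    attributes: dict[str, object] = field(default_factory=dict)

def merge_records(email: str, records: list[dict]) -> ShadowProfile:
    """Fold unverified records from multiple sources into one profile."""
    profile = ShadowProfile(email=email)
    for record in records:
        profile.sources.append(record.get("source", "unknown"))
        profile.attributes.update(record.get("fields", {}))  # no verification step
    # Inferred attribute: graduation year used as a crude age proxy.
    grad_year = profile.attributes.get("graduation_year")
    if isinstance(grad_year, int):
        profile.attributes["estimated_age"] = 2024 - grad_year + 22
    return profile

records = [
    {"source": "job_board_scrape", "fields": {"skills": ["SQL"], "graduation_year": 1995}},
    {"source": "data_broker_api", "fields": {"zip_code": "60601"}},
]
print(merge_records("a.applicant@example.com", records).attributes)
```

Note how the age proxy appears in the profile even though no source ever supplied an age, which is precisely the inference pattern that later sections argue triggers ADA and Title VII exposure.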
III. Internal Surveillance: Workday's AI and the Employee Lifecycle
Workday's internal analytics systems impact performance management, promotions, layoffs, and workforce reshuffling:
Workday Skills Cloud
Uses machine learning to generate a dynamic skill graph that continuously updates based on employee data, job postings, and evolving organizational needs.
Embedded with metadata extracted from resumes, internal mobility history, digital training completions, certifications, and even collaboration patterns within Workday's platform.
Automates the identification of "adjacent" skills, those related to skills in which the employee already demonstrates proficiency, thereby suggesting future learning paths and eligibility for alternative roles (a simplified sketch of this adjacency logic follows the citations below).
Claims to process over 200 million verified skill records (Workday, 2024). The system ingests structured and unstructured data to surface relevant capabilities without human intervention.
Citations:
Workday. "Skills Cloud." https://www.workday.com/en-us/products/human-capital-management/skills-cloud.html
Workday. (2023). "The Science Behind Workday Skills Cloud." https://blog.workday.com/en-us/2023/science-behind-workday-skills-cloud.html
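A simplified way to picture the adjacency mechanism: treat skills as nodes in a co-occurrence graph built from job postings and rank the unowned neighbors of an employee's known skills. The sketch below uses fabricated data and a naive weighting; Workday's actual skill graph is proprietary and almost certainly more elaborate.

```python
# Minimal sketch of "adjacent skill" suggestion via a co-occurrence graph.
# The postings data and scoring are hypothetical, not Workday's skill graph.
from collections import defaultdict
from itertools import combinations

postings = [
    {"python", "sql", "airflow"},
    {"python", "sql", "dbt"},
    {"sql", "dbt", "snowflake"},
]

# Build an undirected co-occurrence graph: edge weight = shared postings.
graph = defaultdict(lambda: defaultdict(int))
for posting in postings:
    for a, b in combinations(sorted(posting), 2):
        graph[a][b] += 1
        graph[b][a] += 1

def adjacent_skills(known: set[str], top_n: int = 3) -> list[str]:
    """Rank skills one hop away from the employee's demonstrated skills."""
    scores = defaultdict(int)
    for skill in known:
        for neighbor, weight in graph[skill].items():
            if neighbor not in known:
                scores[neighbor] += weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(adjacent_skills({"python", "sql"}))  # ['dbt', 'airflow', 'snowflake']
```

Even in this toy form, the ranking depends entirely on which postings feed the graph, so the "eligible next roles" an employee sees are a function of data they never chose and cannot inspect.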
Workday People Analytics
Delivers insights using augmented analytics, AI-driven pattern recognition, and contextual narrative tools to help executives assess workforce performance.
Dashboards monitor attrition risks, high-potential talent pools, DEI benchmarks, and organizational health metrics.
AI storytelling converts raw data trends into human-readable summaries, influencing upper management decisions on restructures or promotions.
Red flags include limited user visibility into how predictive scores are calculated and potential reliance on historical biases baked into legacy HR datasets (the toy model after the citations below illustrates how such bias propagates).
Citations:
Workday. "People Analytics." https://www.workday.com/en-us/products/human-capital-management/people-analytics.html
Workday Datasheet. "Workday People Analytics." https://www.workday.com/content/dam/web/en-us/documents/datasheets/workday-people-analytics.pdf
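The concern about baked-in bias can be made concrete with a toy model. The sketch below, which is illustrative and in no way People Analytics' actual method, trains a scorer on fabricated historical outcomes; if those outcomes reflect biased past decisions, the resulting "risk" scores reproduce the bias.

```python
# Toy attrition-risk scorer trained on historical HR outcomes, illustrating
# how past patterns (including biased ones) propagate into predictive scores.
# All data and features are fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [tenure_years, weekly_overtime_hours, took_parental_leave (0/1)]
X = [
    [1, 10, 0], [2, 8, 0], [8, 2, 0], [7, 1, 0],
    [3, 5, 1], [4, 6, 1], [9, 2, 1], [2, 9, 1],
]
# Historical labels: 1 = left the company. If past managers pushed out
# employees who took leave, the label column silently encodes that bias.
y = [1, 1, 0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)
employee = [[5, 4, 1]]  # mid-tenure employee who took parental leave
print(f"attrition risk: {model.predict_proba(employee)[0][1]:.2f}")
```

The model has no concept of fairness; it simply fits the labels, which is why audits of training data, not just model code, matter for the systems described here.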
Career Hub and Talent Optimization
Recommends internal roles based on inferred skills, AI-mapped career paths, and Workday’s Skills Graph correlations.
Employees may be algorithmically matched to roles or excluded from mobility tracks based on hidden thresholds.
Lacks a transparent appeals process for rejected recommendations or overlooked candidates.
The automated filtering process, while efficient, may exclude neurodivergent, disabled, or older employees whose profiles don't conform to standardized digital footprints (a hypothetical sketch after the citations below illustrates the mechanism).
Citations:
Workday. "Career Hub." https://www.workday.com/en-us/products/human-capital-management/career-hub.html
Workday. (2022). "Unlocking Talent Potential With Workday Career Hub." https://blog.workday.com/en-us/2022/unlocking-talent-potential-workday-career-hub.html
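The hidden-threshold problem can be pictured as follows. The similarity measure, cutoff value, and profiles below are assumptions for illustration, not Career Hub's actual logic; the point is that a fixed, invisible cutoff silently drops candidates whose digital footprints are sparse, with no notice or appeal.

```python
# Hypothetical role-matching filter with a hidden threshold. The cosine
# similarity and cutoff value are illustrative, not Career Hub's logic.
import math

MATCH_THRESHOLD = 0.60  # invisible to employees; no appeal path

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

role = {"leadership": 1.0, "sql": 0.8, "public_speaking": 0.7}

# An employee with a sparse digital footprint (few logged trainings or
# platform interactions) scores low even if actually qualified.
sparse_profile = {"sql": 0.9}
score = cosine(sparse_profile, role)
matched = score >= MATCH_THRESHOLD  # silently False; employee never told why
print(f"match score: {score:.2f}, surfaced to recruiter: {matched}")
```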
Impacts:
A. Performance Reviews Influenced by AI Signals
Insight Generation: Workday markets People Analytics as identifying, surfacing, and explaining key insights that might otherwise be missed, transforming people data into analytics that non-specialists can act on.
Potential Impact: Employees may be scored not just by their managers but also by AI-driven algorithms interpreting unstructured data, which could influence performance reviews and career progression.
B. Opaque Internal Mobility Decisions
Skill Matching: Workday describes Skills Cloud as uncovering connections between skills to deliver personalized, data-driven suggestions, helping organizations understand skills across the entire workforce and act on them, whether by upskilling, reskilling, redeploying, or hiring new talent.
Employee Awareness: Employees may not be fully aware of how their skills are being assessed or matched to opportunities, potentially leading to missed internal mobility prospects.
C. Disciplinary & Layoff Targeting
Risk Identification: Workday states that People Analytics identifies top risks and opportunities in an organization's workforce and delivers these insights in easy-to-digest story form, empowering HR and business leaders to make better people decisions.
Employee Impact: The use of AI to identify risks could influence decisions related to disciplinary actions or layoffs, potentially affecting employees without their knowledge.
IV. Health, Demographics, and Embedded LLMs
Workday's AI models may incorporate inferred health data, age proxies, and demographic indicators through third-party APIs or scraping methods, even when those attributes are never explicitly labeled. This raises significant red flags under the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act.
Embedded LLMs: Workday uses natural language models to parse resumes, performance reviews, and internal messages. These LLMs are integrated into the user interface and distributed across HR platforms.
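In outline, such parsing might look like the sketch below. The call_llm helper is a hypothetical stand-in returning canned output so the example runs; the prompt and output schema are assumptions, not Workday's implementation.

```python
# Sketch of LLM-based resume parsing into structured fields. call_llm is a
# hypothetical stand-in for a real model endpoint; the prompt and JSON
# schema are assumptions for illustration, not Workday's implementation.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call; returns canned output so the sketch runs."""
    return '{"skills": ["sql", "python"], "years_experience": 7}'

def parse_resume(resume_text: str) -> dict:
    # Free-text extraction can also surface proxies for age, disability, or
    # other protected traits unless the schema and prompt constrain it.
    prompt = (
        "Extract JSON with keys 'skills' (list of strings) and "
        "'years_experience' (number) from this resume:\n" + resume_text
    )
    return json.loads(call_llm(prompt))

print(parse_resume("Database analyst, 2017-present. SQL, Python, ETL."))
```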
Data Brokerage Integration: APIs such as Lightcast.io's offer real-time data synchronization into Workday's Skills Cloud and HiredScore AI layers.
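The synchronization path can be pictured as a periodic enrichment job. The endpoint URL, parameters, and response fields below are invented for illustration and are not drawn from Lightcast's or Workday's documented APIs.

```python
# Hypothetical enrichment job pulling broker data into an HR profile store.
# The endpoint URL, parameters, and response fields are invented for
# illustration and are not Lightcast's or Workday's actual API.
import requests

BROKER_URL = "https://api.example-broker.test/v1/person"  # hypothetical

def enrich_profile(profile: dict, api_key: str) -> dict:
    resp = requests.get(
        BROKER_URL,
        params={"email": profile["email"]},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    brokered = resp.json()
    # Brokered fields overwrite self-reported ones with no consent check
    # and no record of provenance visible to the data subject.
    profile.update(brokered.get("inferred_attributes", {}))
    return profile
```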
V. Legal and Ethical Concerns
Title VII (Civil Rights Act)
Under Griggs v. Duke Power Co., 401 U.S. 424 (1971), Title VII prohibits facially neutral employment practices that create a disparate impact on protected groups absent business necessity.
AI-derived skills and performance scores may disproportionately disadvantage protected groups due to biased training data.
ADA (Americans with Disabilities Act)
Predictive health or performance risk scoring violates ADA provisions if based on inferred or unrelated health/behavioral data.
FCRA Violations
FTC v. RealPage Inc., No. 3:20-cv-02281 (N.D. Tex. 2020): Algorithmic decisions affecting employment must allow for notice and dispute.
Workday's models may function as de facto consumer reporting systems, especially when used for adverse employment actions.
FTC & EEOC AI Guidance (2021–2023)
Emphasizes the need for explainability, fairness, and non-discrimination in workplace AI.
Systems with hidden scoring models are incompatible with these expectations.
FTC v. Kochava Inc. (2022)
The FTC's complaint alleges that the use of location, health, and behavioral data without consent constitutes an "unfair" data practice; the suit signals the direction of enforcement rather than settled precedent.
Similar data is often used internally through Workday-integrated tracking (email metadata, app usage, wellness logs).
Academic and Industry Sources
Harvard Business Review (2022): "Over 60% of employers now use AI tools for ongoing employee monitoring and evaluation.”
Brookings Institution (2021): Warns of “algorithmic chilling effects,” where employees adjust behavior due to invisible performance surveillance.
AI Now Institute (2020): Reports that inferred and adjacent skill scoring systems contribute to inequity and unaccountable management practices.
Case Law:
State v. Loomis, 881 N.W.2d 749 (Wis. 2016): the Wisconsin Supreme Court permitted a black-box risk algorithm in sentencing only with cautionary disclosures, underscoring due-process concerns about opaque AI.
EEOC v. iTutorGroup, Inc. (E.D.N.Y., settled 2023): application software automatically rejected older applicants, in what was widely described as the EEOC's first settlement involving AI-driven hiring discrimination.
VI. Recommendations
Auditability of LLMs: Legislative mandates requiring public audit logs, data lineage tracing, and algorithmic transparency (a minimal illustration follows this list).
AI Explainability: Rights for applicants and employees to receive meaningful explanations for AI-based decisions.
Consent Frameworks: Employers must gain explicit, informed consent before using external data to influence job outcomes.
FOIA Expansion: Federal contractors using Workday should be required to disclose algorithmic models under FOIA (template included in appendix).
Digital Civil Rights: Establish a federal Digital Bill of Rights ensuring freedom from algorithmic discrimination.
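As a concrete illustration of the auditability recommendation above, the sketch below shows one possible shape for an append-only decision log with data lineage. Field names and structure are assumptions, offered as a minimal starting point rather than a prescribed standard.

```python
# Minimal sketch of an append-only audit record for an AI-assisted
# employment decision, with data lineage. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(subject_id: str, decision: str, model_version: str,
                 inputs: dict, sources: list[str], path: str = "audit.log") -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "decision": decision,              # e.g. "rejected", "flagged"
        "model_version": model_version,    # supports reproducing the run
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "data_sources": sources,           # lineage: where each input came from
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["input_digest"]

log_decision("applicant-123", "rejected", "skills-model-2024.3",
             {"skills": ["sql"]}, ["resume_upload", "broker_feed"])
```

A log of this shape would give regulators and data subjects exactly what the FCRA-style notice-and-dispute framework discussed in Part V presupposes: a record of what was decided, by which model, from which data.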
VII. Conclusion
Workday and similar platforms have ushered in a new era of automated managerialism, in which AI systems silently but decisively shape human opportunity. The use of shadow data, embedded predictive models, and opaque AI decisioning transforms employees into quantified subjects of constant evaluation. Without regulatory reform and public accountability, these tools risk entrenching existing inequities and creating a black-box caste system in employment.
Citations:
Workday. (2024). Workday Skills Cloud. https://www.workday.com/en-us/products/human-capital-management/skills-cloud.html
EEOC. (2021). Agency Guidance on AI in Employment. https://www.eeoc.gov/ai
FTC. (2022). Business Guidance on AI Discrimination. https://www.ftc.gov/business-guidance/blog/2022/04/using-artificial-intelligence-employment-decisions-heres-what-ftc-wants-you-know
Lightcast.io. (2023). Data Capabilities and API Integration. https://lightcast.io
Harvard Business Review. (2021). "Who's Being Left Out by AI Hiring Tools?" https://hbr.org
Workday. (2024). Workday People Analytics (datasheet). https://www.workday.com/content/dam/web/en-us/documents/datasheets/workday-people-analytics.pdf
Workday. (2022). Unlocking Talent Potential With Workday Career Hub. https://blog.workday.com/en-us/2022/unlocking-talent-potential-workday-career-hub.html
Workday. (2023). The Science Behind Workday Skills Cloud. https://blog.workday.com/en-us/2023/science-behind-workday-skills-cloud.html