Fight for Digital Privacy, Transparency, and Fairness: Regulation and Retribution.

Know Your Rights

AMERICAN CIVIL LIBERTIES UNION

Know Your Digital Rights: Digital Discrimination in Hiring

Equal access to job opportunities is a core component of economic justice. Increasingly, employers are using automated tools in their hiring processes, including advertising job opportunities, screening applications, assessing candidates, and conducting interviews. These tools can perpetuate existing bias in hiring and employment or enable new kinds of digital discrimination based on race, gender, disability, and other protected characteristics in ways that may be difficult to detect. However, there are various existing local, state, and federal laws that protect you from discrimination, including digital discrimination, and may give you some control over how your data is used by employers. Learn more about how automated tools are used in the hiring process and your digital rights under these laws.

https://www.aclu.org/know-your-rights/know-your-digital-rights-digital-discrimination-in-hiring

WE ARE NOT YOUR GUINEA PIGS

COMBAT DATA BROKERAGE AND SHADOW PROFILING

Companies Are Secretly Scoring & Rejecting Candidates Using AI
Tech giants like Workday use secretive AI scoring systems to rank and filter job seekers without explaining their decisions. These hidden hiring algorithms are built on data purchased from third-party data brokers, who use data mining and other privacy-infringing practices to compile unverified data.

These shadow profiles are disseminated through Workday's integrations with all major HCM platforms. Workday's Intelligent Skills Cloud is a scam: it is NOT based on skills, knowledge, and experience. Drawing on over 625 billion black-box data points, including biometric data, experimental natural language processing, unverified shadow profiles, and "inferred data," Workday has taken once-successful people and forced them into poverty with experimental "innovation" that claims the ability to predict the probability of future success.

Workday is not God, and we are here to remind them that the future they "predicted" was FORCED upon us, robbing us of everything.

Please click below to find a list of data brokerage firms.

While this list is not exhaustive, it contains known data brokers and their opt-out policies.

An email template is available.

Protecting Job Seekers

The Case for Strong Data Privacy Regulations in AI Hiring

Artificial intelligence is transforming the hiring process, but at what cost to privacy? AI-driven hiring systems collect, store, and analyze vast amounts of personal data, often without clear consent, oversight, or transparency. Without proper regulations, these systems put job seekers at risk of privacy violations, algorithmic discrimination, and data exploitation (Wachter et al., 2021).

🚨 The Problem: AI Hiring Poses Serious Data Privacy Risks

📌 Mass Data Collection Without Consent – AI hiring tools extract sensitive personal data from resumes, social media, online assessments, and even biometric sources like video interviews—often without job seekers' explicit consent (Acemoglu & Restrepo, 2020).

📌 Unregulated AI Decision-Making – Many AI hiring systems operate as black boxes, using opaque algorithms to analyze candidates' skills, personality traits, and even facial expressions (Binns, 2020). Job seekers have no visibility into, or control over, how their data is used.

📌 Risk of Discrimination & Bias – Without privacy safeguards, AI hiring tools can infer and use protected characteristics—such as age, gender, race, and disability status—to filter out candidates, leading to unfair hiring practices (Raghavan et al., 2020).

📌 Lack of Legal Recourse for Job Seekers – If AI-driven hiring tools misuse data, reject candidates unfairly, or discriminate, most job seekers have no way to challenge decisions or request corrections (Barocas et al., 2019).


⚖️ Why Strong AI Hiring Privacy Regulations Are Essential

🔹 Ensuring Transparency & Consent – Companies must be required to inform applicants about what personal data is being collected, how it is used, and whether AI is involved in decision-making (Wachter & Mittelstadt, 2021).

🔹 Preventing AI-Driven Discrimination – Privacy regulations should prohibit AI hiring systems from using protected characteristics (age, gender, race, disability) in decision-making (Dastin, 2018).

🔹 Giving Job Seekers Control Over Their Data – Candidates must have the right to access, challenge, and delete their personal data from AI hiring platforms, similar to the rights granted under the GDPR and CCPA.

🔹 Holding Companies Accountable – AI hiring platforms should be audited regularly to ensure compliance with privacy laws and fairness standards (Wachter et al., 2021).
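Regular audits can start with a simple, well-established check: the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, which flags a screening tool when any group's selection rate falls below 80% of the highest group's rate. Here is a minimal sketch in Python; all applicant counts are hypothetical, and a real bias audit would examine far more than this one metric:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def impact_ratio(rate_group: float, rate_reference: float) -> float:
    """A group's selection rate divided by the highest group's rate."""
    return rate_group / rate_reference

# Hypothetical outcomes from an automated resume screen.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},  # 30% pass rate
    "group_b": {"applicants": 300, "selected": 54},   # 18% pass rate
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, reference)
    flag = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} -> {flag}")
```

In this hypothetical, group_b's impact ratio is 0.60, well under the 0.8 threshold, so an auditor would flag the tool for closer review. The four-fifths rule is a screening heuristic, not a legal verdict; regulators and researchers pair it with statistical significance tests and qualitative review.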


🚀 Advocacy Goals: Pushing for Stronger Data Privacy Laws in AI Hiring

Mandate Transparency & Explainability – Companies must disclose how AI hiring systems collect and process data.
Require Opt-In Consent for AI Hiring – Job seekers should control whether their data is used in AI hiring systems.
Strengthen Anti-Discrimination Laws for AI Hiring – AI tools must not use race, gender, or disability status in decision-making.
Establish a Right to Appeal & Correct AI Hiring Decisions – Candidates must be able to challenge and correct AI-driven rejections.
Enforce Stronger Regulatory Oversight – Governments should audit AI hiring tools and penalize companies violating data privacy rights.


📢 Call to Action: Protect Job Seekers' Data Rights!

💡 AI in hiring should not come at the cost of privacy. We must demand fairness, transparency, and accountability in AI-driven hiring practices.

Take action today:
✔ Contact lawmakers & demand stronger AI hiring privacy laws
✔ Support organizations advocating for ethical AI hiring practices
✔ Sign petitions for AI hiring transparency & fairness
✔ Educate others on their data rights in AI hiring

Together, we can build a future where AI hiring respects privacy, prevents discrimination, and promotes fairness for all.


📚 References

  • Acemoglu, D., & Restrepo, P. (2020). "Artificial Intelligence, Automation, and Work." Econometrica, 88(6), 2085-2127.

  • Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org.

  • Binns, R. (2020). "On the Apparent Conflict Between Individual and Group Fairness." Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1-24.

  • Dastin, J. (2018). "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters.

  • Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). "Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*).

  • Wachter, S., Mittelstadt, B., & Russell, C. (2021). "Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI." Computer Law & Security Review, 41.

Here are some notable Responsible AI non-profits, advocacy groups, and organizations working on AI ethics, employment rights, and algorithmic accountability: