Apr 29, 2025
Florida driver suing Toyota, Progressive, Connected Analytic Services over alleged data sharing
'The problem with the premise is the consumer is unaware it is happening,' attorney John Yanchunis says
A Central Florida man filed a federal class action lawsuit last week over alleged data tracking in his new car.
Philip Siefke of Eagle Lake, Florida, filed the suit in Texas against Toyota, Progressive Insurance and Connected Analytic Services.
Parallels Between the Toyota–Progressive Lawsuit & Workday’s AI Hiring Practices
Case Summary (Toyota–Progressive Lawsuit):
A Florida driver filed a lawsuit against Toyota, Progressive Insurance, and Connected Analytic Services, alleging that his driving data was shared without his consent, triggering a rate hike based on data he never agreed to have collected or used. The case centers on:
Unconsented data collection via connected vehicles.
Data sharing with third parties (Progressive) for profit.
Lack of consumer notification or opportunity to opt out.
1️⃣ Parallels with Workday and Other AI Hiring Vendors
A. Unauthorized Data Collection & Use
Just as Toyota allegedly collected driver data without consent, AI hiring tools, especially those deployed via Workday or LinkedIn Talent Solutions, collect and analyze job seekers’ data, often without:
Informed consent, especially for biometric data (e.g., video interviews).
Disclosure that third-party data brokers may be involved.
A mechanism to opt out or challenge decisions.
Example: Workday’s use of AI may involve retrieving third-party information (social media, skills data, public job history) when an application is first received, creating a profile without the applicant’s verification, a key parallel to the driving-data case.
B. Data Brokerage & Profiling Without Transparency
Both cases involve data brokerage:
In the Toyota case, vehicle data was allegedly monetized through partnerships such as Progressive’s driver-scoring program.
In the AI hiring space, job seekers' data is often sold or scored by third-party brokers (e.g., resume databases, skills platforms) and used in “black box” AI hiring models like those connected to Workday’s Skills Cloud and candidate ranking engines.
This practice is arguably illegal under the GDPR, the California Consumer Privacy Act (CCPA), and potentially Section 5 of the FTC Act (unfair or deceptive practices), especially when candidates are:
Not told how their data is being used.
Unable to access or correct AI-generated profiles.
C. Liability: Corporate Responsibility & Informed Consent
Just as Toyota may be liable for failing to disclose how user data was shared, employers and AI vendors may face liability for:
Deploying AI hiring tools without disclosing risks or data sources.
Allowing AI to make decisions based on unverified, third-party data.
Failing to audit or control discriminatory outcomes produced by their systems.
If employers or platforms like Workday use profiles built by data brokers without validating accuracy or securing user consent, they may be liable for:
Discrimination under Title VII and the ADA.
Data misuse under CCPA, FCRA, or GDPR.
Violating EEOC guidance on algorithmic fairness.
2️⃣ Legal & Ethical Takeaway
Illegality arises from:
✔ Using personal data for consequential decisions without informed, opt-in consent.
✔ Sharing, scoring, or acting on external profile data without giving users control.
✔ Failing to inform candidates of AI systems’ use or to offer human alternatives.
Laws Implicated:
FTC Act (Section 5) – Unfair or deceptive practices.
FCRA – Use of external data without accuracy verification.
GDPR/CCPA – Use of personal data without clear notice, consent, or correction rights.
Civil Rights Act / ADA – Algorithmic bias resulting in protected-class discrimination.
The Toyota–Progressive case underscores a growing legal vulnerability across industries: the use of personal data in automated, consequential decision-making without consent or transparency. This directly mirrors practices in AI hiring, especially on platforms like Workday and LinkedIn and among their data partners.
II. Legal Theories of Liability
A. Negligence and Product Liability
Employers who use AI systems that incorporate black-box models built on data from third-party brokers may be liable under negligence and product liability theories, particularly when:
They fail to verify the accuracy or legality of sourced data.
They deploy tools that lack transparency or a means of human appeal.
They fail to audit AI outcomes for systemic bias or data misuse.
B. Civil Rights Violations
Use of AI that disproportionately impacts candidates based on race, gender, age, or disability may constitute unlawful discrimination under:
Title VII of the Civil Rights Act
The Age Discrimination in Employment Act (ADEA)
The Americans with Disabilities Act (ADA)
The reliance on unverified employment "fit" scores, behavioral inference models, and resume ranking algorithms—often shaped by proxy variables—can generate disparate outcomes without lawful justification.
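To make the disparate-outcome risk concrete, below is a minimal sketch, in Python, of the kind of check the EEOC’s four-fifths rule of thumb implies: compare each group’s selection rate against the highest group’s rate and flag ratios below 0.8. The group labels and outcome data are hypothetical; a real audit requires proper statistical testing and legal review.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical screening outcomes: (group label, passed AI screen?)
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65

rates = selection_rates(outcomes)   # A: 0.60, B: 0.35
print(four_fifths_flags(rates))     # {'B': 0.583...} -> potential adverse impact
```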
C. Data Privacy and Consent Violations
Employers may also be in breach of:
California Consumer Privacy Act (CCPA)
General Data Protection Regulation (GDPR)
Fair Credit Reporting Act (FCRA)
when they:
Fail to obtain informed, opt-in consent for third-party data use.
Do not disclose AI-driven decision-making practices.
Provide no access to, correction of, or dispute process for applicants.
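For illustration, here is a minimal sketch of the access and correction rights this list implies (GDPR Articles 15 and 16; the CCPA grants analogous rights). The identifiers and in-memory store are hypothetical, not a compliance implementation: the point is that an applicant can see what is stored, where it came from, and correct it.

```python
# Hypothetical in-memory store mapping candidate IDs to data and provenance
profiles = {"c-123": {"skills_score": 0.72}}
sources  = {"c-123": {"skills_score": "broker-X"}}

def access_request(candidate_id):
    """GDPR Art. 15-style access: return stored data plus its provenance."""
    return {"profile": profiles.get(candidate_id, {}),
            "sources": sources.get(candidate_id, {})}

def correction_request(candidate_id, field, value):
    """GDPR Art. 16-style rectification: apply an applicant's correction."""
    profiles.setdefault(candidate_id, {})[field] = value
    sources.setdefault(candidate_id, {})[field] = "applicant-corrected"

print(access_request("c-123"))
correction_request("c-123", "skills_score", 0.91)
print(access_request("c-123"))
```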
III. Urgent Need for Reform and Risk Mitigation
Employers must immediately:
Audit all AI tools used in hiring for fairness, legality, and data provenance.
Reject systems relying on third-party data not explicitly consented to by applicants.
Shift toward transparent, auditable AI practices with human oversight and appeal options.
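As a sketch of the consent gating the second item describes, the snippet below, using hypothetical field names, refuses to enrich a candidate profile with third-party broker data unless the applicant has affirmatively opted in, and records every decision for later audit. This is illustrative only, under assumed data structures, not a definitive design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Candidate:
    candidate_id: str
    opted_in_third_party: bool = False   # explicit, affirmative opt-in only
    profile: dict = field(default_factory=dict)

audit_log = []  # in practice: append-only, access-controlled storage

def enrich_profile(candidate, broker_data):
    """Merge third-party broker data only with documented opt-in consent."""
    allowed = candidate.opted_in_third_party
    audit_log.append({
        "candidate": candidate.candidate_id,
        "action": "third_party_enrichment",
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        return candidate.profile         # skip enrichment entirely
    candidate.profile.update(broker_data)
    return candidate.profile

c = Candidate("c-123")                   # never opted in
print(enrich_profile(c, {"inferred_skill_score": 0.72}))  # {} -> data not used
```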
Continued use of opaque AI hiring systems—especially those relying on 625 billion black-box data points, 55,000 skill categorizations, and unauthorized resume scoring—exposes employers to substantial litigation, reputational harm, and regulatory penalties.
IV. Conclusion
Courts and regulators are beginning to treat the unauthorized use of personal data in algorithmic decision-making as a serious legal and ethical breach. Employers must not assume that contracting with vendors insulates them from liability—they are responsible for the tools they use and the outcomes they generate.