The Fair Credit Reporting Act (FCRA) is decades old, but a recent artificial intelligence (AI)-related complaint suggests that plaintiffs are testing whether legacy consumer-reporting rules can apply to AI-driven hiring assessments.

In January, a class action complaint was filed in California, Kistler v. Eightfold AI Inc., No. C26-00214 (Cal. Super. Ct. Jan. 20, 2026). Eightfold is an AI recruiting platform that provides employers with tools for a more streamlined hiring process. The class action complaint raises a familiar consumer protection issue in a contemporary HR context: when an AI tool scores job applicants in the background, what legal regime governs that activity? The plaintiffs, both job applicants, allege that Eightfold uses hidden AI during online job applications to collect sensitive and sometimes inaccurate information about applicants and generate a “likelihood of success” score that employers use to rank candidates. They further allege that applicants often do not even know Eightfold is involved and have no meaningful chance to review or dispute the AI-generated output before it influences whether they advance in the hiring process.

The pleading asserts that Eightfold’s outputs are “consumer reports” used for employment purposes, and that the company operates as a consumer reporting agency under the FCRA. If the court is persuaded by that reasoning, it may find that Eightfold is responsible for FCRA compliance, including clear disclosures and authorization; certifications from employer-clients; and practical mechanisms that allow applicants to access, dispute, and correct information before adverse action is taken against them.

The case offers several takeaways for organizations exploring AI for hiring purposes. First, be clear on what data your AI tool is using. The complaint alleges that Eightfold’s system does not rely only on what the applicant submits but also pulls in information from the employer and third-party online sources, even allegedly generating additional inferences about the applicant to build a profile. The more an AI model relies on external and inferred data, the more you should think about accuracy, transparency, and whether applicants can see and correct information about them.

In addition, there may be regulatory support for the plaintiffs’ position here. The complaint points to Consumer Financial Protection Bureau (CFPB) guidance indicating that FCRA concepts may extend to algorithmic scores used for hiring, particularly where a third party assembles or evaluates consumer information to generate scores for employers. Whether the court agrees to apply the FCRA in this context remains to be determined, but the use of AI may not displace existing consumer-reporting frameworks such as the FCRA. If you use an AI tool to materially influence high-stakes decisions such as hiring, traditional consumer protection regimes, including the FCRA, could potentially apply.

Roma Patel focuses her practice on a broad range of data privacy and cybersecurity matters. She handles comprehensive responses to cybersecurity incidents, including business email compromises, network intrusions, inadvertent disclosures and ransomware attacks. In response to privacy and cybersecurity incidents, Roma guides clients through initial response, forensic investigation, and regulatory obligations in a manner that balances legal risks and business or organizational needs. Read her full rc.com bio here.