The New York City Department of Consumer and Worker Protection will delay enforcement of Local Law 144 until April 15, 2023. The law requires companies operating in the City to audit automated employment decision tools for bias before use and to post the audit reports publicly. It also requires companies to notify job candidates (and employees residing in the city) that they will be evaluated by automated decision-making tools and to disclose the qualifications and characteristics the tools consider. The AI bias law still has an effective date of January 1, 2023, and violations are subject to a civil penalty.

The City is delaying enforcement due to a “substantial volume of thoughtful comments” from concerned parties. Most of these comments likely came from NYC-area businesses, many of which use AI tools in hiring. These tools generally rank resumes and filter out low-quality applicants.

Bias in AI is difficult to isolate. These technologies tend to be black boxes, and companies that use third-party AI services may not have access to the inner workings of a system. Even if a business develops an AI with the purest of intentions, bias can creep in: AI bias derives from programming, baselines, and inputs established by people, and people are inherently biased.

For example, suppose a company trains its AI hiring system by feeding it past resumes and hiring decisions to teach the AI what a “successful” resume looks like. The AI then categorizes and scores new applicants’ resumes based on how well they compare to the baselines set during training. If the company has historically been white-dominated and has hired fewer qualified candidates from Historically Black Colleges and Universities (HBCUs), the AI picks up on this trend as one of several factors that predict whether a candidate is “hirable” to the company. Even though the company’s leadership is dedicated to increasing diversity, the AI system filters out many qualified Black candidates.
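The dynamic described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration (the records, feature names, and naive per-feature scoring are all invented for this example, not drawn from any real hiring tool): a model that learns weights from historical hire rates will penalize a feature like HBCU attendance purely because past decision-makers did.

```python
from collections import Counter

# Hypothetical historical hiring records: (candidate features, hired?).
# "hbcu" marks candidates from Historically Black Colleges and Universities.
history = [
    ({"degree": True, "hbcu": False}, True),
    ({"degree": True, "hbcu": False}, True),
    ({"degree": True, "hbcu": True},  False),  # qualified, but historically passed over
    ({"degree": False, "hbcu": False}, False),
    ({"degree": True, "hbcu": True},  False),  # qualified, but historically passed over
    ({"degree": True, "hbcu": False}, True),
]

def feature_weights(records):
    """Naive learned weight per feature: P(hired | feature present)."""
    hits, totals = Counter(), Counter()
    for features, hired in records:
        for name, present in features.items():
            if present:
                totals[name] += 1
                if hired:
                    hits[name] += 1
    return {name: hits[name] / totals[name] for name in totals}

def score(candidate, weights):
    """Average the learned weights of the candidate's present features."""
    vals = [weights[n] for n, present in candidate.items() if present]
    return sum(vals) / len(vals)

weights = feature_weights(history)
# "hbcu" gets a weight of 0.0 solely because of past decisions, so two
# identically qualified candidates receive different scores:
a = score({"degree": True, "hbcu": False}, weights)  # 0.6
b = score({"degree": True, "hbcu": True}, weights)   # 0.3
```

No one programmed a rule against HBCU candidates; the disparity emerges entirely from the training data, which is why audits of the kind Local Law 144 requires focus on a tool's outputs rather than its stated intent.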

While New York City’s law is on ice for now, some states are beginning to address AI bias as well. For example, the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act, requires businesses to allow consumers to opt out of automated decision-making technologies, and the California Privacy Protection Agency is expected to propose additional regulations in this area.

Additionally, employees are beginning to challenge allegedly biased AI tools in court. HR technology giant Workday is currently facing a class-action suit alleging that its system is biased against Black and older applicants. (Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. Feb. 21, 2023)). The regulation of AI will almost certainly continue to develop as this technology becomes increasingly integrated into everyday life. For the time being, businesses can look to the U.S. Equal Employment Opportunity Commission’s guidance on AI hiring tools and the Americans with Disabilities Act.

Kathryn Rattigan

Kathryn Rattigan is a member of the Business Litigation Group and the Data Privacy+ Cybersecurity Team. She concentrates her practice on privacy and security compliance under both state and federal regulations and advising clients on website and mobile app privacy and security compliance. Kathryn helps clients review, revise and implement necessary policies and procedures under the Health Insurance Portability and Accountability Act (HIPAA). She also provides clients with the information needed to effectively and efficiently handle potential and confirmed data breaches while providing insight into federal regulations and requirements for notification and an assessment under state breach notification laws. Prior to joining the firm, Kathryn was an associate at Nixon Peabody. She earned her J.D., cum laude, from Roger Williams University School of Law and her B.A., magna cum laude, from Stonehill College. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.

Blair Robinson

Blair Robinson has experience in data privacy and security, cybersecurity, information security governance, information technology (IT), and General Data Protection Regulation (GDPR).  Read her full rc.com bio here.