New California Regulations Regarding AI Use in Hiring and Employment

By Fred Alvarez and Hannah Withers 

If your company is using AI or other automated decision-making systems to make employment or hiring decisions, a new set of California regulations going into effect on October 1, 2025 will require your attention.

Issued by the California Civil Rights Council (CRC), these regulations define what types of AI systems are being regulated, what types of employment decisions are included, and what employers need to do to prevent discrimination claims resulting from the use of these AI tools. If you (or your agents) are using AI in any manner related to employment decisions, we suggest familiarizing yourself with the key aspects of the regulations, which are summarized below.

The purpose of the regulations is to respond to concerns about the potentially discriminatory impact of AI tools in employment and to make clear that California’s anti-discrimination laws still apply even when employers use AI tools to make decisions. The regulations make clear that an employer cannot escape liability for a discriminatory hiring or employment decision simply because the decision was made by an AI tool. Even when relying on AI tools, companies may still be liable for discriminatory hiring or employment decisions.

Key Sections of the Regulations to be Familiar With:

  • An “Automated-decision system” is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit… [it] may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
  • The covered employment and hiring practices are quite broad and include decisions related to recruitment, applicant screening, background checks, hiring, promotion, transfer, pay, benefit eligibility, leave eligibility, employee placement, medical and psychological examination, training program selection, or any condition or privilege of employment.
  • An “agent” includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an ‘employer’ for purposes of the Act.”
  • Under the rules, it is unlawful to use AI systems that result in employment discrimination based on protected characteristics (e.g., religion, race, gender, disability, national origin, age). The rules also specify that the use of technology that has an adverse impact is “unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for requesting an accommodation”.
  • There is a four-year record retention requirement for employment records created or received (including applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data) dealing with a covered employment practice or employment benefit.

Here is What We Recommend Considering and Preparing For:

  • Determine if you are using any AI systems that should be evaluated for possible discriminatory impact. Consider whether you use any of the following systems with employees or applicants:
    • Computer-based assessments or tests, such as questions, puzzles, games, or other challenges.
    • Automated systems to direct job advertisements or other recruiting materials to targeted groups.
    • Automated systems to screen resumes for particular terms or patterns.
    • Automated systems to analyze facial expression, word choice, and/or voice in online interviews.
    • Automated systems to analyze employee or applicant data acquired from third parties.
  • If you use an automated decision-making system, examine what data is being collected, how decisions are being made, and any discriminatory impact that might result. Consider how applicants or employees in each protected group might be affected by the way the automated systems are set up and implemented. (e.g., The use of online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless job-related and consistent with business necessity and the online application technology includes a mechanism for the applicant to request an accommodation.)
  • Investigate how the automated decision-making systems you are using check for bias with respect to protected classifications. Efforts to check for bias can be used as a defense to claims of discrimination. To check for bias, you should investigate whether the AI tool:
    • Monitors for unintended discriminatory effects during the ongoing use of the software.
    • Conducts live bias tracking functionality as decisions are being made.
    • Supports compliance with anti-discrimination requirements in states that require annual independent bias audits of AI tools in hiring.
    • Supports customer compliance with applicant notice requirements in states that have laws relating to the use of AI tools in hiring.
    • Maintains records of hiring decisions and AI processes conducted.
    • Has materials or a white paper regarding employment and privacy law compliance.
    • Has measures in place to prevent discriminatory outcomes in its algorithm design and outputs.
  • If you work with third parties or other agents for hiring or employment decisions, talk to them about what AI tools they are using and learn more about what information those tools gather and how they make decisions. You are responsible for how your agent is using AI tools and you should communicate with agents specifically about their compliance with these California regulations.
  • Review your accommodations policies to make sure that any automated systems being used are not operating in a way that would miss the need for an accommodation.
  • Review your record retention practices for employment and hiring decisions to make sure that you keep records of hiring or employment decisions made using AI systems for four years.


The Coblentz Employment team is available to answer any questions you may have about the impact of these regulations and how to prepare logistically ahead of their effective date on October 1, 2025. For additional information, you may also refer to the CRC’s press release.