The EEOC Issues New Guidance on Use of Artificial Intelligence in Hiring
On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance on the use of artificial intelligence (AI) in employment selection under Title VII of the Civil Rights Act of 1964 (Title VII), the federal law that protects employees and job applicants from employment discrimination based on race, color, religion, sex, and national origin. Specifically, this guidance (the Title VII guidance) focuses on employers' use of AI in selection processes, considerations for assessing disparate impact on the basis of race, color, religion, sex, and/or national origin, and the risks employers face when using third-party AI products or AI vendors to assist in hiring.
The Title VII guidance follows guidance that the EEOC issued last year on the use of AI through the lens of the Americans with Disabilities Act (ADA) (the ADA guidance). In its 2021 launch of the initiative to ensure algorithmic fairness, EEOC Chair Charlotte Burrows stated, “Bias in employment arising from the use of algorithms and AI falls squarely within the Commission’s priority to address systemic discrimination.”
Both the Title VII and ADA guidance consider actions that could run afoul of federal anti-discrimination law, as interpreted by the EEOC, when employers use AI, defined as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
Both the Title VII guidance and the ADA guidance provide the same examples of ways an employer may use AI in employment:
- Résumé scanners that prioritize applications using certain keywords
- Employee monitoring software that rates employees on the basis of their keystrokes or other factors
- Virtual assistants or chatbots that ask candidates about their qualifications and reject those who do not meet certain pre-defined requirements
- Video interviewing software that evaluates candidates based on facial expressions and speech patterns
- Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit”
The Title VII guidance focuses on when the use of AI may have a disparate impact or adverse impact on an employee or prospective employee on the basis of race, color, religion, sex, or national origin. The guidance notes that an employer can assess whether its use of AI has a disparate or adverse impact in the same manner it assesses any other selection process: by examining whether the process selects individuals in a specific protected category or categories at a substantially lower rate, using the traditional “four-fifths rule” as a helpful, but not definitive, tool. A selection process that has such an impact violates Title VII unless the employer can show that its use is “job related and consistent with business necessity.”
As the EEOC explains, the “four-fifths rule” compares the ratio of one group’s selection rate to another’s (a figure also referred to as the disparate impact ratio). The rule is general and illustrative, but it can assist employers in determining whether the selection rate for one group is “substantially” different from the selection rate for another: a selection rate is considered substantially different if the ratio is less than four-fifths (80%). In the context of AI, the EEOC provides an example of a personality test scored by AI that has a selection rate[1] of 30% for Black applicants and 60% for White applicants. Because 30/60 (50%) is less than 4/5 (80%), the EEOC concludes that the selection rate for Black applicants is substantially different from the selection rate for White applicants, which it might view as evidence of discrimination against Black applicants. The EEOC acknowledges that the four-fifths rule is a rule of thumb, not law, and may not be appropriate in all circumstances.[2]
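For readers who want to follow the arithmetic, below is a minimal sketch of the four-fifths comparison in Python, using the selection rates from the EEOC’s example; the function and variable names are illustrative, not drawn from the guidance.

```python
# Minimal sketch of the four-fifths rule of thumb, using the selection
# rates from the EEOC's example (30% for Black applicants, 60% for
# White applicants). Names are illustrative, not from the guidance.

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

black_rate = 0.30  # selection rate for Black applicants
white_rate = 0.60  # selection rate for White applicants

ratio = impact_ratio(black_rate, white_rate)  # 0.30 / 0.60 = 0.50

print(f"Impact ratio: {ratio:.0%}")  # 50%

# Under the rule of thumb, a ratio below four-fifths (80%) suggests the
# selection rates are "substantially" different.
if ratio < 0.80:
    print("Ratio falls below four-fifths: possible disparate impact.")
```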
Applying the same standards to a disparate impact created by AI as to any other selection process, the EEOC notes, as it did in the earlier ADA guidance, that an employer is likely responsible for any disparate impact from AI, even when the AI is created or administered by a third-party vendor. An employer deciding to rely on a third-party vendor’s AI “may want to ask the vendor, at a minimum, whether [the vendor took steps] to evaluate whether the use of the AI caused substantially lower selection rates for individuals with a [protected] characteristic.” Even if the vendor assures the employer that its tool does not result in a disparate impact on a protected category, the employer could still be liable if the vendor is incorrect. Relatedly, the EEOC notes that when an employer develops its own AI to make employment selections and selects a version that has a disparate impact on a protected class or classes when it could have selected a version with less disparate impact, the employer may also be liable.
Finally, and not surprisingly, the EEOC instructs that an employer should assess any AI it uses on an ongoing basis to ensure it is not disparately impacting a protected category of persons, and that it must discontinue using AI that has such an impact unless the employer can show the use of the AI is “job related and consistent with business necessity.”
As employers consider new ways to attract talent, they should do so with caution and be sure that those responsible for selecting and purchasing tools that include AI understand the potential risks associated with using them.
[1] “Selection rate” is the proportion of applicants or candidates who are hired, promoted, or otherwise selected. It is calculated by dividing the number of persons hired, promoted, or otherwise selected from a group by the total number of candidates in that group. This is the EEOC’s example in the Title VII guidance:
Suppose that 80 White individuals and 40 Black individuals take a personality test that is scored using an algorithm as part of a job application, and 48 of the White applicants and 12 of the Black applicants advance to the next round of the selection process. Based on these results, the selection rate for Whites is 48/80 (equivalent to 60%), and the selection rate for Blacks is 12/40 (equivalent to 30%).
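To make the footnote’s arithmetic concrete, here is a short sketch of the selection-rate calculation using the EEOC’s figures; the variable names are illustrative only.

```python
# Selection rate = number selected from a group / total candidates in
# that group, applying the EEOC's definition to its example figures.

white_selected, white_total = 48, 80
black_selected, black_total = 12, 40

white_rate = white_selected / white_total  # 48 / 80 = 0.60
black_rate = black_selected / black_total  # 12 / 40 = 0.30

print(f"Selection rate for White applicants: {white_rate:.0%}")  # 60%
print(f"Selection rate for Black applicants: {black_rate:.0%}")  # 30%
```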
[2] On January 31, 2023, the EEOC heard testimony in connection with this AI initiative, in which several academics testified about the prevalent misuse of the 45-year-old four-fifths rule and advocated that it be abandoned.