EEOC Issues Nonbinding Guidance on Permissible Employer Use of Artificial Intelligence to Avoid Adverse Impact Liability Under Title VII
1 June 2023

On 18 May 2023, the US Equal Employment Opportunity Commission (EEOC) issued nonbinding guidance on how existing federal anti-discrimination law may apply to employers’ use of artificial intelligence (AI) when hiring, firing, or promoting employees (the EEOC AI Disparate Impact Guidance). The EEOC AI Disparate Impact Guidance aims to help employers using AI for such “selection procedures” avoid disparately or adversely impacting groups protected under Title VII and incurring any related liability.1
The EEOC AI Disparate Impact Guidance is notably limited in scope. It does not address Title VII’s or other federal employment laws’ separate prohibitions on intentional discrimination based on protected characteristics, including race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin, nor does it address validation of a selection procedure in the event adverse impact is found.
Nevertheless, as discussed further below, employers that currently use, or intend to use, AI as part of their employment selection procedures, especially AI created by third-party vendors, should consider the EEOC AI Disparate Impact Guidance before implementing such procedures to avoid potential disparate impact liability under Title VII.
Title VII Disparate Impact
Title VII prohibits “disparate” or “adverse” impact discrimination. This prohibition precludes employers from using facially neutral selection procedures or tests that have the effect of disproportionately excluding persons protected by Title VII if the tests or selection procedures are not “job related for the position in question and consistent with business necessity.”2 There is a three-pronged test for analyzing disparate impact:
1. Does an employer use a particular employment practice that has a disparate impact on a group protected by Title VII?
2. If there is a disparate impact, are the selection procedures or tests job-related and consistent with business necessity?
3. Even if the selection procedures or tests are job-related and consistent with business necessity, is there a less discriminatory alternative available?3
The EEOC AI Disparate Impact Guidance relates only to item (1).
The long-standing Uniform Guidelines on Employee Selection Procedures (the “Guidelines”) under Title VII, in place since 1978, provide EEOC guidance to employers on how to determine whether their selection procedures or tests are lawful under a Title VII disparate impact analysis.4
The EEOC AI Disparate Impact Guidance
The EEOC AI Disparate Impact Guidance begins with a refresher on key provisions in the Guidelines.
“Selection rate” is the proportion of applicants or candidates who are hired, promoted, or otherwise selected;5 and the selection rate for a particular group is calculated by dividing the number of persons hired, promoted, or otherwise selected from that group by the total number of persons in that group.6 By way of the EEOC’s example, if 80 white persons and 40 Black persons applied for a position and an employer’s use of an AI-powered employment selection tool selected 48 white applicants and 12 Black applicants, the selection rate for white persons and Black persons would be 60% (48/80) and 30% (12/40), respectively.
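To make the arithmetic concrete, here is a minimal sketch in Python (the applicant and selection counts come from the EEOC’s example above; the function name is illustrative and not part of the guidance):

```python
# Minimal sketch: computing selection rates per the Guidelines' definition.
# Counts come from the EEOC's example above; the function name is illustrative.
def selection_rate(selected: int, total: int) -> float:
    """Selection rate = persons selected from a group / total persons in that group."""
    return selected / total

white_rate = selection_rate(selected=48, total=80)
black_rate = selection_rate(selected=12, total=40)
print(f"White selection rate: {white_rate:.0%}")  # 60%
print(f"Black selection rate: {black_rate:.0%}")  # 30%
```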
The Guidelines also set forth a “four-fifths rule,” a general rule of thumb providing that the selection rate for one group is “substantially” different from that of another group if the ratio of the two groups’ selection rates is less than 80%.7 In the example above, the ratio of the selection rate for Black applicants to that for white applicants is 50% (30/60). Because the ratio is less than 80%, the four-fifths rule indicates that the selection rate for Black applicants is substantially different from that for white applicants and shows possible evidence of discrimination against Black applicants.
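Continuing the sketch (again, illustrative only), the four-fifths comparison divides the lower group’s selection rate by the higher group’s rate and flags a ratio below 80%:

```python
# Minimal sketch: the four-fifths rule of thumb from the Guidelines.
# A ratio of selection rates below 0.8 suggests a "substantially" different
# selection rate and possible evidence of adverse impact.
def four_fifths_check(group_rate: float, comparison_rate: float) -> bool:
    """Return True if the ratio of selection rates falls below four-fifths (80%)."""
    return group_rate / comparison_rate < 0.8

ratio = 0.30 / 0.60  # Black applicants' rate over white applicants' rate = 50%
if four_fifths_check(group_rate=0.30, comparison_rate=0.60):
    print(f"Ratio {ratio:.0%} is below 80%: possible evidence of adverse impact.")
```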
Turning to how AI-powered employment selection tools may intersect with Title VII’s prohibition on disparate impact discrimination, the EEOC issued the following recommendations for employers in the “Questions and Answers” section of the EEOC AI Disparate Impact Guidance:8
- AI-powered tools used for making or informing decisions about hiring, promoting, terminating, or taking similar actions toward applicants or current employees would be subject to the Guidelines as a “selection procedure.” The Guidelines define “selection procedure” as any “measure, combination of measures, or procedure” used as a basis for making an employment decision.9
- An employer can, and should, assess whether the use of an AI-powered employment selection tool has an adverse impact on a particular protected group. The use of the AI-powered employment selection tool will have an adverse impact on a particular protected group where its results cause a selection rate for individuals in that protected group that is substantially less than the selection rate for individuals in another group.10 If so, then using the AI-powered employment selection tool will violate Title VII unless the employer can show that its use is “job related and consistent with business necessity.”11
- An employer can be liable for the disparate impact caused by an AI-powered employment selection tool, even where the tool is designed or administered by a third party. If an employer administers the use of an AI-powered employment selection tool that causes a disparate impact, the employer may be liable under Title VII even if the tool was developed by an outside vendor. Additionally, an employer may be liable under Title VII where it relies on the results of an AI-powered employment selection tool administered by a third party. This is because an employer may be liable under Title VII for the actions of its agents if the employer gave them the authority to act on its behalf.12 As a precaution, an employer should, at a minimum, ask the third party, whether it is the developer or the administrator of the AI-powered employment selection tool, whether it has evaluated whether use of the tool causes a substantially lower selection rate for individuals in groups protected by Title VII. If the third party states that the tool should be expected to cause such a disparate impact, the employer should consider whether using the tool is job related and consistent with business necessity, and whether alternatives exist that may meet the employer’s needs with less of a disparate impact. An employer may be liable even where it relies on a third-party developer’s or administrator’s own incorrect assessment of whether the tool results in a disparate impact.
- “The four-fifths rule is merely a rule of thumb” and “may be inappropriate under certain circumstances.”13 For instance, smaller differences in selection rates may still indicate adverse impact where the procedure is used to make a large number of decisions14 or where there is evidence that an employer’s actions disproportionately discouraged individuals from applying on the basis of a characteristic protected under Title VII.15 The four-fifths rule also does not automatically supersede the results of a test of statistical significance (a sketch of one such test appears after this list).16 The EEOC is not bound to rely on the four-fifths rule when determining a charge alleging employment discrimination.17 An employer should ask a third-party developer or administrator of any AI-powered employment selection tool whether it relied on the four-fifths rule or a test of statistical significance in determining whether use of the tool will have a disparate impact on a group protected by Title VII.
- Where an employer discovers that an AI-powered employment selection tool would have an adverse impact on one or more groups of individuals protected by Title VII, it should take steps to reduce that impact or select a different tool. Employers should also conduct self-analyses on an ongoing basis to ensure their AI-powered employment selection tools are not causing a disparate impact.
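As referenced in the list above, a test of statistical significance can point in a different direction than the four-fifths rule. Below is a minimal sketch of one common such test, a two-proportion z-test, applied to the counts from the EEOC’s example; the guidance does not prescribe any particular test, and the function is illustrative:

```python
# Minimal sketch: a two-proportion z-test on the EEOC example's counts.
# This is one common significance test; the EEOC AI Disparate Impact Guidance
# does not mandate a particular test.
from math import sqrt, erf

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for H0: the two selection rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 48 of 80 white applicants selected vs. 12 of 40 Black applicants selected
z, p = two_proportion_z_test(x1=48, n1=80, x2=12, n2=40)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 3.10, p ≈ 0.002: a statistically significant gap
```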
Conclusion
The EEOC AI Disparate Impact Guidance comes during a potentially revolutionary moment for AI, given its myriad applications in resume scanners, monitoring software, chatbots, and video interviewing software, among other tools.
Employers and their counsel should read the EEOC AI Disparate Impact Guidance alongside the Biden administration’s “Blueprint for an AI Bill of Rights,” which addresses how AI and the software underlying it may reinforce existing biases in employment, housing, education, and other critical areas.
The EEOC AI Disparate Impact Guidance also follows the EEOC’s April 2023 joint statement with the US Department of Justice, Federal Trade Commission, and Consumer Financial Protection Bureau foretelling enforcement efforts to come: “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
Lastly, in addition to ongoing federal law developments related to AI, employers and their counsel should continue tracking state legislation, which may further complicate employer use of AI when making key employment decisions.