EEOC Starts New Year Targeting AI-Based Discrimination
The U.S. Equal Employment Opportunity Commission (“EEOC”) held a public hearing on Tuesday, January 31, 2023, examining the implications of Artificial Intelligence (“AI”) technology for equal employment opportunity. According to EEOC Chair Charlotte A. Burrows, “The goals of this hearing were to both educate a broader audience about the civil rights implications of the use of these technologies and to identify next steps that the Commission can take to prevent and eliminate unlawful bias in employers’ use of these automated technologies.” Before the January 31st hearing, the EEOC had most recently held a public hearing concerning AI technology in 2016. On Tuesday, twelve panelists from academia, law, and industry grappled with preventing what panelist Suresh Venkatasubramanian, Deputy Director, Data Science Initiative and Professor of Computer Science, Brown University, labeled “the inevitable harm that comes when AI is used without appropriate guardrails.”
Panelists testified to various ways in which AI tools can carry implicit or explicit biases, but largely focused their remarks on how the EEOC can play a role in preventing the discriminatory use of AI by publishing additional regulatory guidance. Panelists suggested that the EEOC require employers to abide by several best practices in their use of AI, including: providing specific notice that an AI tool or process is being used to anyone who will be assessed using it; regularly reviewing the source and quality of data considered by AI decision-making tools; ensuring that all decisions relating to the development, validation, scoring, and interpretation of AI-based assessments are documented for independent verification; and conducting such audits and verifications on an ongoing and regular basis. Many panelists suggested practices that align with the Institute for Workplace Equality’s recently published Technical Advisory Committee Report: EEO and DEI&A Considerations in the Use of Artificial Intelligence in Employment Decision Making. (Christine E. Webber, co-chair of Cohen Milstein’s Civil Rights & Employment practice, contributed a plaintiff-side perspective to this report.)
Several panelists also called on the EEOC to squarely reject the Four-Fifths Rule as a guideline for quantifying adverse impact under Title VII. Chair Burrows asked several prepared questions regarding the Four-Fifths Rule, potentially signaling that the EEOC will formally review, and perhaps rewrite, the Four-Fifths Rule as a measurement of adverse impact under Title VII in its future guidance.
The EEOC’s January 31st hearing follows its January 10th announcement of a draft Strategic Enforcement Plan for FY2023-2027, which includes increased enforcement efforts aimed at discrimination resulting from AI decision-making tools. In its draft plan, the EEOC proposes to further its AI and Algorithmic Fairness Initiative by prioritizing the elimination of barriers to recruitment and hiring posed by “the use of automatic systems, including artificial intelligence or machine learning, to target advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.” Last May, as part of this initiative, the EEOC published a technical assistance document titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” Given the EEOC’s stated goals for its January 31st hearing, it appears likely that the agency will publish similar technical assistance concerning Title VII.
Samantha Gerleman is a Fellow in the firm's Civil Rights & Employment practice.