Regulation of AI Hiring Tools Is a Work in Progress

Law360 Employment Authority

January 28, 2022

Expert Analysis

Last October, the U.S. Equal Employment Opportunity Commission launched its new initiative on artificial intelligence and algorithmic fairness. In doing so, EEOC Chair Charlotte Burrows affirmed that while

[a]rtificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment … the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.

While the EEOC has been considering issues raised by the use of AI and big data in employment since at least 2016, its current initiative is its first effort to update federal employment decision-making guidance to account for new AI technologies that continue to proliferate amid social distancing policies.

The initiative will culminate in the issuance of technical assistance and guidance on algorithmic fairness.

While the EEOC’s guidance is welcome, voluntary standards are not enough to address the serious challenges posed by the use of AI in employment decision making.

New AI decision-making technology, including fully automated recruiting, resume screening and even video interviewing, continues to gain popularity with employers, but applicants often do not even know that their resume or videoed interview is being reviewed by AI, rather than human resources.

This lack of transparency is of particular concern because AI decision-making algorithms carry known risks of discriminating on the basis of race or gender.

For example, in 2018, Amazon.com Inc. found that its AI hiring software downgraded resumes that included the word “women” and those of candidates from all-women’s colleges because the company had not hired enough female engineers and computer scientists for the AI to see women as viable candidates.

Similarly, a 2018 study found that Face++ and Microsoft AI, facial recognition software products that analyze candidates’ emotions for desirable traits, assigned Black men more negative emotions than their white counterparts.

As observed by the Brookings Institution, “[l]eft unchecked, these biases in automated systems result in the unjustified foreclosure of opportunities for candidates from historically disadvantaged groups.”

While facially neutral AI selection tools that adversely affect groups of applicants because of race or gender are good candidates for long-standing disparate impact analysis under existing Title VII standards, there is a significant missing link: employees and job applicants frequently do not know that they have been reviewed by AI at all, let alone how the algorithm works, which can leave them unable to frame their claims.

Currently, there are no federal laws or regulations that specifically require that employers inform employees or job applicants when they are being evaluated using AI. This means that employees and applicants typically lack the necessary awareness to challenge discrimination caused by these technologies.

States and cities nationwide have begun to fill this gap by considering and adopting legislation that requires employers to disclose any use of AI decision-making tools and, in some cases, sets specific standards employers must meet if they use AI decision-making technologies.

Most recently and comprehensively, on Dec. 9, 2021, D.C. Attorney General Karl Racine introduced legislation that would prohibit legal entities that gross at least $15 million annually from making algorithmic decisions on the basis of actual or perceived race, color, religion, national origin, sex, gender identity or gender expression.

This proposed bill would make it illegal to use any AI practice that has an adverse effect against an individual or class based on demographic traits, not only in employment, but also in decisions about housing, education and public accommodations, including credit, health care and insurance. This proposal also requires entities using AI to inform consumers about what personal information they collect from consumers and how that information affects their decision-making.

The proposal comes on the heels of a new law passed in New York City late last year that regulates employers’ use of AI decision-making technologies. The New York City law prohibits employers from using automated decision-making tools for employment screening unless the tool is subject to an annual “impartial evaluation by an independent auditor” that checks for adverse impact, and the results of the audit are published on the employer’s website.

Starting in January 2023, companies will be required to disclose to job applicants how AI technology was used in the hiring or promotion process, and must also allow candidates to request alternative evaluative approaches like having a human process their application. New York City will become the first city to impose fines for undisclosed or biased AI use by employers, charging up to $1,500 per violation.

Previously, but more narrowly, Illinois passed H.B. 2557, which requires employers to disclose when AI is used in a video interview and gives interviewees the option to have their data deleted after being interviewed. Maryland followed Illinois and passed H.B. 1202, which prohibits the use of facial recognition during preemployment interviews unless the employer obtains the applicant’s consent.

In California, S.B. 1241 was introduced but died in committee. Like the New York City law, it would have required annual audits of AI used in hiring.

However, S.B. 1241 also would have created a presumption that an employer’s hiring or promotion decision based on a test or other selection procedure is not discriminatory if the employer conducted a validity study showing the procedure was job related, and if annual reviews showed that use of the selection process increased hiring of a protected class compared to the pre-implementation workforce.

In other words, if an employer had a distorted workforce due to a history of discrimination, it could continue to discriminate using new algorithmic tools, so long as the new tool discriminated a little bit less than the employer had in prior years.

Also, the employer would be protected from a finding of discrimination under state law unless a plaintiff proved, under the heightened clear and convincing evidence standard, that the employer had reason to believe the selection device would cause a disparate impact before it began using the new process.

This would have set up terrible incentives for employers if state law were the only law they were subject to, but this attempted safe harbor would have offered no protection against Title VII claims that would have been facilitated by the law’s requirements to collect and maintain data. Hopefully, California will do a better job on its next attempt.

These examples from Illinois, Maryland, New York, D.C. and California are only a start. Other states are likely to consider legislation in this area as well.

Indeed, Iowa Attorney General Tom Miller, the incoming president of the National Association of Attorneys General, has said his presidential initiative will focus on technology consumer protection, including algorithms that may manipulate or harm consumers.

Employers and employees concerned about the import of these AI hiring tools should pay attention not only to the EEOC’s initiative, but to legislative initiatives in state houses nationwide. Given the variety of approaches states and cities have taken, federal action that could bring some uniformity may gain more support.