EEOC’s Second Set of Guidance on Using Software, Algorithms, and Artificial Intelligence in Employment Decisions

27 Jul 2023

Over the past decade, it has become increasingly commonplace for employers to use algorithmic decision-making tools. Employers use a wide range of such tools to assist with employment decision-making and performance management, including:

  1. Resume scanners
  2. Employee keystroke and other monitoring software
  3. “Virtual assistants” or “chatbots” to filter job applicants
  4. Software that evaluates candidates based on their facial expressions and speech patterns in video interviewing
  5. Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or quiz

In 2021, the US Equal Employment Opportunity Commission (“EEOC”) launched an agency-wide “Artificial Intelligence and Algorithmic Fairness Initiative” to ensure that the use of software – including AI, machine learning, and other emerging technologies – in hiring and other employment decisions complies with federal civil rights laws. The EEOC is the primary federal agency responsible for enforcing federal employment non-discrimination laws.

In May 2022, the EEOC issued guidance on compliance with the federal Americans with Disabilities Act when using software, algorithms, and artificial intelligence in employment decisions: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (“ADA AI Guidance”). The ADA AI Guidance made clear that employers can be liable under the Americans with Disabilities Act if their use of these tools results, for example, in the failure to properly provide or consider an employee’s reasonable accommodation request, or in the intentional or unintentional screening out of applicants with disabilities who could perform the position with a reasonable accommodation.

AI Disparate Impact Guidance

Last month, as part of its continuing focus on AI, the EEOC issued its second set of guidance regarding employer use of AI. The EEOC’s non-binding technical assistance document, titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (“AI Disparate Impact Guidance”), provides employers with guidance regarding the application of federal non-discrimination laws in connection with an employer’s use of automated systems, algorithms and artificial intelligence (“AI”) tools (referred to as “algorithmic decision-making tools”) in making employment decisions.

The EEOC’s AI Disparate Impact Guidance focuses on one aspect of the non-discrimination provisions of Title VII of the Civil Rights Act – the prohibition on “disparate” or “adverse” impact discrimination resulting from the use of algorithmic decision-making tools. Disparate or adverse impact refers to an employer’s use of a facially neutral employment selection procedure or test that has a disproportionately large negative effect on individuals based on characteristics protected under Title VII, such as race, color, religion, sex, or national origin. A selection procedure can also have a disproportionately large negative effect on individuals based on age, although age is not a characteristic protected by Title VII.

Highlights from the Guidance:

Selection Procedures. The EEOC’s AI Disparate Impact Guidance makes clear that the EEOC treats employer use of algorithmic decision-making tools as an employment “selection procedure” under Title VII. Accordingly, the use of algorithmic decision-making tools to “make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees” is subject to the EEOC’s long-standing Uniform Guidelines on Employee Selection Procedures under Title VII (“Title VII Guidelines”), which were adopted in 1978. Employers must therefore ensure that their use of algorithmic decision-making tools as selection procedures does not result in a disparate or adverse impact under Title VII – unless they can establish that the use of these tools is “job-related and consistent with business necessity” and that there is no equally effective, less discriminatory alternative.

The “Four-Fifths Rule.” Employers can use the “general rule of thumb” for assessing adverse impact – the “four-fifths rule” – when analyzing adverse impact with respect to algorithmic decision-making tools. Under the four-fifths rule, adverse impact is generally indicated where the selection rate for any group defined by a protected characteristic is less than 80% (four-fifths) of the rate of the group with the highest selection rate. For example, if a “job fit” tool selects 30 out of 60 male candidates and 15 out of 45 female candidates, the selection rate for male candidates is 50% (30/60) and the selection rate for female candidates is 33% (15/45). The ratio of the two selection rates is 66% (33/50). Because 66% is less than 80% (four-fifths), the “job fit” tool would generally be viewed as having an adverse impact. Notably, the AI Disparate Impact Guidance expressly reiterates the EEOC’s caution articulated in the Title VII Guidelines – that the four-fifths rule is “merely a rule of thumb” and that reliance on the rule is not dispositive. Indeed, the use of the four-fifths rule may be “inappropriate” in certain circumstances because, for example, the data may not be sufficient to provide a statistically significant assessment.
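
To make the arithmetic concrete, here is a minimal Python sketch of the four-fifths comparison using the example numbers above. The helper name and structure are illustrative only, not part of the EEOC’s guidance; note that the exact impact ratio is 66.7%, and the 66% figure above comes from rounding the female selection rate to 33% before dividing.

    # Minimal sketch of the four-fifths rule of thumb, using the
    # hypothetical "job fit" numbers from the example above.
    # The helper name and structure are illustrative, not from the EEOC.

    def four_fifths_ratio(selected_a, total_a, selected_b, total_b):
        """Return both selection rates and the impact ratio
        (lower rate divided by higher rate)."""
        rate_a = selected_a / total_a
        rate_b = selected_b / total_b
        ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
        return rate_a, rate_b, ratio

    rate_m, rate_f, ratio = four_fifths_ratio(30, 60, 15, 45)
    print(f"male selection rate: {rate_m:.1%}")         # 50.0%
    print(f"female selection rate: {rate_f:.1%}")       # 33.3%
    print(f"impact ratio: {ratio:.1%}")                 # 66.7%
    print(f"adverse impact indicated: {ratio < 0.80}")  # True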

No Third-Party Shield. While employers often rely on third parties, including software vendors, to develop, design, implement and/or administer algorithmic decision-making tools, the employers themselves may ultimately bear responsibility under Title VII for any adverse impact caused by the use of such tools. The EEOC therefore recommends that employers ask third parties what metrics they have used to assess whether their algorithmic decision-making tools result in adverse impact. In addition, the EEOC advises that a third party’s assurances or representations regarding compliance with Title VII will not necessarily shield employers from liability if use of a tool results in disparate impact. Rather, the EEOC “encourages employers to conduct self-analyses on an ongoing basis” to determine whether their use of algorithmic decision-making tools creates an adverse impact.
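
As a rough illustration of what such an ongoing self-analysis might look like, the sketch below generalizes the four-fifths check to any number of groups and flags those whose selection rate falls below 80% of the highest group’s rate. The group labels, counts, and helper name are hypothetical, and, as the guidance cautions, a real audit may also require statistical-significance testing rather than the rule of thumb alone.

    # Hypothetical sketch of a recurring self-audit across several groups.
    # Group labels and counts are invented for illustration.

    def audit_selection_rates(outcomes, threshold=0.80):
        """outcomes maps group -> (selected, total). Flags any group whose
        selection rate is below `threshold` times the highest group's rate."""
        rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
        top = max(rates.values())
        return {g: (rate, rate / top < threshold) for g, rate in rates.items()}

    snapshot = {"group_a": (30, 60), "group_b": (15, 45), "group_c": (20, 38)}
    for group, (rate, flagged) in audit_selection_rates(snapshot).items():
        print(f"{group}: rate={rate:.1%}, below four-fifths={flagged}")
    # group_a: rate=50.0%, below four-fifths=False
    # group_b: rate=33.3%, below four-fifths=True
    # group_c: rate=52.6%, below four-fifths=False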

Options to Address Disparate Impact. If an employer discovers that its algorithmic decision-making tool results in disparate impact, the EEOC suggests that the employer either discontinue use of the tool, select an alternative tool that does not have a disparate impact, or modify or redesign the tool using “comparably effective alternative algorithms” during the development stage. With respect to the final option, the EEOC warns that failure to adopt a less discriminatory algorithm identified during the development process could result in liability.

Looking Forward. The use of AI and other software in employment and other areas is becoming a focal point of regulatory scrutiny. On April 25, 2023, the EEOC issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems with officials from the Department of Justice, the Consumer Financial Protection Bureau and the Federal Trade Commission reiterating their “resolve to monitor the development and use of automated systems and promote responsible innovation” and pledging “to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.” In addition, a number of jurisdictions, including Illinois, Maryland, and New York City, have passed laws regarding employer use of AI in the workplace.

Employers who use or are considering the use of algorithmic decision-making tools in employment should be mindful and intentional about their design, implementation, and use – and should stay up to date on regulatory and other developments in this rapidly evolving area. To mitigate potential adverse impact, employers should actively engage with the third parties that design, develop, deploy, and/or administer their tools, and should regularly self-audit the use of these tools to determine whether the technology is being used in a way that could result in discrimination. Multi-national employers should also keep in mind that Title VII can apply to US citizens who primarily work outside the United States if they are employed by a US employer or by a foreign corporation controlled by a US employer.

Contact WorkSaver Systems to help ensure that your organization is complying with federal standards.

Reference:

Ruth Zadikany and Radha D.S. Kulkarni, Mayer Brown, Lexology, July 6, 2023.
