Using AI to Hire? Beware!

19 May

When it came to bias in hiring decisions, the conventional wisdom about artificial intelligence (AI) was that it would be a great, unbiased equalizer. On its face, that made sense: if we delegate complex decisions to AI, it becomes all about the math, cold calculations uncolored by the biases or prejudices we may hold as people. As we entered the infancy of the AI age, however, the fallacy in this thinking became apparent. As with any new technology, artificial intelligence too often reflects the biases of its creators. Consequently, there have been calls to create transparency standards and make AI less inscrutable. AI Now, a nonprofit advocating for algorithmic fairness, has proposed a simple principle: when it comes to services for people, if designers can’t explain an algorithm’s decision, you shouldn’t be able to use it.

In response to the growing number of human resource decision-makers using artificial intelligence (AI) and automated systems to help them select new employees, the U.S. Equal Employment Opportunity Commission (EEOC) released a technical assistance document, Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. The document focuses on preventing improper applications of AI that can result in discrimination against job seekers and workers, including ensuring that organizations provide accommodations to those who are unable or unwilling to use such systems. It explains how key, established aspects of Title VII of the Civil Rights Act (Title VII) apply to an employer’s use of automated systems, including those that incorporate AI. The EEOC is the primary federal agency responsible for enforcing Title VII, which prohibits discrimination based on race, color, national origin, religion, or sex (including pregnancy, sexual orientation, and gender identity).

The EEOC’s focus on the use of AI systems was highlighted by a joint statement released on April 25, 2023, by the EEOC, the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), and the Consumer Financial Protection Bureau (CFPB), affirming their commitment to “vigorously use their collective authorities to protect individuals” with respect to AI and automated systems. The joint statement pointed out that automated systems have the potential to negatively impact civil rights, fair competition, consumer protection, and equal opportunity.

One of the main concerns with AI and automated systems is the use of biased or unrepresentative data sets in model development and training. This can lead AI systems to produce discriminatory or unfair outcomes, even unintentionally. Enforcement actions in this area could target organizations that fail to train their AI systems on representative, unbiased data or that do not take adequate measures to detect and mitigate bias in their AI-driven processes and decision-making.
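One basic check along these lines is simple to sketch: compare each demographic group’s share of the training data against its share of a reference population (for example, the relevant labor pool) and flag large gaps. The helper below is a minimal, hypothetical illustration of that idea, not anything prescribed by the EEOC; the group labels, reference shares, and tolerance are all assumptions.

```python
from collections import Counter

def representation_gaps(train_groups, reference_shares, tolerance=0.05):
    """Flag demographic groups whose share of the training data deviates
    from a reference population's share by more than `tolerance`.

    train_groups: iterable of group labels, one per training record.
    reference_shares: {group: expected share of the population}.
    Returns {group: (training share - reference share)} for flagged groups.
    """
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        train_share = counts.get(group, 0) / total
        if abs(train_share - ref_share) > tolerance:
            gaps[group] = round(train_share - ref_share, 3)
    return gaps

# Hypothetical example: group B supplies 40% of the labor pool
# but only 20% of the training records.
train = ["A"] * 80 + ["B"] * 20
reference = {"A": 0.6, "B": 0.4}
print(representation_gaps(train, reference))  # {'A': 0.2, 'B': -0.2}
```

A gap flagged this way is only a starting point for investigation; representative inputs do not by themselves guarantee non-discriminatory outputs.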

While it may be difficult for regulators and consumers to see how an AI system works and whether it produces problematic results, it is far easier to determine whether individuals with disabilities can access opportunities governed by AI systems. For example, if an organization uses AI-driven video interviews or scored games to assist in hiring, do applicants with visual, auditory, or other disabilities have the same access to those job opportunities? Potential enforcement activity may result from failure to provide alternative access methods to AI-driven resources and services.

“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with civil rights laws and our national values of fairness, justice and equality,” said EEOC Chair Charlotte A. Burrows. “This new technical assistance document will aid employers and tech developers as they design and adopt new technologies.”

The EEOC’s new technical assistance document discusses adverse impact, a key civil rights concept, to help employers prevent their use of AI from leading to discrimination in the workplace. It builds on the EEOC’s previous technical assistance on AI and the Americans with Disabilities Act and on a joint agency pledge. It also answers questions employers and tech developers may have about how Title VII applies to the use of automated systems in employment decisions, and it assists employers in evaluating whether such systems may have an adverse or disparate impact on a basis prohibited by Title VII.
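The adverse-impact analysis at the heart of the guidance rests on simple arithmetic: compare each group’s selection rate (selected applicants divided by total applicants) to the rate of the most-selected group. Under the longstanding “four-fifths” rule of thumb, a ratio below 0.8 is treated as initial evidence of adverse impact, though it is a screen, not a legal conclusion. A minimal sketch of that calculation, using made-up numbers:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below `threshold` times the
    highest group's rate, mapped to their ratio (the four-fifths screen)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

# Hypothetical screening results: 48 of 80 applicants selected in one group
# (rate 0.60) versus 12 of 40 in another (rate 0.30); ratio 0.30/0.60 = 0.5.
outcomes = {"group_1": (48, 80), "group_2": (12, 40)}
print(four_fifths_check(outcomes))  # {'group_2': 0.5}
```

An AI-driven selection tool can be audited the same way as any other selection procedure: tally its pass/fail decisions by group and run this comparison on the results.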

“I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” said Burrows. “This technical assistance resource is another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.”

The EEOC’s technical assistance document is part of its Artificial Intelligence and Algorithmic Fairness Initiative, which works to ensure that software – including AI – used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.



WorkSaver Employee Testing Systems
478 Corporate Dr.
Houma, LA 70360