On Thursday, October 28, 2021, the U.S. Equal Employment Opportunity Commission (EEOC) announced the launch of an initiative aimed at ensuring that artificial intelligence (AI) and other technology-driven tools used in hiring and other employment decisions comply with anti-discrimination laws. While acknowledging that AI and algorithmic tools have the potential to improve our lives and employment, EEOC Chair Charlotte A. Burrows noted, “the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”
At this point, according to the EEOC’s press release, the initiative is focused on “how technology is fundamentally changing the way employment decisions are made.” To that end, the initiative seeks to guide stakeholders, including employers and vendors, in ensuring that emerging decision-making tools are used fairly and consistently with anti-discrimination laws. As part of the initiative, the EEOC intends to:
- Establish an internal working group to coordinate the agency’s work on the initiative;
- Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications;
- Gather information about the adoption, design, and impact of hiring and other employment-related technologies;
- Identify promising practices; and
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.
In 2016, the EEOC held a public hearing on the equal employment opportunity implications of big data in the workplace, and the agency intends to build on that work. Focus areas of that hearing included potential discrimination, privacy concerns, and the possibility that applicants or employees with disabilities may be disadvantaged. At that hearing, Littler Shareholder Marko Mrkonich testified that a critical next step was for regulators, employers, and other stakeholders to learn about the opportunities AI offers and to consider how it can properly be deployed.
Of importance to global employers, earlier this year the European Union published a proposed regulation aimed at creating a regulatory framework for the use of AI. Under the risk hierarchy set out in the EU’s proposed regulation, AI systems used in recruitment and the management of workers would be classified as high-risk. Given that classification, employers or vendors using such systems would be required, among other obligations, to develop a risk management system, maintain technical documentation, adopt appropriate data governance measures, meet transparency requirements, maintain human oversight, and satisfy registration requirements.
It remains to be seen whether and how U.S. regulators may draw on the tenets set forth in the EU’s proposed regulation. Nevertheless, the EEOC’s initiative further underscores regulators’ interest in these systems and the care employers must take when deploying them to avoid running afoul of anti-discrimination laws.