The Integration of AI into Employment Processes: Laws, Liability, and Look-Ahead

Nov 25, 2024 | Blog

Background

The rapidly increasing sophistication of artificial intelligence (AI) technology presents an almost endless number of applications in virtually every industry where data of some kind is used. AI is currently being used to automate and streamline tasks that were previously performed by humans, including many core employer functions. AI might be used in a variety of employment processes, such as screening resumes for certain characteristics, identifying potential applicants, analyzing customer feedback, determining pay raises, and deciding who should be let go.

But while the integration of AI into employment processes may be useful, there is growing concern over the risk of inadvertent discrimination on the basis of race, color, religion, sex (including gender, sexual orientation, and pregnancy), national origin, age, disability, or genetic information.

An illustrative example is Amazon’s now-defunct recruiting engine, developed in 2014 to review applicants’ resumes and single out top talent. The AI engine, however, was trained on resumes submitted to Amazon over the preceding ten years—the majority of which came from men. The algorithms therefore learned to prioritize resumes following the pattern of those typically submitted by men while learning to penalize resumes that mentioned the word “women.”

That discrimination of this kind is unintended does not shield employers from liability. Employers may be held liable for inadvertent discrimination caused by their use of an AI system in employment processes, and new and existing legislation across the country places an affirmative duty on employers to monitor any AI systems they use.

U.S. State Laws

Several U.S. states have already passed (or are considering passing) legislation that creates an affirmative obligation for employers. Some examples of current and proposed state legislation include:

Colorado: Employers are required to implement a risk management policy, complete an impact assessment, and notify prospective employees that an AI system is being used, in part or in whole, to make decisions.

Illinois: Employers are required to provide notice any time they are using an AI system in any employment process, including recruitment, hiring, promotion, renewal of employment, selection for training, discharge, discipline, or tenure.

Utah: Employers that use generative AI systems to interact with an individual must disclose to that individual that they are interacting with generative AI if the individual asks.

New York: Proposed state legislation would permit employers to use only AI systems that have undergone a disparate impact analysis within the past year and would require employers to post a summary of the analysis on their website.

New York City already requires employers to use only AI systems that are audited for bias annually, to publish a summary of the audit, and to provide notices to applicants and employees.

New Jersey: Proposed legislation would require employers that use AI systems for screening prospective employees to provide a summary of the AI system’s most recent bias audit and provide notice to prospective employees that AI was used in the decision-making process.

Federal Guidance

Both the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Labor (DOL) have issued guidance on navigating this issue.

In 2022, the EEOC published guidance on the use of AI systems in the hiring process as part of its Artificial Intelligence and Algorithmic Fairness Initiative. According to the EEOC, employers should assess their use of AI systems to ensure they do not produce a selection rate for individuals in one group that is “substantially” less than the selection rate for individuals in another group. The EEOC also reminded employers that they remain responsible for their selection procedures, including their use of AI systems.
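One long-standing benchmark for whether a selection rate is “substantially” less than another is the “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures: adverse impact may be indicated when one group’s selection rate falls below 80% of the highest group’s rate. The sketch below illustrates that arithmetic only; the group names and counts are hypothetical, and this check is an initial screen, not a legal conclusion.

```python
# Illustrative sketch of the "four-fifths rule" selection-rate comparison.
# All applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rate_a: float, rate_b: float) -> bool:
    """Return True if the lower selection rate is at least 80% of the higher."""
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher >= 0.8

# Hypothetical outcomes from an AI resume-screening tool
rate_group_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

passes = four_fifths_check(rate_group_a, rate_group_b)
ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)
print(f"Impact ratio: {ratio:.3f}")  # 0.30 / 0.48 = 0.625, below the 0.8 threshold
print("Within four-fifths threshold" if passes else "Potential adverse impact")
```

A ratio below 0.8, as in this hypothetical, would flag the tool for closer review, such as the impact assessments and bias audits required by the state and local laws discussed above.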

More recently, in 2024, the DOL published guidance on the use of AI systems in employment processes for federal contractors and emphasized that removing human oversight from these processes puts contractors at risk of violating discrimination laws. According to the DOL, federal contractors must take affirmative action to ensure that employees and applicants are treated without regard to their protected class.

Laws Around the Globe

Several countries and international organizations have passed or proposed legislation placing an affirmative duty on employers who use AI systems in labor and employment processes.

European Union: The EU adopted the EU AI Act earlier this year, which requires employers (called “deployers”) to do the following if they use an AI system in employment:

  • Apply the provider’s instructions for use of the AI system.
  • Guarantee human oversight.
  • Validate input data to ensure its suitability for intended use.
  • Monitor AI system activity.
  • Report any malfunctions, incidents, or risks to the AI system’s provider or distributor promptly.
  • Save logs if under their control.
  • Carry out a fundamental rights impact assessment.

Italy: Italy has passed the Transparency Decree, requiring employers to provide employees, job applicants, and any trade union representatives within the company with information about what aspects of the employment relationship may be affected by AI.

Brazil: The proposed AI Bill would require employers to perform risk assessments and be transparent about AI use.

Canada: The proposed Artificial Intelligence and Data Act would require employers to put in place appropriate risk mitigation strategies and ensure AI systems are continually monitored.

What Companies Can Do

In light of the rapidly developing changes related to AI, there are some steps that companies can take to preemptively limit liability:

  • Determine what (if any) AI system is being used right now in any screening, hiring, promotion, or termination processes.
  • Determine whether any AI system is being used in a state or country that requires an employer to notify potential or current employees or to create a risk assessment for the AI system.
  • Consider creating risk assessments for any AI system used in employment processes.
  • Consult trusted employment counsel to help navigate the existing laws and stay apprised of new legislation.

If you have any questions regarding the implementation of AI systems in employment processes and the resulting liability, please contact a member of Barton’s Labor & Employment Group.
