September 2023

AI in the Workplace

As employers increasingly adopt artificial intelligence (AI) tools to help make hiring decisions and perform other workplace functions, they must assess how to make the most of technological developments while remaining compliant with legal and regulatory requirements.

Modern workplaces are increasingly receptive to and reliant on tools powered by AI to perform certain human resources (HR) and employee management functions. AI-powered “smart” robots are also becoming common fixtures in various settings, such as grocery stores, fulfillment centers, and manufacturing facilities, often working alongside human employees. Regardless of who or what performs workplace functions, employers must comply with a panoply of federal, state, and local labor and employment laws.

While AI has existed in some form for decades, its scope and applications have expanded greatly in recent years due to increases in computing power and accessibility of large volumes of data. The use of AI in the workplace, in particular, has grown in the wake of the COVID-19 pandemic as remote work arrangements have become more common and employers compete for talent in a tight labor market. However, most employment laws were not designed with these technological developments in mind, presenting unique legal and regulatory challenges for employers.

This article examines the major labor and employment issues implicated by an employer’s use of AI in the workplace, including:

  • The potential for discrimination in screening, hiring, and other employment decisions.
  • Privacy issues raised by the use of AI recruiting tools, such as the need to comply with:
    • background check laws; and
    • salary and criminal history bans.

(For the complete version of this resource, which includes more on the use of AI-powered robots in the workplace, unanswered legal questions raised by AI use in the workplace, and recent AI-related legislative, regulatory, and litigation developments, see Artificial Intelligence (AI) in the Workplace on Practical Law; for a collection of resources to assist counsel with identifying potential issues raised by AI, see Artificial Intelligence Toolkit on Practical Law.)

AI Defined

AI is not a single technology. It exists in many technological forms and applications. Broadly, AI refers to technology in which a machine or software “learns” from the data it analyzes or the tasks it performs and adapts its “behavior” based on that learning to improve its performance over time. In other words, it is technology that mimics human intelligence to perform tasks ordinarily performed by humans.

The National Artificial Intelligence Initiative Act of 2020 (NAIIA) defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments (National Defense Authorization Act for Fiscal Year 2021, Div. E, § 5002(3)).

AI comprises two key elements:

  • Data.
  • Algorithms.

At its core, AI is computer software programmed to execute algorithms (step-by-step sets of instructions for performing specific tasks) over a data set to, among other things:

  • Recognize patterns.
  • Reach conclusions.
  • Make informed judgments.
  • Optimize practices.
  • Predict future behavior.
  • Automate repetitive functions.

AI intersects with, but should not be confused or used interchangeably with, big data. Although AI algorithms can make sense of big data and process it in a way far beyond any human capability, much of data analytics is just that — analytics — and does not actually incorporate any form of AI or machine learning.

AI exists in many different forms, with different functions and applications. Some examples of AI include:

  • Natural language processing (NLP), such as the technology used to enable plain English legal research on Thomson Reuters Westlaw Edge.
  • Logical AI/inferencing, which creates decision trees based on user input, such as online tax preparation programs.
  • Machine learning, which is AI that learns from its past performance, such as predictive text.
  • Artificial neural networks, such as those used in image recognition technology.
  • Machine perception and motion manipulation, which are used, for instance, in industrial robotics.
  • Generative AI technologies, such as ChatGPT and Google Bard.

These and other fundamental AI technologies can be used to perform various functions, including:

  • Expertise automation.
  • Image recognition and classification.
  • Speech-to-text and text-to-speech conversion.
  • Text analytics and generation.
  • Voice-controlled assistance (like Amazon Echo and Google Home).
  • Language translation.

Common Uses of AI in the Workplace

AI-powered tools, including generative AI technologies such as ChatGPT, may assist employers in performing many workplace functions, including:

  • Recruiting and hiring. HR professionals and recruiters may use AI to:
    • source and screen candidates, including using predictive hiring tools to identify a company’s performance drivers and improve the quality of hires;
    • scan resumes and prioritize applications using certain keywords;
    • schedule interviews;
    • ask or answer questions (through virtual assistants or chatbots) about preliminary job qualifications, salary ranges, and the hiring process, potentially rejecting candidates lacking certain defined requirements;
    • create compelling job descriptions to source more suitable candidates;
    • analyze candidate skills, aptitudes, and even cultural fit by rating candidates’ performance on computer games and other tasks and producing analytical results;
    • use depersonalized information to make salary determinations; and
    • conduct video and recorded interviews and analyze candidate responses and speech patterns.
  • Employee onboarding. Chatbots may answer new employee questions and direct them to the appropriate corporate resources.
  • Performance management and productivity. AI tools are available to:
    • determine the profiles of successful employees;
    • measure individual employee performance;
    • select candidates for promotion;
    • analyze sales calls to improve closing rates;
    • analyze productivity to identify cost-saving measures;
    • rate employee productivity by monitoring keystrokes or other factors;
    • use predictive analytics to identify potential equipment failures, product defects, and worker safety and compliance violations;
    • monitor driver behavior, using dashcams and telematics; and
    • identify pay equity issues.
  • Managing remote employees. Employers may use data analytics, AI, and other technologies to track remote employees, especially given the remote, hybrid, and “wandering” or work-from-anywhere arrangements that have become increasingly common following the COVID-19 pandemic. (For more on remote and hybrid working, see Remote Employees: Best Practices in the July 2023 issue of Practical Law The Journal and Hybrid Work Checklist on Practical Law.)
  • Career coaching. AI tools may suggest new positions, training, and available professional development resources based on an employee’s career interests.
  • Employee retention. AI tools can be used to predict which employees are likely to leave a job and coach managers about how to retain those employees.
  • Automation and safety. AI-powered “smart” robots have become more common in the workplace and often work alongside human employees to:
    • automate repetitive tasks;
    • increase efficiency; and
    • perform dangerous tasks.

Discrimination and Bias Risks in Screening, Interviewing, and Hiring

One of the most rapidly growing uses of AI is in the employee screening and recruiting process, which is likely to continue given the increase in remote hiring and hybrid work arrangements. AI promises to streamline these tasks by automatically sorting, ranking, and eliminating candidates with minimal human oversight, and potentially drawing from a broader and more diverse pool of candidates.

Anti-discrimination laws apply regardless of whether an employer’s recruiting and hiring tasks are performed by human employees or an AI-powered tool.

Employers must comply with a host of federal, state, and local anti-discrimination laws in all aspects of the employment relationship, including the preemployment screening and interview process. Anti-discrimination laws generally prohibit discrimination against individuals in the terms or conditions of employment based on their membership in one or more protected classes. Theories of liability include:

  • Disparate treatment. Disparate treatment is intentional discrimination against individuals because they are members of a specified protected class. It is the most blatant and obvious form of discrimination against applicants and employees and occurs when an employer treats one individual applicant or employee differently than a similarly situated individual because of that individual’s race, color, religion, sex, or membership in another protected class.
  • Disparate impact. Disparate impact discrimination is a more subtle form of unlawful conduct that occurs when a facially neutral policy or practice unduly disadvantages individuals based on their membership in a protected class. For example, an advertisement seeking applicants with no more than two years of relevant work experience is facially neutral but may have a disparate impact on older candidates.

(For more on theories of liability for discrimination claims, see Discrimination Under Title VII: Basics on Practical Law; for more on risks in the hiring process generally, see Recruiting and Interviewing: Minimizing Legal Risk and Social Media in the Hiring Process Checklist on Practical Law.)

Disparate Treatment Claims

Using AI tools in the recruitment process offers the promise of:

  • Reducing or eliminating the unconscious bias that sometimes informs human recruiting and hiring decisions.
  • Increasing diversity in the workforce.

For example, a hiring manager may unconsciously form a negative impression of a candidate whose name suggests a certain gender, race, ethnicity, or religion. An AI-powered algorithm can be taught to disregard the candidate’s name in the selection process and focus instead on the skills and experience needed to perform the job. AI tools also can search from a broader and potentially more diverse pool of applicants to identify potential candidates who may have gone unnoticed in a human-driven search.

However, using AI does not insulate an employer from discrimination claims, as the employer remains liable for its employment decisions, regardless of what tools it uses to make those decisions.

Risk of Data Bias

Some AI tools use the internet, social media, and public databases to inform or recommend hiring decisions, and these sources typically contain information about applicants and employees that an employer cannot legally ask about on an employment application or during an interview (such as an applicant’s age, religion, sexual orientation, or genetic information) (for more information, see Avoiding Discriminatory Questions in Interviews Checklist on Practical Law).

AI tools are only as good as the information provided to them. Depending on the available data set and the algorithms used, AI recruiting tools may duplicate and proliferate past discriminatory practices that favored one group of individuals over another.

For example, some AI tools search for the best candidates by examining the characteristics of an employer’s current employee pool, seeking those individuals with qualities similar to the most successful or highly compensated employees. If a company’s top sales performers have been predominantly white men and the AI tool only analyzes data (or a seed set) from the company’s past personnel records, then the machine may “learn” that whiteness or maleness is a desirable trait in and of itself. In other words, the AI tool may find a correlation between whiteness or maleness and success at the company even though these traits are not the reason the individuals were successful.
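
As a simplified illustration of how this kind of spurious “learning” can occur, the Python sketch below trains a basic screening model on an invented historical data set in which gender happens to correlate with past success labels. The data, feature names, and model choice are hypothetical and are included only to show how a tool can assign weight to a trait that is not the reason anyone performed well.

    # Toy illustration (hypothetical data) of how a screening model can
    # "learn" a protected trait as a proxy for success when trained only
    # on an employer's historical records. Assumes numpy and scikit-learn
    # are available; all values are invented for demonstration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # A job-related skill score (the trait that actually drives success).
    skill = rng.normal(size=n)
    # A gender indicator (1 = male) with no causal effect on performance,
    # but overrepresented among employees labeled successful in the past.
    male = rng.integers(0, 2, size=n)

    # Historical "success" labels reflect skill, but past selection bias
    # means men were labeled successful more often at the same skill level.
    success = (skill + 1.0 * male + rng.normal(scale=0.5, size=n)) > 1.0

    X = np.column_stack([skill, male])
    model = LogisticRegression().fit(X, success)

    # The model assigns substantial weight to the gender indicator even
    # though gender is not the reason any individual performed well.
    print("weight on skill: ", round(model.coef_[0][0], 2))
    print("weight on gender:", round(model.coef_[0][1], 2))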

In fact, this was the underlying issue that drove Amazon’s much-publicized decision to abandon an AI hiring tool after beta testing it. The tool relied on data compiled from ten years of Amazon’s past hiring practices to identify the traits of successful software developers and other technical employees. Because the workforce had been male-dominated during that time, the tool screened out or assigned lower rankings to candidates with traits not found in that data pool, such as attendance at a women’s college, participation on a women’s sports team, or membership in a sorority. The data essentially had past bias and discrimination “baked into” the results. This example highlights the importance of using human professionals to continually monitor AI results and modify the algorithms and data sets if they discover discriminatory results.

(For more on bias in AI, see Bias and AI: The Case for Inclusive Tech on Practical Law.)

The Unknown “Black Box” of AI

Another potential problem with using AI tools in the hiring process arises from the lack of transparency in the algorithmic process. It is often impossible to determine how or why an AI tool reached a decision or made a prediction. Algorithms that function in this so-called “black box” present challenges for both employees and employers working within a legal structure that relies on proving intent or causation in AI-related discrimination claims.

On the one hand, AI-driven decisions may make it less likely that plaintiffs can prove intentional discrimination using the direct method of proof because computer programs inherently lack intent. This problem is amplified by an inability to prove (or even know) why the AI tool did what it did. On the other hand, most discrimination claims lack direct evidence and instead rely on circumstantial evidence that is analyzed under the McDonnell Douglas burden-shifting analysis. Under that framework, once an employee demonstrates a prima facie case of discrimination, the employer must articulate a “legitimate, nondiscriminatory reason” for the challenged employment action. (McDonnell Douglas Corp. v. Green, 411 U.S. 792, 802 (1973).)

Employers relying on an AI-dictated or AI-informed decision may be hard-pressed to meet this burden. The “black box” of AI may make it more difficult for employers to articulate a legitimate, nondiscriminatory reason for a decision because they generally do not know (and often cannot know) how or why the AI tool did what it did. It is unclear whether courts will find that an employer’s decision to use AI, however laudable its goals in doing so, constitutes a legitimate, nondiscriminatory reason for an employment action if the employer cannot explain the underlying decision path. (For more on the McDonnell Douglas standard, see McDonnell Douglas Standard Language for Summary Judgment Brief on Practical Law.)

While employers may seek indemnification from their AI vendors during the contract negotiation stage, vendors are likely to resist these efforts. However, employers that are sued because of AI-informed hiring decisions are likely to attempt to bring the AI vendor into the litigation, with or without an indemnification clause.

Disparate Impact Claims

AI recruiting and hiring tools may increase the risk of disparate impact claims. In disparate impact cases, the plaintiff must identify a facially neutral policy or practice that has a disproportionately harmful effect on a protected class.

If the plaintiff meets this initial burden, the burden of persuasion then shifts to the employer to show both that:

  • The policy or practice is “job related for the position in question and consistent with business necessity.”
  • No other alternative employment requirement suffices. (42 U.S.C. § 2000e-2(k)(1)(A); see also Williams v. Wells Fargo Bank, N.A., 901 F.3d 1036, 1040 (8th Cir. 2018); EEOC v. Stan Koch & Sons Trucking, Inc., 557 F. Supp. 3d 884, 894-98 (D. Minn. 2021).)

If the employer meets this burden, the plaintiff may still prevail by showing that an alternative employment practice has less disparate impact and also serves the employer’s legitimate business interest (42 U.S.C. § 2000e-2(k)(1)(A)). The McDonnell Douglas test is not used to assess disparate impact claims (see Griggs v. Duke Power Co., 401 U.S. 424, 431-32, 436 (1971)). Courts adjudicating disparate impact cases instead analyze the legality of the contested practice (for example, the test or policy).

The plaintiff generally shows disparate impact by a statistical comparison, which the defendant employer can challenge. Under the Equal Employment Opportunity Commission’s (EEOC’s) Uniform Guidelines on Employee Selection Procedures, the EEOC often uses a “four-fifths rule of thumb.” This means that a selection rate for any protected class that is less than four-fifths (80%) of the selection rate for the most successful group generally constitutes evidence of discrimination. (29 C.F.R. § 1607.4(D).) The plaintiff need not prove that the employer intended to discriminate against any individual or group. The employer’s motives are irrelevant, even if the employer’s reason for using the AI tool was to eliminate bias or increase diversity.
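
To make the arithmetic behind the four-fifths rule concrete, the short Python sketch below compares hypothetical selection rates for two applicant groups and flags any group whose rate falls below 80% of the highest group’s rate. The group labels and counts are invented solely to demonstrate the calculation.

    # Illustrative sketch of the EEOC "four-fifths" (80%) rule of thumb
    # (29 C.F.R. § 1607.4(D)). The applicant and hire counts below are
    # hypothetical and chosen only to show how the threshold is applied.
    applicants = {"group_a": 200, "group_b": 100}  # applicants per group
    hires = {"group_a": 60, "group_b": 15}         # hires per group

    # Selection rate = hires / applicants for each group.
    rates = {group: hires[group] / applicants[group] for group in applicants}

    # Compare each group's selection rate to the highest rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        status = "below" if ratio < 0.8 else "within"
        print(f"{group}: selection rate {rate:.0%}, "
              f"impact ratio {ratio:.2f} ({status} the four-fifths threshold)")

    # Here group_a is selected at 30% and group_b at 15%, an impact ratio
    # of 0.50, well below 0.80, which would generally be treated as
    # evidence of adverse impact under the guidelines.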

(For a discussion of recent EEOC guidance on assessing disparate impact in employment selection procedures involving AI, see EEOC Guidance on AI in this issue of Practical Law The Journal.)

How Disparate Impact Claims May Arise

The risk of disparate impact claims is magnified when using AI tools because of:

  • The large quantity of data that is analyzed.
  • The possibility that the algorithm may identify a statistical correlation that does not causally explain why certain individuals are successful (see Risk of Data Bias above).

For example:

  • An algorithm may identify individuals from a particular zip code as promising candidates for certain jobs. That zip code in turn may reflect a homogeneous population that excludes individuals of a certain race or religion. The problem for the employer is that a person’s zip code is not “job related for the position” or “consistent with business necessity” (42 U.S.C. § 2000e-2(k)(1)(A)). While the tool may find a statistical correlation between these data points and high-performing individuals, it is less clear that their zip code is the characteristic that actually causes those individuals to be successful. Some lawmakers recognize this issue and are attempting to prevent this practice through legislation.
  • An algorithm may be trained to screen out candidates who have one- or two-year gaps in their employment histories. However, this may disparately impact women who have taken time off for maternity leave or family care responsibilities (for more information, see Pregnancy Discrimination and Family Responsibilities Discrimination on Practical Law).
  • An AI tool that evaluates applicants’ performance on a computer game may disproportionately eliminate candidates in a given protected class or discriminate against individuals with disabilities (see Disability Discrimination and Accommodation and Related EEOC Guidance below). An employer may be unable to justify its use of the gaming tool as a business necessity, especially if it does not (or cannot) understand the AI reasoning process used to score, rank, or eliminate candidates.

Employers must be vigilant in analyzing the results of AI tools and must continue to monitor those results over time, as the tools learn and adapt based on prior results.

Class Action Implications

Disparate impact claims are more commonly brought as class actions than as individual claims. Plaintiffs may not rely only on “bottom line” numbers showing, for example, that a company employs fewer women than men. Instead, they must demonstrate that the statistical disparity is caused by the employment policy or practice being challenged. (Watson v. Fort Worth Bank & Trust, 487 U.S. 977, 995 (1988) (stating that disparities must be “sufficiently substantial that they raise such an inference of causation”).) Where an employer relies on an AI tool to screen or select applicants, an algorithmic bias (such as the one experienced by Amazon when beta testing its recruitment tool) may provide the necessary link for the plaintiffs to state a disparate impact claim.

Employees and applicants may have an easier time alleging class-wide discrimination claims if the employer uses the same AI tool or algorithm to assess an entire pool of candidates. This strategy allows plaintiffs to establish a common hiring practice that affects an entire class of plaintiffs, which is one of the hurdles to class certification (for more information, see Class Actions: Certification on Practical Law). However, AI tools may make it more difficult for plaintiffs to meet their initial burden of showing a statistical disparity between the successful applicants and the broader applicant pool because of the difficulty in identifying the relevant applicant pool, which is a key element of the plaintiffs’ statistical analysis (see, for example, Meditz v. City of Newark, 658 F.3d 364, 373-74 (3d Cir. 2011)). Given the available data and vast pool of potential applicants, which may not be limited by geographic proximity to the employer’s physical location, it is unclear how courts will view the plaintiffs’ obligation to present statistical evidence of disparate impact in this context.

Disability Discrimination and Accommodation and Related EEOC Guidance

The Americans with Disabilities Act (ADA) prohibits employers from discriminating against qualified individuals with a disability in all employment-related decisions, such as hiring, promotion, or job termination. The ADA also requires employers to provide a reasonable accommodation to qualified individuals when needed to perform or apply for a job. (42 U.S.C. §§ 12101 to 12213.)

Certain AI-powered screening and recruitment tools may have a discriminatory impact on individuals with a disability. For example, AI tools that analyze an individual’s speech pattern in a recorded interview may negatively “rate” individuals with a speech impediment or hearing impairment who are otherwise well-qualified to perform the essential job functions. Perhaps recognizing this issue, Illinois enacted the Artificial Intelligence Video Interview Act, which requires employers to provide notice to applicants and get their consent before conducting a video interview to be analyzed by AI tools (820 ILCS 42/1 to 42/15 (Public Act 101-0260)).

Similar problems may arise with tools that analyze a candidate’s facial expressions. For example, certain facial patterns may correlate to individuals with genetic diseases or other disabilities but are unrelated to the individual’s ability to perform the essential job functions. This issue is among several raised in a complaint filed against HireVue, a company offering video interviewing and hiring assessment services, by the Electronic Privacy Information Center (EPIC) with the Federal Trade Commission (FTC) (see In re HireVue, Inc., Complaint and Request for Investigation, Injunction, and Other Relief). Following the complaint, HireVue announced that it was removing visual analysis from its assessment models.

Employers should monitor the processes and results of AI tools to ensure that they are not screening out potential candidates who need an accommodation to perform analytical or other tasks in the hiring process or the essential job functions once hired. For example, an algorithm that correlates gym membership with successful candidates may screen out disabled individuals who cannot work out at a gym but can otherwise perform the essential job functions, either with or without a reasonable accommodation. Similarly, a chatbot that screens out individuals with gaps in their employment history may impermissibly exclude those who stopped working because of a disability (for example, to seek medical or psychological treatment).

When using online recruiting tools for interviews, initial screening, or testing, employers also must ensure that the website or platform is accessible to individuals who are hearing- or sight-impaired (for more information, see Title III of the Americans with Disabilities Act (ADA): Website Compliance on Practical Law).

In May 2022, the EEOC issued guidance on AI decision-making tools and algorithmic disability bias. The guidance identifies three primary ways that employers using these tools may violate the ADA. These include an employer’s:

  • Failure to provide a reasonable accommodation needed for the algorithm to rate the individual accurately.
  • Use of a tool that “screens out” a disabled individual (whether intentionally or unintentionally) when the individual is otherwise qualified to do the job, with or without a reasonable accommodation. This may occur if the disability prevents the individual from meeting minimum selection criteria or performing well on an online assessment.
  • Use of a tool that makes impermissible disability-related inquiries and medical examinations. (See EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022) (EEOC ADA Guidance) (Question 2).)

To reduce the likelihood of disability bias when using AI tools, the EEOC ADA Guidance identifies “promising practices” and suggests, among other things, that employers:

  • Train staff to recognize and promptly process reasonable accommodation requests. Accommodations may include:
    • allowing an applicant to retake an assessment test in another format; or
    • reassessing an applicant’s poor test results.
  • Train staff to use alternative means of rating job applicants and employees when the current evaluation process is inaccessible or otherwise unfairly disadvantages someone who has requested a reasonable accommodation because of a disability.
  • Ensure third-party test administrators either:
    • promptly forward all accommodation requests to the employer; or
    • contractually agree to provide reasonable accommodations on the employer’s behalf.
  • Use tools designed to be accessible to individuals with as many different disabilities as possible and engage in user testing.
  • Inform job applicants and employees that reasonable accommodations are available for individuals with disabilities, and clearly communicate in an accessible format the process for requesting an accommodation.
  • Clearly explain in an accessible format:
    • what traits the algorithm is designed to assess;
    • how the algorithm assesses those traits; and
    • what variables or factors may affect the rating.
  • Ensure that the tools measure abilities or qualifications for the essential functions of the position directly and not by mere correlation (for more information, see Importance of Job Descriptions on Practical Law).
  • Confirm with the vendor that the tool does not impermissibly seek or elicit information about an individual’s disability or health, except as allowed in connection with reasonable accommodation requests. (See EEOC, Enforcement Guidance on Disability-Related Inquiries and Medical Examinations of Employees Under the ADA (July 26, 2010), EEOC ADA Guidance (Question 14), and Tips for Workers: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence.)

(For more on disability discrimination and accommodation, see Disability Discrimination Under the ADA and Disability Accommodation Under the ADA on Practical Law.)

Background Checks, Salary History Bans, and Other Privacy Protections

When making hiring and other employment-related decisions, employers using AI tools must comply with applicable laws that impose obligations and restrictions regarding what information they may seek and consider in reaching their decisions.

Background Check Disclosures

Employers that use AI screening and recruiting tools with access to criminal records or other information retrieved in a typical background check must be mindful of their compliance obligations under the Fair Credit Reporting Act (FCRA) and applicable state background check laws.

The FCRA requires employers that use a consumer reporting agency (CRA) to obtain consumer reports for employment purposes to:

  • Provide a stand-alone written disclosure to the job applicant or employee indicating that the employer intends to use information in the consumer report for decisions related to employment.
  • Get written permission from the job applicant or employee before obtaining any consumer report. (15 U.S.C. § 1681b(b)(2)(A).)

Social media background check companies are considered CRAs under the FCRA because they assemble or evaluate consumer report information that is furnished to employers, and the employers use that information as a factor in determining eligibility for employment (Letter to FTC, 2011 WL 2110608 (May 9, 2011)). AI tools with access to the same information arguably may qualify as a CRA and therefore trigger disclosure and reporting obligations under the FCRA (15 U.S.C. § 1681b(b)(2)(A)). Some state laws also impose notification requirements on employers that conduct background checks internally, even if they do not use a CRA.

Employers that take an adverse employment action based on information in a background report must satisfy other notification requirements (for more information, see Background Checks and References on Practical Law). However, the reasons an AI tool rejects a candidate may be unknown (or unknowable) to an employer. This creates a tension between the current regulatory scheme and the use of AI in making employment decisions.

To address this concern, the FTC, which is the agency responsible for enforcing the FCRA, has solicited public comment about various issues relating to algorithms, AI, and predictive intelligence (see FTC, FTC Hearing #7: Competition and Consumer Protection in the 21st Century (Nov. 13–14, 2018)). In 2020, the FTC published tips for companies using AI and algorithms. Specifically, the FTC recommends that companies manage consumer protection risks by:

  • Being transparent.
  • Explaining their decisions to the consumer.
  • Ensuring that their decisions are fair.
  • Ensuring that their data models are robust and empirically sound.
  • Remaining accountable for compliance, ethics, fairness, and nondiscrimination. (FTC Blog, Using Artificial Intelligence and Algorithms (Apr. 8, 2020).)

Although the guidance is directed at protecting consumers, many of the recommendations are equally applicable in the employment context.

In 2021, the FTC provided further guidance on how to use AI “truthfully, fairly, and equitably.” Specifically, the FTC recommends that businesses using AI should, among other things:

  • Use the right data set, analyzing for gaps and supplementing missing data as necessary.
  • Monitor for discriminatory outcomes.
  • Embrace transparency and independence, such as by publishing results and considering outside audits.
  • Not exaggerate or over-promise unbiased results, as doing so may be a deceptive trade practice.
  • Disclose how they are using captured data.
  • Ensure that the AI tool does more good than harm.
  • Remain accountable for algorithmic results, or risk an FTC investigation or enforcement action. (FTC Blog, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI (Apr. 21, 2021).)

(For more on background checks under the FCRA, see Fair Credit Reporting Act (FCRA) Disclosure and Authorization for Background Checks and Fair Credit Reporting Act (FCRA) Adverse Action Notification Letter on Practical Law.)

Salary History Bans

Many state and local governments have made pay equity a priority, including wage-gap initiatives like salary history bans. Salary history bans, also referred to as pay or wage history bans or wage-gap laws, generally prohibit employers from inquiring about an applicant’s past compensation or benefits during the pre-employment process or considering that information when making interview, hiring, or compensation decisions.

Most salary history bans prohibit employers from relying on an applicant’s salary history information when making hiring decisions, even if that information is publicly available. Depending on the data and algorithms used, some AI tools may violate salary history ban laws by screening or selecting applicants based on past compensation. Employers should ensure that the algorithms do not identify past salary as a relevant screening criterion, especially in those jurisdictions that have enacted salary history bans.

(For more on salary history bans, see State and Local Salary History Bans and State and Local Salary History Laws Chart: Overview on Practical Law.)

Criminal History Bans

Many states and local jurisdictions have enacted criminal history bans (also known as ban-the-box laws) in an attempt to ensure fair hiring practices and provide a second chance for individuals with a past criminal arrest or conviction record. While they vary in detail, these laws generally restrict employers from asking about or relying on an applicant’s criminal history until after making a conditional offer of employment.

An AI recruitment tool that screens out applicants based on their criminal backgrounds (and potentially even gaps in employment) may violate these ban-the-box laws. Employers should ensure that the algorithms used do not make impermissible employment decisions based on criminal backgrounds.

(For more on state and local ban-the-box laws, see Ban-The-Box State and Local Laws Chart: Overview on Practical Law.)

Privacy and Data Concerns

Employers using AI screening and recruiting tools must ensure that the AI tools do not violate applicants’ or employees’ privacy rights under various statutes. Although a detailed discussion of these laws is beyond the scope of this article, employers using AI tools must ensure compliance with applicable federal, state, and local privacy and data protection laws.

Employers also must understand what data the AI tool collects and know where it is stored. They must ensure they can find the data and take the necessary steps to preserve it in the event of actual or threatened litigation, or face serious litigation risks.

(For more on data privacy issues involving AI, see Artificial Intelligence: Key Legal Issues in the January 2023 issue of Practical Law The Journal.)

ChatGPT and Generative AI

AI-powered chatbots have begun to perform various workplace functions with increasing frequency. This trend has only accelerated since the launch of ChatGPT, a dialogue-based AI technology that provides textual responses to users’ natural language queries through an online chatbot interface.

While ChatGPT and other generative AI tools may offer the promise of increased efficiency and the ability to free employees from more mundane, time-consuming tasks, the use of ChatGPT in the workplace raises many questions for employers, most of which remain unanswered and untested.

For example, employers considering using ChatGPT in the workplace must:

  • Determine how much, if at all, to allow or encourage employees to use ChatGPT and similar tools to perform their job functions. If allowing its use, the employer should train employees on any restrictions or limitations on using the tool.
  • Understand the legal risks of using ChatGPT in the workplace, such as:
    • the potential employer liability based on the inherent bias and limitations of AI, depending on the data on which the tool is trained;
    • the possibility that ChatGPT will provide inaccurate responses; and
    • the potential for disclosure of trade secrets and other confidential information when engaging with ChatGPT.
  • Comply with any state or local laws requiring audits or notice if using ChatGPT or other automated decision tools in the hiring process.

(For more on the legal issues raised by ChatGPT and similar generative AI tools, see ChatGPT and Generative AI: Key Legal Issues in the June 2023 issue of Practical Law The Journal.)

Barbara Harris joined Practical Law from DLA Piper LLP (US), where she was counsel in the labor and employment group. Previously she was counsel at Solomon, Zauderer, Ellenhorn, Frischer & Sharp and a litigation associate at Alschuler, Grossman & Pines LLP.