The evolving legal landscape of AI in the hiring process


When it comes to AI, it’s clear that there are a variety of ways this game-changing technology can and will influence talent acquisition, both in the near term and down the road. The excitement and speculation about the potentially transformative power of AI have been at a fever pitch, but there’s also a sense that we are still at the beginning stages of fully appreciating the extent of its capabilities or its eventual impact on nearly every aspect of human life.

Our mission at Greenhouse is to help every company become great at hiring. That’s why we’re exploring ways in which we can harness the power of AI in the hiring process to enhance and streamline recruiting and interviewing, while simultaneously maintaining our commitment to promote ethical and equitable practices aimed at reducing bias.


The rapidly evolving legal landscape

In light of AI’s prominent place in the public discourse, it’s no surprise that we are suddenly fielding lots of questions from customers and candidates alike. These include concerns about whether Greenhouse uses algorithms to automatically advance or reject candidates (spoiler alert: we don’t), or whether our products comply with laws regulating the use of AI in the hiring process and in hiring tools (they do).

These concerns are valid, as regulators in the United States and abroad are swiftly signaling their apprehension about the increasing use of AI in hiring and its potential to produce inequitable outcomes for underrepresented communities. Earlier this year, for example, the EEOC issued Title VII guidance for employers who rely on AI and other automated decision-making tools in hiring. New York City’s Local Law 144, which governs the use of so-called automated employment decision tools, went into effect this summer, and similar bills are pending in Washington, D.C., Massachusetts and elsewhere across the country. At the same time, the EU is poised to pass the world’s first comprehensive legal framework for artificial intelligence, the AI Act, sometime within the next year. The draft legislation for the AI Act explicitly designates AI systems used in hiring as “high-risk” and subject to heavy scrutiny, although details about how the law will be enforced are still forthcoming.

New York City’s Local Law 144 is the first law in the US to address the use of algorithmic decision-making in hiring, and experts believe it will serve as a model for future legislation governing this space. Specifically, the law mandates that a tool that makes automated hiring or promotion decisions must undergo a bias audit and that employers who use such tools must publish the results of those audits. Hiring software solutions that employ black-box algorithms to make hiring decisions that neither the candidate nor the recruiter can confidently explain will have a difficult time complying with this new regulatory regime.
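
To make the audit requirement concrete: the rules implementing Local Law 144 center on selection rates and impact ratios across demographic categories. Here is a minimal illustrative sketch of that calculation in Python; the counts and category names are hypothetical, not drawn from any real audit.

```python
# Illustrative impact-ratio calculation of the kind required by bias
# audits under NYC Local Law 144. Counts and categories are hypothetical.

# Hypothetical screening outcomes per demographic category.
outcomes = {
    "category_a": {"selected": 120, "assessed": 400},
    "category_b": {"selected": 45, "assessed": 200},
}

# Selection rate: share of assessed applicants in each category who advance.
selection_rates = {
    category: counts["selected"] / counts["assessed"]
    for category, counts in outcomes.items()
}

# Impact ratio: each category's selection rate divided by the highest one.
highest_rate = max(selection_rates.values())
impact_ratios = {
    category: rate / highest_rate for category, rate in selection_rates.items()
}

for category in outcomes:
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```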

Related litigation is also beginning to proliferate. For example, a pending class action lawsuit alleges that one large HRIS provider’s internal screening tools, which recommend accepting or rejecting applications, disqualify applicants who are Black, disabled, or over the age of 40 at a disproportionate rate. In another example, an employer that programmed its hiring software to automatically reject candidates based on age recently settled an enforcement action filed by the EEOC.


How Greenhouse is thinking about incorporating AI in the hiring process

For many of our customers, despite the potential legal risks, the case for simplifying the recruiting process with AI or machine learning (ML) is highly compelling. The key will be ensuring AI is applied to the hiring process in a responsible and ethical way.

As the modern workplace continues to transform, internal recruiting teams are increasingly expected to monitor and report on their performance metrics and deliver efficiencies in their work, many of which can be achieved by leveraging AI or ML to automate some of the steps along the path to hiring the right people. Near-universal internet access and the dramatic shift to remote work occasioned by the COVID pandemic mean that many companies are receiving an unprecedented number of job applications from a vastly expanded geographic range. The ability to sort through those applications in a streamlined, scalable fashion is critical to establishing and maintaining a successful hiring program.

However, the need for speed must always be balanced against what we now know: algorithms trained on historical hiring data may favor the populations overrepresented in that data. In addition, AI algorithms may perpetuate long-standing biases against groups of people who have traditionally faced discrimination in the employment context. A recruiter can correct for their own biases through self-awareness and intentionality; an AI-generated algorithm is inherently opaque and does not possess that capability. Simply put, even with the best intentions, using AI or ML to replace human decision-making in hiring could result in an unacceptable tradeoff, in which fair, equitable and reliable hiring outcomes are sacrificed in the name of efficiency.

It’s good to be aware of the potential challenges of AI in the hiring process, but with measured application, awareness of the pitfalls and clear intention, AI can provide enormous benefits.

That’s why companies should take the time now to carefully evaluate the ways that they use automation in their recruiting and hiring practices, so as to avoid both biased outcomes and undesirable legal consequences down the road.


Greenhouse does not use AI to replace human decision-making

To be clear, there are many aspects of the hiring experience that can be dramatically improved by AI-enabled automation or streamlining without posing any additional risk of bias, such as employing generative AI to assist in drafting job posts, or training calendaring technology to respond to natural language requests to schedule interviews while taking into account all of the relevant participants’ availability. This type of innovation, which has no direct bearing on whether a candidate is ultimately hired or rejected, is not the target of the current legislation regulating AI hiring tools. Indeed, Greenhouse’s feature set already incorporates some of this technology and we plan to aggressively pursue product enhancements in this vein in the months and years to come.

The simplification of cumbersome drafting and scheduling tasks that aren’t inherently vulnerable to bias, however, is a far cry from allowing an algorithm to determine a candidate’s fitness for a job. In order to avoid the risk of bias that is inherent to that scenario, Greenhouse’s software is intentionally designed to ensure that human beings are involved in every step of the hiring decision-making process. We don’t use ML or other algorithmic techniques to automatically make disposition recommendations, assign quality scores, or rank candidates, because to do so would jeopardize our commitment to fairness and transparency in the hiring process, in addition to implicating laws like NYC Local Law 144.

Greenhouse customers can streamline the application review process without triggering legal or ethical tripwires by using our rules-based automation capabilities, which ensure that people (and not algorithms) still make all of the decisions about what qualifications are required for a given role and whether a given candidate meets those qualifications. Greenhouse can be enabled to automate the process of rejecting applicants, but only based on their isolated response to a custom prompt question that is selected and crafted at the employer’s sole discretion, without any input from training data or machine learning.

So, for example, a customer that wants to require candidates to have more than five years of experience for a certain job would be able to configure their Greenhouse account to auto-reject candidates who answer “no” to that qualifier. There’s no black box: it’s a decision-making process that’s easy to understand and explain to recruiters and candidates alike, and it saves precious time and energy that can be put to better use elsewhere in the search.
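
To make the distinction concrete, here is a minimal sketch of what rules-based screening like this looks like in code. It’s written in Python with hypothetical names (Application, QUALIFIER) and is illustrative only, not Greenhouse’s actual implementation or API. Note that the outcome depends solely on the candidate’s isolated answer to the employer-authored question, with no training data, scoring or ranking involved.

```python
# Illustrative sketch of rules-based (non-ML) application screening.
# All names and data here are hypothetical; this is not Greenhouse's
# actual implementation or API.
from dataclasses import dataclass

@dataclass
class Application:
    candidate_name: str
    answers: dict  # custom question text -> candidate's response

# The employer defines the qualifier question and the disqualifying answer.
QUALIFIER = "Do you have more than five years of relevant experience?"
DISQUALIFYING_ANSWER = "no"

def review(app: Application) -> str:
    """Apply the employer-authored rule to a single application.

    The outcome depends only on the candidate's isolated answer to the
    custom prompt question -- no training data, scores or rankings.
    """
    answer = app.answers.get(QUALIFIER, "").strip().lower()
    if answer == DISQUALIFYING_ANSWER:
        return "auto-reject"
    return "advance to human review"

if __name__ == "__main__":
    print(review(Application("Candidate A", {QUALIFIER: "No"})))   # auto-reject
    print(review(Application("Candidate B", {QUALIFIER: "Yes"})))  # advance to human review
```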

With Greenhouse, a human is always in the loop on decisions about which candidates will make the best hires, which means that Greenhouse is not an “automated employment decision tool” and is therefore outside of the scope of NYC Local Law 144, as well as any pending or future legislation that similarly seeks to regulate systems that “hire by algorithm” or independently rank or score candidates based on machine learning.

Greenhouse continues to closely monitor relevant legal developments and remains committed to ensuring that our customers can hire for what’s next while reducing bias along the way.

Our approach to structured hiring empowers companies to build stronger alignment between recruiters and hiring managers, improve the candidate experience and ultimately make better hires. Learn more about our AI approach and product innovation developments.

Jung-Kyu McCann

is Chief Legal Officer and Corporate Secretary of Greenhouse where she manages all legal and compliance matters. Prior to Greenhouse, she served as Chief Legal Officer at Druva and in various roles at Apple and Broadcom, focusing on strategic transactions and corporate governance. She received her B.A. from Cornell University and her J.D. from Fordham University School of Law, cum laude and Order of the Coif.

Kate Hooker

is Associate General Counsel at Greenhouse. Prior to joining Greenhouse as the company’s first legal hire in 2015, she served as in-house counsel at Bloomberg L.P. Kate holds an undergraduate degree from Duke University and a J.D. from New York University School of Law.
