Talent Matching, bias audits and transparency

At Greenhouse, we believe that fair hiring starts with fair technology. Talent Matching is a core component of Real Talent™, a new Greenhouse feature set designed to help customers tackle overwhelming candidate pipelines by analyzing and filtering applications into tiers based on risk and relevance.

Recruiters decide not only which job criteria to prioritize but also what relative weight to assign to each factor they are assessing. Our customers can see the basis for each Talent Matching result, meaning recruiters have transparency into which criteria match a candidate’s resume. Talent Matching cannot automatically advance or reject a candidate; it is a tool designed to help humans make hiring decisions, not to make those decisions for them.

We’ve implemented a comprehensive AI bias auditing program to ensure that Talent Matching operates equitably and consistently across demographic groups. In auditing Talent Matching for bias, we strive not just to meet but to surpass legal requirements and industry standards, because we are passionate about the most qualified candidate getting the job.

We engaged Warden AI, an independent third party, to conduct our bias audits. Key elements of these audits include: 

Continuous third-party monitoring: Warden AI will conduct regular testing of the algorithms underlying Talent Matching to provide continuous oversight and detect any potential bias issues. This isn’t a one-time or even an annual assessment – it’s our ongoing commitment to combat bias in a space that is rapidly evolving. 

Advanced technical analysis: Warden AI applies two complementary methodologies to audit the Talent Matching algorithms (a simplified illustration of both follows this list): 

  • Disparate impact analysis: We measure equality of outcomes across demographic groups of candidates (i.e., whether certain groups receive disproportionately better or worse results from the algorithm than others).
  • Demographic variable testing: We measure equality of treatment by examining how demographic variables (such as names, gendered words and hobbies) affect the algorithm’s behavior.
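
For readers who want a concrete sense of what these two checks measure, here is a minimal sketch in Python. It is illustrative only: the four-fifths-rule threshold, the synthetic outcome data and the stand-in scoring function are assumptions made for demonstration, not Warden AI’s actual methodology, data or thresholds.

```python
# Minimal sketch of the two audit checks described above.
# Assumptions (not Warden AI's actual methodology): a four-fifths-rule
# threshold, synthetic outcome data and a stand-in scoring function.
from collections import defaultdict

FOUR_FIFTHS = 0.8  # common adverse-impact rule of thumb, assumed here as the flag threshold


def impact_ratios(results):
    """Disparate impact analysis: compare each group's rate of favorable
    outcomes (e.g., landing in a top tier) to the best-performing group's rate.

    `results` is an iterable of (group, favorable) pairs.
    """
    counts, favorable = defaultdict(int), defaultdict(int)
    for group, ok in results:
        counts[group] += 1
        favorable[group] += int(ok)
    rates = {g: favorable[g] / counts[g] for g in counts}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


def counterfactual_gap(score_fn, resume_text, name_swaps):
    """Demographic variable testing: score otherwise identical resumes that
    differ only in a demographic signal (here, a name) and report how far
    apart the scores are. `score_fn` is a hypothetical stand-in for the model.
    """
    scores = [score_fn(resume_text.replace(old, new)) for old, new in name_swaps]
    return max(scores) - min(scores)


if __name__ == "__main__":
    # Synthetic data for illustration only, not real audit results.
    outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
                + [("group_b", True)] * 28 + [("group_b", False)] * 72)
    for group, ratio in impact_ratios(outcomes).items():
        flag = "OK" if ratio >= FOUR_FIFTHS else "REVIEW"
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")

    # A scorer that ignores demographic signals should show a gap near zero.
    def constant_scorer(text):
        return 0.75  # hypothetical model output, independent of the name swap

    gap = counterfactual_gap(constant_scorer, "Alex - recruiting operations, 5 yrs",
                             [("Alex", "Alex"), ("Alex", "Aisha")])
    print(f"counterfactual score gap after name swap: {gap:.2f}")
```

In this sketch, a group whose impact ratio falls below the assumed four-fifths threshold is flagged for review, and an unbiased scorer should produce a counterfactual gap near zero when only a demographic signal changes.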

Comprehensive audit coverage: Warden AI leverages its own proprietary dataset to evaluate bias across 10 protected classes, in line with existing and emerging regulations like NYC LL-144, Colorado SB205 and California FEHA. 

Public transparency: Warden AI will publish the results of each audit on its public-facing Greenhouse dashboard, giving customers and candidates direct access to our bias testing results. 

Validation across the product lifecycle: Every new version of Talent Matching is tested for bias and must pass a bias audit before it is released live.

Each new AI feature that Greenhouse develops, including Talent Matching, goes through a comprehensive review involving key internal stakeholders from our legal, product and security teams, as well as external legal experts. This process ensures that our customers can use our tools to optimize their recruiting processes ethically, legally and confidently. For more information, see our ethical principles.