With great power comes great responsibility: A commitment to data privacy with AI use


It’s no secret that AI is taking the world by storm. And with this explosion of AI tools comes a surge of new legislation targeting an area that had gone largely unregulated before 2023. There are, of course, a number of moral and legal obligations to consider when feeding personal data into generative AI tools. Here’s our approach to AI privacy and security at Greenhouse.


The push and pull between privacy and AI

Within the last year or so, we’ve seen the emergence of new AI legislation aimed at protecting individuals. These proposals typically aim to create legal regimes regulating generative AI (genAI) similarly to the way privacy is regulated. This makes sense – genAI and privacy overlap considerably and carry similar risks.

There’s also a natural tension between AI and privacy. Whereas privacy is about limiting data collection to what is necessary, genAI models perform better with more data and need more data to evolve. But it’s clear that responsible use of genAI requires ensuring that individuals’ privacy rights are secured and protected.


A look at the legal landscape

Valid bias concerns around AI data privacy in the hiring process have led to a number of regulations to protect candidates, which we’ve addressed in a recent blog. Now, the conversation has grown to include new standards for safety, security, privacy and responsibility that apply to any AI development.

In the US, President Biden recently issued an Executive Order on AI directing federal agencies to create rules and pursue research focused on the development of AI. As with many prominent AI data privacy legislative proposals, the Executive Order emphasizes that the development of AI should be accompanied by respect for privacy rights.

Additionally, the Executive Order explicitly calls for Congress to pass comprehensive data privacy legislation to ensure that all Americans are granted data protection rights aligned with the GDPR and other similar laws.


Our commitment at Greenhouse

Above all else, Greenhouse is people-first. That’s why we support the goals of the Executive Order and these legislative proposals. As we anticipate the arrival of more laws regulating AI and data privacy, we’re taking a proactive approach to protect our customers and their employees. Our Privacy and Security teams have developed and implemented processes to ensure that every aspect of new genAI tools, features and vendors is documented and reviewed, with potential risks mitigated at each stage from development to deployment.

We’re embracing “Privacy by Design” – a framework that ensures data protection and privacy are considered from the earliest stages of design and throughout the product engineering and development lifecycle – and mapping it to AI. This is consistent with our view of AI as an assistant in hiring, not a replacement, and is emblematic of our commitment to the responsible use of AI to help all companies become great at hiring.

Learn more about how we’re committed to our customers’ privacy and data protection.



Brian Reece is Privacy Counsel at Greenhouse. In this role, Brian offers legal guidance on privacy and data protection matters. He’s also responsible for overseeing the compliance of various products, services, processes, solutions, systems and architectures with privacy and data protection regulations. Brian holds a degree from the UCLA School of Law.

Ready to become great at hiring?

Request a demo today