
AI personality tests for hiring are here. Now what?

Author: ADM+S Centre
Date: 8 January 2024

Many employers use personality tests to determine a candidate’s cultural fit at a company. According to the New York Times, personality testing is a $2 billion industry. With the recent rise of AI tools, more companies are turning to AI-driven personality testing to make hiring decisions. But how reliable are these tests?

“These days we pay a lot of attention to the use of AI in hiring and employment,” explains Julia Stoyanovich, a computer science professor at New York University (NYU) and affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society.

Stoyanovich has extensively researched the role of AI tools in the hiring process. She and a team of researchers conducted an external audit of two AI software companies that claim to determine a candidate’s personality when they are being considered for a job. The tools claim to construct a person’s personality profile from their resume, LinkedIn profile, and profile on X (formerly known as Twitter).

But can employers trust the process?

“What we found was that unfortunately these tools do not live up to their own expectations. The kinds of personality profiles that they construct can vary quite a bit depending on some properties of the input, such as the resume, that shouldn’t matter at all.

“And so I think that we really need to be paying attention to the validity of these tools, to how useful they are, in addition to whether or not they are discriminatory.”

Stoyanovich said that one of the reasons employers use these tools, despite findings that they aren’t helpful, is that they promise efficiency in screening.

In November, Stoyanovich was invited to speak at one of United States Senator Chuck Schumer’s AI Insight Forums. In her statement she said one way to make AI use in hiring more ethical is to disclose when AI is being used in the hiring process. And while New York City’s Local Law 144 aims to do this, Stoyanovich says it is not enough, because the law does not include any provisions to explain to job seekers why they were screened out by the tool.

“The question is whether it’s in fact helpful that employers disclose the use of AI in their screening processes to potential employees.

“What can we gain with the help of disclosure? There are lots of things that we can gain. One of them is simply that the public at large doesn’t have any information at all about what tools are being used today. 

“Additionally, for an individual, if they learn before they apply that they are going to be screened by a tool that they themselves don’t trust, then a potential job applicant can decide not to apply. They can say this particular test is going to disadvantage me because I have a particular type of a disability, or they can simply request that they be screened in a different way.”

New York City’s Local Law 144 requires a bias audit for AI hiring tools, but factors such as age and disability aren’t accounted for.

“These audits only concern a very specific type of bias, one that is relatively easy to comply with and relatively easy to check, and only with respect to gender, ethnicity and race, or intersections of these categories.

“But the law doesn’t consider bias auditing based on age for example or on disability status. So ageism is really unfortunately very prevalent in hiring and employment.”

In 2022, the US Equal Employment Opportunity Commission filed a lawsuit against a company because its AI model reportedly discriminated against female job applicants over the age of 55 and male job applicants over the age of 60.

In an ideal world, AI would operate without any bias. But how likely are we to achieve this? Is it even possible?

“I must say that I am very skeptical about our ability to use a technical patch, a piece of technology, to address a long-standing societal problem. The reason for this is that these problems are not purely technical, they are sociotechnical.

“The reason that there’s bias in the predictions that these tools are making is because of how we construct them, because of how we source the data, and because of what features we chose to use in order to make these predictions. But also, to a very large degree, what these tools can do is limited by how biased or unbiased our world is today.”

It’s hard to make strong predictions about exactly how the intersection of AI and hiring will evolve. 

Stoyanovich says there is more work to be done at both the policy and the technological level.

“So I think that we all should take it upon ourselves to demand more disclosure about the use of these tools. Folks are paying attention: folks at the federal level, folks in New York City, private companies as well as government entities. Everybody’s paying attention.

“So we really have an opportunity here today, in 2024 and in the following years, to hold companies to account for the goals that they pursue in their use of AI, and also for how they are checking whether these AI-based tools are helping them reach those goals.”

Watch the original video story, “AI personality tests for hiring are here. Now what?”, published by MarketWatch on 4 January 2024.

