Analyzing trends in leadership, company culture and the art of career building.
‘Hard-data discourse’
This week, researchers from the University of Chicago Booth School of Business and Erasmus University Rotterdam shared new findings about artificial intelligence’s ability to surface strong job candidates.
The results don’t bode especially well for human recruiters. But they do give business leaders some new data that can help them decide if, or where, it makes sense to invest in AI.
Working with a recruitment firm, the researchers randomly assigned 67,000 job seekers to be interviewed by an AI voice agent, a human recruiter or their choice between the two. (The AI voice agent disclosed itself as AI at the beginning of the call, so candidates paired with a bot knew they weren't speaking with a human.) In all cases, humans made the final hiring decisions for the jobs (entry-level customer service roles in the Philippines) based on how well candidates performed in the interview and on a standardized test.
The AI-led interviews resulted in 12% more job offers and a 17% higher rate of retention for at least the first month. The paper, which is being circulated for feedback before undergoing formal peer review, found that AI voice agents covered significantly more key topics in the calls than human interviewers did, which meant the recruiters had more relevant information on which to base their decisions.
That’s in part because, as Booth’s Brian Jabarian, one of the paper’s authors, told me, the AI spoke less and prompted the interviewee to speak more — a useful reminder for anyone meeting with job candidates as part of the evaluation process. Also, when there are thousands of interviews to conduct, human interviewers are prone to fatigue (unlike AI) and can sometimes forget to ask certain questions.
Some candidates balked at the idea of being interviewed by AI. About 5% of applicants given an AI-led interview ended the call because they didn't want to speak to a bot. The voice agent also ran into technical difficulties in 7% of cases. And applicants did rate the AI voice agent as "significantly less natural."
However, among applicants who volunteered feedback, 70% of those who encountered an AI-led interview said it was a positive experience, versus roughly half of those interviewed by humans. That caught the professional recruiters off guard; a majority of them had expected AI to perform worse.
Jabarian said he, too, was “quite surprised” the AI voice agent was as good as it was at collecting data through social interaction — a skill usually thought to be a uniquely human strength.
Also remarkable: Almost four in five applicants, given the option, chose to be interviewed by AI. That may have been because it was simply more convenient — they could schedule the call at any time. The researchers also found that the applicants they surveyed had a mostly sunny view of AI, which may have contributed to more of them opting in.
One of the most interesting takeaways, to my mind, was what the study implied about the cost-effectiveness of replacing professional recruiters with AI bots.
Even though the bots did surprisingly well in a complex and important job, this type of technology comes with no guarantee of a return on investment. While candidates in the study were able to schedule their interviews a bit faster with the AI voice agents, that efficiency was wiped out by the fact that it took the human recruiters twice as much time, on average, to review the AI-led interview results.
That outcome accords with an MIT study that made headlines and rattled investors last week with the conclusion that 95% of enterprise AI pilots saw no measurable returns, despite billions in investments.
Whether investing in AI voice agents for job interviews actually translates into real savings is largely situational. It may depend on what kinds of jobs are being filled, and whether AI can be as skilled at interviewing for, say, white-collar positions versus the more rote jobs involved in this study.
It also may depend on factors such as hiring volume and how much a company pays its human recruiters. Small companies in regions where recruiter wages are fairly low might not break even, especially given upfront vendor costs. Large organizations processing huge quantities of applicants in areas where recruiter wages are high, by contrast, might realize significant savings.
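The volume-and-wages logic above can be sketched as a simple break-even calculation. All the numbers below are illustrative assumptions for the sake of the sketch, not figures from the study:

```python
# Hypothetical break-even sketch for AI interview tooling.
# Every dollar figure here is an invented assumption, not data from the paper.

def annual_savings(interviews_per_year, recruiter_cost_per_interview,
                   ai_cost_per_interview, extra_review_cost, vendor_fee):
    """Savings = (human interview cost - AI interview cost - extra human
    review time the AI transcripts require) per interview, times hiring
    volume, minus a fixed annual vendor fee."""
    per_interview = (recruiter_cost_per_interview
                     - ai_cost_per_interview
                     - extra_review_cost)
    return interviews_per_year * per_interview - vendor_fee

# Small firm in a low-wage recruiting market: likely under water.
small = annual_savings(500, 8.0, 3.0, 4.0, 10_000)      # -9,500

# High-volume firm paying recruiters well: likely comes out ahead.
large = annual_savings(50_000, 25.0, 3.0, 6.0, 50_000)  # 750,000
```

The point of the toy model is that the same tool can lose money at one firm and save six figures at another, purely as a function of volume and wages.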
The fact that the AI interviewing tool appears to lead to higher-quality matches also may mean companies stand to save money because they won’t have to replace their new hires as often — a huge boon in industries like call centers, where turnover is notoriously high.
More broadly, this study may mark the beginning of a more clear-eyed phase in how companies approach their AI investments.
Executives inundated with AI hype (and pressure from the boardroom) might not take the time for stodgy analyses involving randomized controlled trials as they rush to deploy their first pilots. But to Jabarian, an economist who studies how technology reshapes productivity and organizational behavior and serves as principal researcher at Booth's Roman Family Center for Decision Research, that kind of scrutiny (and sobriety) is vital. Without thoughtful methodology, there's no way to know whether AI is really improving matters, and if so, how — and by how much.
“We have to move from the ‘possible’ discourse to the hard-data discourse, so we don’t lose our rationality,” he said.
In other words, instead of listening to the visionaries, sprinkling chatbots everywhere and hoping for magic, deploying AI may soon mean putting decisions firmly in the hands of the bean-counters and scientists, with a process that is slower, more deliberate and laser-focused on where the costs actually pencil out.