Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone

Lately, Tatiana Becker has been flooded with work. As the founder of the boutique US firm NIAH Recruiting, she spends her days sifting through hundreds of resumes, hoping to fill dozens of roles at companies that have hired her to find matching employees.

Companies tend to hire the most at the start of the year, mainly because newly set hiring budgets take effect in the first quarter. “Everybody came back to work, and it’s been kind of insane,” Becker said in a recent interview. In her professional groups and in forums for human resources and recruiting, everyone is buzzing about the same thing: using new artificial intelligence tools to ease the workload.

In the race to embrace artificial intelligence, some businesses are turning to a new crop of generative AI products that can help screen and rank job candidates — and some believe these tools can evaluate candidates more fairly than humans can. But a Bloomberg analysis found that the best-known generative AI tool systematically disadvantages candidates based on their names alone.
