This ecommerce giant made its billions by automating the shopping experience, but when it came to recruiting, it made three crucial mistakes that led to bias.
According to sources close to the project, it was obvious from the first year that Amazon’s experimental recruiting AI did not like women… like, at all!
The secret project started quietly in 2014, when the Seattle company sought to create in-house computer programs to review and score candidates, sources told Reuters. “Everyone wanted this Holy Grail,” one source shared. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”
In much the same way that customers rate products, the engine rated candidates from one to five stars. However, a year into the experiment, a gender bias became apparent, especially for software developer and other technical posts.
The problem was in the data: the algorithm had been trained on a decade of resumes submitted mostly by men, and it concluded that the ideal candidate was a man, or rather, not a woman. The system docked points from graduates of all-women’s colleges and downgraded resumes containing the word “women’s,” as in “women’s chess club captain.”
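To see how that happens mechanically, here is a minimal sketch using entirely synthetic data and a hypothetical “mentions women’s” feature; it is not Amazon’s actual model, just a demonstration of how a classifier fit to historically skewed hiring outcomes learns to penalize a gendered token.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: years of experience, and whether the resume
# contains the word "women's" (e.g., "women's chess club captain").
experience = rng.normal(5, 2, n)
mentions_womens = rng.binomial(1, 0.15, n)

# Synthetic historical labels: recruiters hired on experience, but resumes
# with the gendered token were hired far less often regardless of merit.
qualified = experience > 5
hired = (qualified & (rng.random(n) > 0.7 * mentions_womens)).astype(int)

X = np.column_stack([experience, mentions_womens])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gendered token comes out strongly negative,
# even though the token says nothing about ability.
print(dict(zip(["experience", "mentions_womens"], model.coef_[0])))
```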
Sources say the algorithm was edited to be neutral to these specific terms, but the fear remained that the program would teach itself new ways to identify women’s resumes and continue to grade them lower.
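That fear is well founded. Extending the synthetic sketch above, suppose the explicit term is scrubbed from the features but a correlated signal remains, here a hypothetical “attended an all-women’s college” flag; the model simply shifts its penalty onto the proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

is_woman = rng.binomial(1, 0.2, n)                   # hidden attribute, never a feature
womens_college = is_woman * rng.binomial(1, 0.5, n)  # proxy correlated with gender
experience = rng.normal(5, 2, n)

# Same biased history as before: qualified women hired at a much lower rate.
qualified = (experience > 5).astype(int)
hired = qualified * np.where(is_woman == 1, rng.binomial(1, 0.3, n), 1)

# Train WITHOUT any explicit gender term: only experience and the proxy.
X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The penalty migrates to the proxy feature.
print(dict(zip(["experience", "womens_college"], model.coef_[0])))
```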
And though Amazon owes much of its success to its ability to automate everything from warehouse management to pricing, the project was scrapped last year as executives lost hope that the engine could ever be made to work. The sources, who only agreed to speak with Reuters a year after the project ended, and only under complete anonymity, maintain that no hiring decisions were made using the biased AI.
For some, this story is proof that we aren’t ready for AI in recruiting, and indeed there is still much to learn. Computer scientists like Nihar Shah, who teaches machine learning at Carnegie Mellon University, warn that an algorithm is easier to make than to control.
“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable – that’s still quite far off,” says Shah.
However, according to a 2017 CareerBuilder survey, 55 percent of US HR managers said that AI will be a regular part of their work within the next five years. So is the solution really to avoid artificial intelligence in recruiting altogether, or are there lessons we can glean from this AI debacle? For further insight, we talked to the director of product at SmartRecruiters, Hessam Lavi.
“Developers of these types of systems have an enormous responsibility to prevent negative biases from shaping the artificial intelligence they want to produce,” says Lavi. “So proper training needs to take place, covering not just the technical and process effects of artificial intelligence, but how AI will affect people as well.”
Lavi, who recently led the team that built SmartAssistant, the first recruiting AI native to an ATS, sees three crucial mistakes in Amazon’s experiment:
- Thinking the bias comes from the machine: Negative biases are unfortunately part of the recruiting trade, whether they come from humans or machines; they are just much harder to detect in people. So a system that makes biases apparent is valuable in itself, as shown in the audit sketch after this list. The AI learns from the data you feed it, so it’s not the program that’s biased so much as the people who made the decisions the computer is now analyzing. Eliminating the program is not tantamount to eliminating the bias.
- Limiting the data set: The dataset from one company, even one as big as Amazon, just isn’t enough. A single company may be reproducing biased hiring patterns without realizing it. The bottom line: the more data, the better.
- Deriving future predictions from past events: Past-predicts-future AI can work well in narrowly focused domains such as medical imaging, for example, forecasting the growth of a tumor, where an AI can be trained to make clear-cut decisions and act as an expert. Hiring, however, involves a wide range of factors, and there this kind of assumptive AI tends to amplify the biases of the past. If only men were hired in the past, the algorithm may assume that is because they were the best people for the job and will keep prioritizing them for future positions.
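On Lavi’s first point, making bias apparent is something a team can actually operationalize. Below is a hypothetical audit sketch with invented star ratings and a made-up threshold; real fairness audits use more careful metrics, but the principle, measure whether scores diverge across groups, is the same.

```python
import numpy as np

def score_gap(scores, group):
    """Mean-score difference between group 1 and group 0 (a crude parity check)."""
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    return scores[group == 1].mean() - scores[group == 0].mean()

# Invented star ratings (1-5) and a binary group label, e.g. whether the
# resume contains a gendered token. Both are placeholders, not real data.
scores = [4.5, 3.9, 4.2, 2.1, 2.4, 2.0]
group = [0, 0, 0, 1, 1, 1]

gap = score_gap(scores, group)
if abs(gap) > 0.5:  # the threshold is a policy choice, not a law of nature
    print(f"Audit flag: mean score gap of {gap:+.2f} stars between groups")
```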
His best advice? Avoid the black box!
“When we built SmartAssistant we split up the decision processing into smaller, distinct components,” says Lavi. “For example, one component would analyze candidates’ industry experience, one would examine education, one would evaluate soft skills, and so on. By creating these stand-alone units, we can trace negative outcomes back to their origin and understand why they are happening.”
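As a rough illustration of that design, and not SmartAssistant’s actual implementation, here is a sketch with invented component functions: each dimension is scored by its own unit, and the output is a per-component breakdown rather than a single opaque number, so a suspicious score can be traced to the component that produced it.

```python
from typing import Callable, Dict

Candidate = Dict[str, object]

# Hypothetical stand-alone scoring components, one per dimension.
def industry_experience(c: Candidate) -> float:
    return min(float(c.get("years_in_industry", 0)) / 10.0, 1.0)

def education(c: Candidate) -> float:
    return 1.0 if c.get("degree") else 0.5

def soft_skills(c: Candidate) -> float:
    return min(len(c.get("soft_skills", [])) / 5.0, 1.0)

COMPONENTS: Dict[str, Callable[[Candidate], float]] = {
    "industry_experience": industry_experience,
    "education": education,
    "soft_skills": soft_skills,
}

def score(candidate: Candidate) -> Dict[str, float]:
    """Return per-component scores, not just a single opaque number."""
    breakdown = {name: fn(candidate) for name, fn in COMPONENTS.items()}
    breakdown["total"] = sum(breakdown.values()) / len(COMPONENTS)
    return breakdown

print(score({"years_in_industry": 6,
             "degree": "BSc",
             "soft_skills": ["communication", "teamwork"]}))
```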
“We believe the final decision in the recruiting process will be made by humans for the foreseeable future,” Lavi affirms. “But AI can automate many of the repetitive tasks and winnow down the stacks of resumes that overwhelm recruiters and cause them to lean on their negative biases. AI technology is much more than just automating tasks; it can teach us about how we make decisions and point out shortcomings in our abilities.”