Critics of using artificial intelligence (AI) in the recruitment process say that programs are imbued with the biases of their creators. With coding and software development roles still dominated by white males, this presents a huge problem for employers wanting to use technology to eliminate unconscious bias. But what is the truth?
AI is now a catch-all term for everything from Amazon’s Alexa to Google’s AlphaGo. However, many things that are marketed as AI are, in fact, no more complex than the common calculator. These products are reliant on pre-programmed inputs, even though they use concepts and technologies which have emerged from the study of AI, such as voice recognition. Apple’s Siri assistant, for example, is programmed to run a web search for any command that it doesn’t know.
This type of AI is particularly susceptible to its creator’s biases, as it isn’t intelligent in its own right. It relies on pre-scripted instructions to output a result defined by its creator.
Machine learning, which uses algorithms to learn from patterns in data, is closer to true artificial intelligence. It can use either supervised learning, where it is fed labelled examples and told what the patterns mean, or unsupervised learning, where it identifies patterns by itself.
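The difference between the two approaches can be sketched in a few lines of code. This is a deliberately tiny illustration with made-up exam scores, not a real recruitment system: the "supervised" part copies the label of the nearest labelled example, while the "unsupervised" part groups unlabelled scores with no labels ever provided.

```python
# Supervised learning: the program is given examples *and* their labels.
# Here, a 1-nearest-neighbour classifier on labelled scores (toy data).
labelled = [(55, "fail"), (60, "fail"), (80, "pass"), (90, "pass")]

def predict(score):
    # Find the training example closest to the new score and copy its label.
    nearest = min(labelled, key=lambda ex: abs(ex[0] - score))
    return nearest[1]

# Unsupervised learning: the program is given only the examples and must
# find structure itself. Here, split unlabelled scores into two clusters
# around their midpoint -- no labels were ever provided.
unlabelled = [52, 58, 83, 91]
midpoint = (min(unlabelled) + max(unlabelled)) / 2
clusters = {
    "low": [x for x in unlabelled if x < midpoint],
    "high": [x for x in unlabelled if x >= midpoint],
}

print(predict(78))   # -> "pass" (label copied from the nearest example, 80)
print(clusters)      # -> {'low': [52, 58], 'high': [83, 91]}
```

In the supervised case the meaning of "pass" and "fail" was supplied by a human up front; in the unsupervised case the program only discovered that the scores fall into two groups, without knowing what the groups mean.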
For employers seeking a bias-free recruitment solution, AI can seem like an easy way to avoid human bias and eliminate discrimination. However, there are a number of problems with this:
AI needs data to learn, and unless that data is completely unbiased, it will learn biases. This is a particular problem in recruitment, where most historical decisions were shaped by conscious or unconscious bias. Another key issue with artificial intelligence is its dependence on patterns. If the data shows a pattern of Russell Group graduates being more likely to perform highly, the AI will learn that Russell Group graduates are good and other graduates are bad. In this way it acts much like the human brain, which also learns biases from the patterns it sees.
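The mechanism is easy to demonstrate. In this hypothetical sketch (the data and the group names are invented for illustration), a naive model scores candidates by the historical hire rate of their university group; because the historical labels reflect past human decisions, the model simply reproduces whatever bias those decisions contained.

```python
# Hypothetical historical hiring data: (university_group, was_hired).
# The labels record past human decisions, so any bias in those
# decisions is baked into the data the model learns from.
history = [
    ("russell", True), ("russell", True), ("russell", True), ("russell", False),
    ("other", True), ("other", False), ("other", False), ("other", False),
]

def hire_rate(group):
    # Fraction of past candidates from this group who were hired.
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that scores new candidates purely by their group's
# historical hire rate -- i.e. it has "learned" the pattern in the data.
def score(candidate_group):
    return hire_rate(candidate_group)

print(score("russell"))  # 0.75 -- the model has learned "Russell Group is good"
print(score("other"))    # 0.25 -- and "other graduates are bad"
```

Nothing in the code mentions ability or merit; the model ranks candidates on group membership alone, because that is the strongest pattern in the biased data it was given.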