Protecting against AI bias in HR and reward

Gethin Nadin, Director of Employee Wellbeing – Benefex

Over the years, I’ve enjoyed reading research and articles about luck – why do some people seem luckier than others? Are people born lucky? Well, the answer to that last question is yes: some people are born lucky. For example, if you happen to be a man, you’ll earn around 21% more than a woman. If you are a white man, you’ll earn as much as 17% more than a black colleague. In fact, it’s estimated that more than a third of the overall gender pay gap in the UK could be the result of bias.

What is unconscious bias?

Bias is defined as our prejudice for or against someone. Unconscious bias is how we think about different groups of people (based on stereotypes, stigma or society’s influence) outside of our conscious thoughts. It happens without us really noticing – it’s the snap judgements we make about people, and the unconscious ways we treat them. As so many organisations are now aiming to be inclusive, unconscious bias is a real concern.

We assume that AI systems will be bias-free. After all, machines are a blank slate – they can’t have unconscious biases, can they? Although some AI is built specifically to detect and remove bias, we must bear in mind that AI is still ultimately programmed by people, and trained on data produced by people. What does that mean for AI in the workplace?

Technology reflects the biases of the people who build it

Despite the efforts of many organisations to cultivate more diverse and inclusive workplaces, the AI tech that is common in our lives still shows alarming levels of bias. For example, voice-command systems in cars have struggled to recognise female voices, and facial recognition AI often performs worse on black faces. All AI is based on processing and learning from data – data which, at some point in its past, has had human fingerprints all over it. Unfortunately, bias is also present in some of the most fundamental HR and reward technologies, like employee benefits.

Segmenting groups of employees by age is a common source of bias in reward and HR. We know from research that the decade in which an employee was born doesn’t determine how they feel about work or what benefits they would like. Yet many HR departments still persist in developing benefit and wellbeing strategies that assume every twenty-something wants to get on the property ladder, and only parents want protection products.

A concept called age meta-stereotypes describes how we think when we believe another age group holds a stereotype about our own. A great example of this in action was reported by the Harvard Business Review: undergraduate students were tasked with training another person on a computer task, and the researchers varied the apparent age of each participant using photographs and voice-modifying technology. The results showed that stereotypes about older people’s abilities interfered with the training: young trainers delivered lower-quality training when they believed they were training an older person.

Eliminating historical bias

One of the most famous examples of how bias makes its way into AI comes from Amazon. After four years, Amazon scrapped an AI recruiting project when it was revealed that the system had taught itself that male candidates were preferable.

The system was trained to vet applicants by observing patterns in resumes submitted to Amazon over a 10-year period. Because the most successful applications had historically come from men, the system developed a preference for male candidates and penalised resumes that included words like ‘women’s’ (e.g. ‘women’s hockey’ listed as a hobby). In this instance, real-life unconscious bias demonstrated by humans filtered down through the technology.
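To make that mechanism concrete, here is a minimal sketch of how the failure mode arises. This is not Amazon’s actual system – the resumes, labels and tooling (Python with scikit-learn) are invented for illustration – but it shows how a model trained on biased historical decisions picks up gendered words as negative signals without anyone programming it to.

```python
# Illustrative sketch only: a toy classifier trained on historical hiring
# outcomes. The resumes and labels below are invented for this example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: the past outcomes skew male, so gendered tokens
# end up correlated with the hiring label.
resumes = [
    "software engineer, captain of men's rugby team",
    "software engineer, chess club president",
    "software engineer, captain of women's hockey team",
    "software engineer, women's coding society organiser",
]
hired = [1, 1, 0, 0]  # historical decisions, already biased

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" receives a negative
# coefficient, i.e. the model has 'taught itself' the historical bias.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:>10s}: {weight:+.2f}")
```

The point is that nobody wrote a rule against the word ‘women’s’ – the model inferred it from the outcomes it was shown, which is exactly how human bias filters down through technology.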

It’s difficult to control our own unconscious bias, because it operates outside our awareness. So how do we counteract it? Well, we need to make sure that our HR and reward teams are a diverse and fair representation of our organisation. We also need to make sure that the people who design and roll out AI within HR are able to remove bias from the training data that informs the AI.

Overcoming bias in AI can be as simple as eliminating the data associated with stereotypes. If we didn’t know an employee’s gender or age, what data points would we look at to determine which benefits were best for them? We also need to be honest with ourselves that historical data, when used to train a system for the future, can be misleading. If income protection was heavily targeted at and communicated to older workers, the historical data will show that older workers prefer income protection – a common assumption. Yet in 2019, younger employees are showing a preference for this product as part of their benefits package, and the historical data won’t show this.
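As a minimal sketch of that first step – assuming a hypothetical benefits-preference dataset with invented column names – removing the attributes associated with stereotypes before any model sees the data might look like this:

```python
# Minimal sketch, assuming a hypothetical benefits-preference dataset.
# All column names and values are invented for illustration.
import pandas as pd

employees = pd.DataFrame({
    "age": [24, 51, 33, 45],
    "gender": ["F", "M", "F", "M"],
    "salary_band": [2, 4, 3, 4],
    "commute_minutes": [40, 15, 25, 30],
    "chose_income_protection": [1, 0, 1, 0],
})

# Strip the attributes most associated with stereotyping before training;
# what remains are behavioural and contextual signals.
PROTECTED = ["age", "gender"]
training_data = employees.drop(columns=PROTECTED)

print(training_data.columns.tolist())
# ['salary_band', 'commute_minutes', 'chose_income_protection']
```

Dropping the columns is the easy part; remaining features can still act as proxies for age or gender (tenure, for instance, tends to track age), which is why the historical data itself also needs the honest scrutiny described above.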

What are the take-aways?

There’s nothing to say AI cannot be a powerful ally in the fight against unconscious bias. But to get to that point, employers must ensure they are feeding these systems unbiased data. When it comes to building and managing that AI tech, work with technology providers who put ethics and diversity at the centre of their development. Eliminate historical bias from your data, confront your workplace biases, and even question your own unconscious bias.

CTA: HR’s personal assistant

What about AI that can help fight unconscious bias? Check out our latest blog to explore upcoming AI and its place in HR’s toolbox.