Need to know:
- Artificial intelligence (AI) systems like ChatGPT can automate and streamline processes, freeing up HR and leadership time for more nuanced and creative tasks.
- Benefits communications can be personalised, and their delivery optimised for each individual employee, to have the best possible impact.
- Employers must be careful with sensitive data, as well as ensuring that AI systems do not exacerbate existing biases.
ChatGPT, the latest in artificial intelligence (AI) developments, has stormed the headlines with its ability to seemingly think and communicate in a way that is eerily human.
While it is hard to avoid stories that focus on concerns about the direction of technology, it is worth realising that this onward march is inevitable. The difference between success and failure will lie in how it is implemented and monitored.
As with any such development, what takes consumers by storm will soon find its way into the world of employment.
The secret to good comms
Businesses have been using AI to communicate with staff for some time. Unilever discussed how its chatbot was reimagining the employee experience as far back as 2018, when this was the subject of Employee Benefits Live’s closing keynote.
However, ChatGPT is not just another automated service. Instead of simply regurgitating information when triggered by certain prompts, it hints at the possibility of a real-feeling conversation with something that thinks like a person but has instant access to a far greater wealth of information than even the best HR professional has at their fingertips.
Ryne Sherman, chief science officer at Hogan Assessment Systems, says: “Every organisation has a whole bunch of knowledge. A lot of that lives inside the people who work there, some is buried in policy documents, some is based in archives and white papers; things that are available but few people know how to access them or have ever read them.
“Imagine having an employee benefits brain that has all of the knowledge of [an] organisation, and somebody who’s new can come in and ask questions.
“There’s a really big opportunity to have this knowledge base that is organisation-specific. This is a great tool.”
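How might such an organisation-specific ‘benefits brain’ be wired up? The sketch below is a minimal, hypothetical illustration, not any vendor's product: it assumes a store of internal policy documents, uses a crude word-overlap score to find relevant excerpts, and hands them to a stand-in ask_model() function representing whichever large language model the employer uses.

```python
# Minimal, hypothetical sketch of an organisation-specific Q&A assistant.
# The scoring and ask_model() are illustrative stand-ins, not a real product.

def ask_model(prompt: str) -> str:
    """Stand-in for a call to the employer's chosen large language model."""
    raise NotImplementedError("wire this to a real LLM API")

def relevance(question: str, document: str) -> int:
    """Crude relevance score: how many question words appear in the document."""
    return sum(1 for word in set(question.lower().split())
               if word in document.lower())

def answer_benefits_question(question: str, policy_documents: list[str]) -> str:
    # Pick the three most relevant internal documents...
    context = sorted(policy_documents,
                     key=lambda doc: relevance(question, doc),
                     reverse=True)[:3]
    # ...and ground the model's answer in the organisation's own material.
    prompt = ("Answer using only the policy excerpts below.\n\n"
              + "\n---\n".join(context)
              + f"\n\nQuestion: {question}")
    return ask_model(prompt)
```

In practice the word-overlap score would be replaced by proper semantic search; the point is the shape of the approach: retrieve organisation-specific material first, then ask the model.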
ChatGPT can also provide the kind of communications that engage the reader as a specific person, rather than a generalised estimation of what an employee wants to hear, helping to draft emails and messages tailored to appeal to the end reader.
“Historically, we thought of the cognitive processing part, interpreting and giving feedback on information, as unique to humans,” says Sherman.
“But with AI we’re seeing machines perceive information, infer, and make recommendations in much the way humans would do in the HR space.”
Debra Clark, head of wellbeing at Towergate Health and Protection, adds: “The secret to good utilisation of benefits and wellbeing support is personalised, bite-sized chunks of relevant communication. There’s every chance that ChatGPT could support that.
“[We would] also be able to see what people were engaging with, which elements of articles were having more of an impact, and therefore generate or embellish content.”
This ability to provide engaging, personalised communications which help employees access, and, crucially, understand their benefits is only becoming more important in the face of trends such as the ‘Great Resignation’.
Samantha Carr, director of Slalom, says: “It’s important to engage and both understand and meet [employees’] needs, understanding the employee experience throughout the lifecycle, the moments that matter and how that relates to the [employee] experience, so that [employers] can maximise [their] impact.”
This also works the other way, with AI systems like ChatGPT providing the chance for employees to feed back in a more nuanced, representative way.
This might even include listening in to meetings, such as focus groups and employee forums, and providing real-time insight into staff wants and needs, adds Carr.
“It’s about giving greater opportunity to effectively listen through more mediums,” she explains.
“Typically, we’ve relied on engagement surveys and some focus groups, but the potential inputs start to expand massively, to identify trends, things that are important, things that employees are talking about or care about.”
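As a toy example of what that expanded listening might look like, the sketch below counts benefit-related themes across free-text inputs such as survey comments or forum posts. The themes and keywords are invented for illustration; a real system would use far richer language analysis.

```python
# Toy illustration of 'listening through more mediums': counting benefit-related
# themes across free-text inputs (survey comments, forum posts, transcripts).
from collections import Counter

THEMES = {
    "pension": ["pension", "retirement"],
    "mental health": ["stress", "burnout", "counselling"],
    "flexibility": ["remote", "hybrid", "flexible"],
}

def tally_themes(comments: list[str]) -> Counter:
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

print(tally_themes([
    "More flexible hours would help with stress",
    "Can we get a pension workshop?",
]))  # Counter({'flexibility': 1, 'mental health': 1, 'pension': 1})
```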
Dealing with the data
This ability to build two-way communication channels can help streamline business processes and could take some of the guesswork out of communications.
Carr points to programmes such as Copilot, which can pull information from Microsoft 365, giving greater understanding of how messages have been delivered in the past and quickly creating draft communications without the need to reinvent the wheel each time. This, alongside information about which messages, mediums and timings are most likely to engage each individual, can automate and add certainty to the entire process.
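A simple sketch of the ‘which mediums and timings’ element, assuming nothing more than a per-employee log of past sends and opens (the field names and data here are invented for illustration):

```python
# Illustrative only: pick the channel and send time most likely to engage an
# employee, based on simple open-rate history. Not any particular product.
from collections import defaultdict

def best_channel_and_time(history: list[dict]) -> tuple[str, int]:
    """history: records like {"channel": "email", "hour": 9, "opened": True}."""
    stats = defaultdict(lambda: [0, 0])          # (channel, hour) -> [opened, sent]
    for record in history:
        key = (record["channel"], record["hour"])
        stats[key][1] += 1
        if record["opened"]:
            stats[key][0] += 1
    # Highest open rate wins; ties broken arbitrarily.
    return max(stats, key=lambda k: stats[k][0] / stats[k][1])

history = [
    {"channel": "email", "hour": 9, "opened": False},
    {"channel": "email", "hour": 9, "opened": True},
    {"channel": "app", "hour": 13, "opened": True},
    {"channel": "app", "hour": 13, "opened": True},
]
print(best_channel_and_time(history))  # ('app', 13)
```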
However, too much data sharing can be a problem, particularly when it comes to sensitive topics, such as healthcare and wellbeing. Clark suggests trialling these communications with more generic topics initially, and perhaps waiting until the technology has been developed to have greater protections built in.
Sherman agrees: “The ‘garbage in, garbage out’ rule totally applies in this space as well and HR folks should be very concerned about the quality of that data. I also wouldn’t be submitting anything that was considered proprietary knowledge or information, at least right now.”
Carr adds: “The return is only as good as the data it is pulling from, so what we’ve been doing is working with organisations to get them ready for what we call the modern culture of data. This is really about how [employers] govern [their] data, how [they] control it, is it clear, who owns it, and how [they] keep it accurate.”
There is also a difference between personal and personalised data; allowing a system to understand trends as to what information is important to an employee is one thing, but sensitive data, particularly around health and wellbeing, must be handled with caution.
Laura Kearsley, partner and solicitor at Nelsons, says: “AI systems such as ChatGPT learn from any information inputted into them and, therefore, confidential information inputted by HR teams could be used by the systems to develop and improve regardless of its confidential nature.
“HR teams have access to all sorts of confidential data about employees, some of which will be special category data for the purposes of the GDPR and employers owe strict duties to keep this data secure and to alert employees if it is to be shared with other data processors. Employees are likely to be cautious about their employers inputting their data into AI systems.”
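A common interim precaution, sketched very roughly below, is to screen text for obvious identifiers before it is submitted to any external system. This regex pass is purely illustrative and is no substitute for the anonymisation and governance that special category data under the GDPR would actually demand.

```python
import re

# Very rough illustration: mask obvious identifiers before text is sent to an
# external AI service. Real sensitive data needs far stronger controls than
# pattern-matching.
PATTERNS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",          # email addresses
    r"(?:\+44\s?|0)\d{4}\s?\d{6}\b": "[PHONE]",     # rough UK phone numbers
    r"\b[A-Z]{2}\d{6}[A-Z]\b": "[NI NUMBER]",       # rough NI number shape
}

def redact(text: str) -> str:
    for pattern, label in PATTERNS.items():
        text = re.sub(pattern, label, text)
    return text

print(redact("Contact Jo at jo.bloggs@example.com about her absence record."))
# Contact Jo at [EMAIL] about her absence record.
```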
The human touch
ChatGPT marks a turning point where AI is becoming more human-like, but the point where it is interchangeable with a person is still far off, if realistic at all, says Clark.
“Wellbeing, benefits and health are still very personal, and a big part of ChatGPT is trying to humanise its responses, but my concern is that it would still need sense-checking,” she says.
“If it came down to an individual’s personal view of their wellbeing or their health, would you trust an AI-generated response? There would have to be checks in place that it wasn’t suggesting all sorts of random things as solutions.”
One thing stopping AI from replacing humans is the difference between creative and innovative thought. ChatGPT and the like can certainly think creatively, often in ways that might not have occurred to humans looking at the same bank of information. However, innovative thought, which means coming up with something entirely new, is still out of their reach.
“[We are] never going to get a different thought, because it’s generated based on existing articles,” Clark explains. “So, [we] need someone to push boundaries and to think outside the box.”
At the moment, AI’s role is about replacing humans on repetitive, time-consuming tasks. People are still vital in terms of training and sense-checking, to ensure AI works as it should, says Sherman.
Kearsley explains: “ChatGPT produces information which will sound plausible, but it is only as accurate as the sources it relies upon which might be incorrect or out of date. Using ChatGPT to produce employee communications might risk these containing inaccurate information and employers need to be alert to this.”
Carr adds: “[Employers are] going to need humans to train these AI tools on things like ethics, [their] organisation, policies, and tone of voice. It’s a journey to educate and train AI. [Employers] absolutely need someone in the interim who is sense-checking.
“It will hopefully start to bring in some efficiencies and productivity, pulling together an outline communication or proposal, that a human being can then intelligently interpret.”
In addition, no matter how well AI might come to perform in a Turing test, employees will still want real, human conversations, particularly for sensitive wellbeing issues, adds Clark.
“People do still like to talk to people about the stuff that really matters,” she says.
Some level of automation can also be counter-productive, says Carr. For example, training an AI to put out updates from a business leader, using their language and tone of voice, can create a personal feel while freeing up leadership for other commitments, but it can also detract from the authenticity that employees are looking for now more than ever.
Diversity and inclusion
While some of the pitfalls of feeding poor data into ChatGPT might be simply missing the mark on communications, there are more concerning implications that have to be considered.
“What these AI machines are really good at is finding very small associations and then exploiting them like crazy,” Sherman explains.
“So if there’s a really small association, say, between gender and being selected for a role, these machines detect that and exacerbate it much more, and [we] end up with huge bias.”
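A toy demonstration of the amplification Sherman describes, using invented numbers: in the synthetic data below, men are selected 52% of the time and women 48%, yet once the fitted model’s probabilities are turned into hard yes/no decisions, that four-point gap becomes an all-or-nothing split.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, invented data: a small gender skew in historical selections.
rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, size=n)               # 0 = female, 1 = male
p_selected = np.where(gender == 1, 0.52, 0.48)    # small 52/48 skew
selected = rng.random(n) < p_selected

# Fit a simple model on gender alone, then make hard 0/1 decisions.
model = LogisticRegression().fit(gender.reshape(-1, 1), selected)
print(model.predict([[0], [1]]))  # [False  True]: 52/48 becomes 0%/100%
```

The model has not invented the bias; it has only sharpened what was already in the data, which is exactly why the quality and balance of what goes in matters so much.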
Employers must, therefore, be aware not only of what data is being fed into a system, but also who is feeding it. A team which itself lacks diversity, or fosters unconscious bias, is going to recreate this, often unwittingly, in any AI they train.
“If it’s always the same type of person putting in the same type of content at the start to then be analysed, [they are] going to get the same type of output,” Clark says.
“So, [employers] still need that diversity of thought at the beginning otherwise [they] are going to get a very narrow sort of output of communications. That’s where there’s got to be some kind of policing.”
When using data in any context, employers are increasingly and acutely aware of the importance of being sensitive to protected characteristics. However, with AI’s ability to make minute connections invisible to the human eye, some characteristics that do not look protected may correlate closely with ones that are, again leading to biased results, warns Sherman.
In addition, the ‘black box’ way in which programmes such as ChatGPT work, where it is not always clear how or why they have reached a particular conclusion, can make it difficult to monitor when these unseen biases are coming into play.
This is not to say that employers should shy away from using these innovations. Indeed, as time goes on, the ‘Amazon effect’ may mean that holding back risks being left behind with the Luddites, and failing to meet employee expectations. Organisations must simply remember not to rush in, but to take a staggered approach, testing and learning as they go, rather than grabbing at AI as the next big thing.