Artificial intelligence is distinguished from simpler forms of automation by its ability to learn from its experiences with people. However, at its core, AI is simply an algorithm programmed by humans.
While an algorithm can’t be inherently prejudiced, the people who create it can be. They might not mean to be – but any unconscious bias or privileged perspective will shape the algorithm they build. Sometimes the problem only surfaces once the system is live. In 2015, both Google and Flickr had issues with their algorithms miscategorising photos, applying racist and offensive tags to some of the images. While AI has improved a great deal since then, problems remain: some virtual assistants, for example, still struggle to understand a variety of accents.
If you’re creating a virtual assistant or other AI, make sure it is unbiased: a system that disadvantages one group of customers alienates the very people it’s supposed to help. AI may be more powerful than automation, but it also carries more risk and more responsibility.
Why bias in AI is a problem for retail
There are some sectors where bias is already accepted in a customer service setting. For example, car insurance premiums are often higher for younger drivers than for older ones, simply because younger people tend to make more claims. But there have also been reports of unacceptable and possibly illegal bias, where insurance companies may have been guilty of racial discrimination when generating quotes.
In retail, bias can largely be managed by training employees. Retailers serve people based on one criterion: their ability to pay for the product or service. If someone can afford the product, whether that’s by credit card, debit card, or a repayment scheme, they can buy it. There may be layers of privilege with things like VIP status and loyalty programmes, but generally people are (or should be) treated equally.
AI is different. Once it’s past the programming stage, true AI can’t be guided. It needs to think for itself and form its own conclusions. You can’t guarantee that it will distinguish what we believe are acceptable traits (such as whether someone can afford something or not) from those traits that we deem unacceptable (personal characteristics such as name, gender, and ethnicity).
True AI needs to interact with humans to learn, but that means it can make mistakes and perhaps form biases of its own. For example, if the first 30 calls it answers from a specific town are all from customers who happen to sound belligerent, it may decide that all customers from that town are belligerent. It could then choose to de-prioritise calls from that town. It may be a logical conclusion from a purely programmatic point of view, but a human customer service specialist would think differently.
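The failure mode described above can be sketched in a few lines. This is a hypothetical, deliberately naive routing heuristic (the `NaiveCallRouter` class and the town name are invented for illustration), showing how a small unrepresentative sample can permanently taint a system’s treatment of a whole group:

```python
# Illustrative sketch (hypothetical): a naive call-routing heuristic that
# forms a lasting bias from a small, unrepresentative sample of calls.
from collections import defaultdict

class NaiveCallRouter:
    """De-prioritises towns whose recorded calls look mostly belligerent."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold            # fraction of flagged calls that triggers de-prioritisation
        self.calls = defaultdict(list)        # town -> list of belligerence flags

    def record_call(self, town, belligerent):
        self.calls[town].append(belligerent)

    def priority(self, town):
        history = self.calls[town]
        if not history:
            return "normal"
        rate = sum(history) / len(history)
        # The flaw: no minimum sample size, so an unlucky streak of 30 calls
        # downgrades every future caller from that town.
        return "low" if rate >= self.threshold else "normal"

router = NaiveCallRouter()
for _ in range(30):
    router.record_call("Springtown", belligerent=True)  # an unrepresentative streak

print(router.priority("Springtown"))
# -> low
```

A human specialist would treat the thirty-first caller on their own merits; the heuristic, as written, never will.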
For retailers, having a biased AI helping to run customer service could result in more dissatisfied customers, as some people start to realise that they aren’t receiving the same level of service as others.
Programmers are developing ways to stop machines from drawing these sorts of erroneous conclusions; it’s the equivalent of teaching children the morality of universal human acceptance. However, the more we try to guide AI, the less it is AI. Instead, it reverts to automation that relies on human input to guide its every decision.
AI is an interesting area of development but, right now, its potential for bias – whether it stems from the biases of its human programmers or its own learnings – means that it’s far too risky a proposition for a customer service function. Automation, supported by a team of human customer service experts, is where retailers should look to invest at the moment. Automated tools excel at handling simple queries, leaving human agents free to provide support for the more complex queries that need the emotional intelligence that machines are yet to develop.
Jack Barmby of Gnatta, which helps retailers to deliver universal customer engagement through specialised software.