The Daily Telegraph

Chatbots have made customer service a living hell

Instead of sorting out our problems, chatbots have turned into imbecilic gatekeepers

- ANDREW ORLOWSKI is on Twitter @andreworlowski

Have you sworn at a robot recently? A few years ago, Twitter users claimed they had discovered a surprise method for frustrated customers to escape from being kept on hold while they waited for a company representative to pick up the phone.

Several people claimed they were fast-tracked to a human customer service agent after swearing or shouting in frustration at a prerecorded message. Soon, many other internet users were reporting the same phenomenon.

Don’t be tempted to swear at a chatbot, for it doesn’t seem to make the slightest difference.

Artificial intelligence chatbots were supposed to banish phone menu hell – the tedious obligation of listening to lengthy IVR, or “interactive voice response”, systems. But instead, we’re in a different kind of hell.

A forthcoming report by the Institute of Customer Service confirms that the public aren’t warming to the new technology. It looked at nine interaction methods people use to engage with banks, retailers, utilities or travel companies. Most of the technologies were actually rated neutral or positive. Except one, that is: the experience of AI chatbots was overwhelmingly negative, with 3pc liking them, and 17pc disliking them.

Chatbots have been touted for years in the business and technology press as ushering in a revolution in customer service. If their promoters had been correct, we’d now be seeing dividends in efficiency, productivity and customer satisfaction. But instead of sorting out our problems, chatbots have turned into imbecilic gatekeepers, a kind of infuriating compulsory 20mph speed limit on customer service.

You may have experienced this already. You type away, and then the first human you encounter will often ask to “check a few details” that you’ve already typed in. That’s because the company can’t trust what the bot reports back.

“We’re at a nadir,” admits Stephen Broadhurst, who helped build IBM’s supercomputer Watson and has implemented bots in dozens of projects.

In 2016, Facebook and Microsoft opened up new platforms so software developers could plug chat gadgets into their websites and consumer apps. These gadgets would start a conversation and direct the consumer efficiently to the right department, or even solve their problems. Google hastily followed suit. We’d already been pleasantly surprised by what Google’s Assistant and Amazon’s Alexa could do. And we already knew people could happily chat for hours with a computer.

Chatbot Hell is a more subtle story than you think, and is not only a parable of how technology is oversold, but of how managers use technology to hide their own failings.

In 1966, Joseph Weizenbaum, a computer science professor at MIT, created a crude language generator that mimicked a psychologist. When you typed something into Eliza, it repeated what you had typed back to you, usually in the form of a question. But Weizenbaum was appalled to discover that people unburdened their intimacies to Eliza, and spent hours in such sessions.

The media historian Dr Simone Natale, author of the book Deceitful Media: Artificial Intelligence and Social Life after the Turing Test, came up with a great description of communicative AI such as Eliza, Siri and chatbots: a “banal deception”.

Years later, Google came up with an astonishing AI demo in which a bot booked a hair appointment, without the human hairdresser realising it was talking to a machine.

“A machine playing the part of a human? Well someone nudge forward the Doomsday Clock, the singularity is almost here,” wrote one very excited reporter at ScienceAlert.

Today, that promise remains unfulfille­d.

Hopes ran far ahead of what could really be achieved, and when the bots went wrong, the errors could be catastrophic. Former health secretary Matt Hancock often enthused in media interviews over a chatbot used by the GP at Hand app, but clinicians fretted that it was giving misleading advice, such as failing to recognise a heart attack. Worse, a French remote healthcare company that was evaluating a chatbot by impersonating a suicidal patient was advised: “I can help with that.”

Don’t expect miracles, one chief technology officer with experience of implementing chatbots tells me: “It’s where web design was in 1996, and most companies don’t know they need a web designer.”

He argues that a well-designed chatbot can nevertheless rapidly perform customer triage, a boon for smaller businesses that can’t afford a call centre. But only if the company works with the limitations of the technology.

Broadhurst agrees, citing the highly efficient Amazon returns process.

Too often, however, companies that preside over sprawling empires of chaotic customer processes simply slap new technologies over them, which is rather like giving a dilapidated house a fresh lick of paint.

“It’s a vicious circle. Companies under pressure are using AI to push people away,” says Broadhurst.

Outsourced call centre staff are already kept on a tight leash, often unable to escalate issues and walled off from useful company knowledge. A bot isn’t going to improve that. “If the human can’t help, then don’t get the bot involved.”

Technology cannot adequately replace human contact where issues require sensitivity, discretion and judgment, says the Institute of Customer Service’s chief executive, Jo Causon. I’d tend to agree – which doesn’t make me a Luddite, just a techno-realist.
