PC Pro

Customer AI madness


Let’s agree on a few ground rules on the whole matter of “computer says no”. I’m talking about the growing need for people to understand the limitations of labelling everything that makes a decision as some sort of “AI” device. Just ranting about the awful customer service you’ve received and blaming AI won’t help either you or the company understand where the problems are coming from. In effect, getting to the bottom of an AI-caused complaint turns you into one of the company’s developers.

That said, we should give some benefit of the doubt to the people and systems on the other side. The end of official lockdown means a lot of workers who have been away from the front line are coming back after a protracted break, and that makes for a rather bumpy process of getting up to date. Nearly two years is enough time for a whole lot of changes to be made that, with everyone 100% attentive and present, would have been naturally absorbed into the working day. With WFH, though, the information flow is less continuous and not put to immediate use.

It’s easy to characterise this as a training issue – oh, just have a course author continuously update the onboarding module – but apart from the painful cognitive load this imposes, there’s also the strange need to go through a process I’m beginning to call “handholding”. Not of the humans; of the machines.

Let me put it like this. When an AI system doesn’t do what it was expected to do, what can the harassed helpline operator do about it? Not much, unless they can see what the machine was working with (the data), and then step forward and back through the moves the AI made to see if some rogue tick box or ignored OK button has put the transaction at risk. This is a debugging tool, and developers have had them forever – but we weren’t expecting to find they’re necessary in an out-there, customer-facing Web 2.0 ecommerce system.
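By way of illustration, here’s a minimal sketch of the kind of step-forward, step-back decision trace I have in mind. The rule names and data are entirely made up, and this isn’t any vendor’s actual tooling – just the shape of the thing:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Records each rule an automated system applies, so a human
    operator can step forward and back through the transaction."""
    steps: list = field(default_factory=list)
    pos: int = -1

    def record(self, rule: str, inputs: dict, outcome: str) -> None:
        # Append a step and move the cursor to it.
        self.steps.append({"rule": rule, "inputs": inputs, "outcome": outcome})
        self.pos = len(self.steps) - 1

    def back(self) -> dict:
        # Step backwards through the decisions (stops at the first).
        if self.pos > 0:
            self.pos -= 1
        return self.steps[self.pos]

    def forward(self) -> dict:
        # Step forwards again (stops at the last).
        if self.pos < len(self.steps) - 1:
            self.pos += 1
        return self.steps[self.pos]

# A hypothetical stopped-card transaction, replayed after the fact:
trace = DecisionTrace()
trace.record("address_check", {"postcode": "AB1 2CD"}, "pass")
trace.record("spend_pattern", {"amount": 950.0}, "flagged")  # the rogue step
trace.record("card_status", {"flagged": True}, "card stopped")

step = trace.back()  # the operator steps back to the check that flagged it
```

Nothing clever, but it’s exactly what the helpline operator doesn’t have today: a record of which check fired, on what data, in what order.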

I can find people who would treat a debugger being left in a system when it enters production as grounds for an enquiry and some sackings. However, they’re old-school managers and this is a new-school problem. It’s increasingly clear that AIs involved in what I think of as transactional scutwork in modern commerce or banking need to take transparency to a new level, so the humans stand some chance of working out what their new masters have actually been doing. It’s when things go wrong that we learn the most, by opening up the guts of the machine and watching it tick. Excuse the mixed metaphor!

I understand that some types of AI are not amenable to this kind of review. I was being flippant talking about opening the guts there, but if you’re dealing with the machine learning type of AI – systems that function much like the 302 neurons that make up the nematode worm’s brain – then there is no concept of a debugger. You can’t open up or step through any running code, because there’s no sequence of statements to step through: either a neuron fires and outputs a signal, or it doesn’t.
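To see why, here’s a toy illustration – a single artificial neuron, nothing like a production model. The weights and inputs are invented for the example:

```python
import math

def neuron(inputs: list, weights: list, bias: float) -> float:
    # A weighted sum squashed through a sigmoid. The only observable
    # behaviour is the final activation: there are no intermediate
    # statements, branches or tick boxes for a debugger to pause on.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Does it "fire" for this input? Yes or no is all you get back.
fires = neuron([0.9, 0.2], [1.5, -0.7], -0.3) > 0.5
```

Multiply that by millions of weights and you can see the problem: the “logic” is smeared across the numbers, not written down anywhere a helpline operator could read it.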

Hopefully this means that false claimants of AI wonderfulness will wind their necks back in when it comes to machine learning AI and “just trust the machine”. I don’t want to spend more time on hold waiting for a resolution to my stopped credit card when they can’t even work out which system has to have its hand held to explain its logic to operator and customer alike.

ABOVE To avoid facepalms, human workers need to be able to see what their AI systems are actually doing
