One of the most remarkable features of artificial intelligence is just how quickly it’s being adopted across a wide range of industries and products. The nature of AI is that it enables new ways to solve old problems (for example, language translation), or creates entirely new innovations that will shape our world forevermore (hello, autonomous vehicles).
This is especially true for AI utilising deep learning and neural networks, which can become so complex in their manipulation of data that, although the result that pops out the other end appears to be what we want, we don’t always know how that result came about.
It’s called the explainability problem, and it’s a bit of a concern. There are two key issues here. The first is that without understanding how the result was achieved, it’s harder to know when and where the AI gets it wrong, why one decision was chosen over another, and - perhaps most importantly - to trust that the AI is doing what was intended. Imagine, for example, bank loans being processed by an AI: your application is turned down, and if you could ask why, it would just shrug its virtual shoulders. It’s not being obtuse, it just can’t explain how it got there.
The second is that the AI itself is, at this stage with deep learning, a function of data models and enormous amounts of training data. Image recognition that can label a picture of a dog, for example, is arrived at through thousands of sample images as the system learns to recognise patterns of data points that represent a dog. The result, however, is just a probability. It doesn’t ‘see’ a dog; the data of the image of a dog just ‘looks like’ the data of other images of dogs.
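To make that concrete, here’s a rough sketch (in plain Python, with made-up labels and scores rather than any real network) of what a deep learning classifier actually hands back - not ‘a dog’, just a probability for each label it knows.

```python
import math

# Hypothetical raw scores (logits) a trained network might produce for
# one image - the numbers here are invented purely for illustration.
logits = {"dog": 4.2, "cat": 1.1, "fox": 0.3}

# Softmax turns the raw scores into probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probabilities = {label: math.exp(score) / total for label, score in logits.items()}

print(probabilities)
# Roughly {'dog': 0.94, 'cat': 0.04, 'fox': 0.02} - the network never 'sees'
# a dog; it just reports that this data looks most like its 'dog' examples.
```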
In an emerging field appropriately called explainable artificial intelligence (XAI), being pioneered by DARPA (the US Defense Advanced Research Projects Agency), the goal is to build AI utilising explanatory models of real-world phenomena - that is, to be able to describe aspects of the world through the models, and utilise these to arrive at decisions. In the example of the image of a dog, it might understand what fur looks like, what tails look like and what dog ears look like, and upon seeing all of these in an image conclude that it’s an image of a dog. If you were to ask it how it arrived at its decision, it could say the image has fur, a tail, and dog ears.
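As a toy illustration only (the feature names and the rule are mine, not DARPA’s), a classifier built this way could hand back its evidence alongside its answer:

```python
# Toy feature-based classifier - the detected features would come from
# hypothetical fur/tail/ear detectors; everything here is illustrative.
DOG_FEATURES = {"fur", "tail", "dog ears"}

def classify(detected_features):
    matched = DOG_FEATURES & detected_features
    if matched == DOG_FEATURES:
        return "dog", "it has " + ", ".join(sorted(matched))
    return "not sure", "only saw " + (", ".join(sorted(matched)) or "nothing familiar")

label, explanation = classify({"fur", "tail", "dog ears", "collar"})
print(label, "-", explanation)  # dog - it has dog ears, fur, tail
```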
There’s another advantage of this design. Right now, in order to train deep learning AI for tasks like image recognition, language translation or recommendation engines (think Spotify or Netflix), tens of thousands, or sometimes hundreds of thousands, of examples need to be sourced and processed in order to train and refine the models to output an intended result. But with XAI, the theory goes that - as with humans - because the AI will be built with models that describe the world, it will be able to learn a task with just a handful of samples. To continue the dog example, it might be shown just a few images of a cat and be told that this is a cat, and it will adapt its knowledge of dogs - fur, tail, dog ears - to cats, noting that cats have fur and a tail too, but the ears are different and so are the eyes. And now it can recognise cats and dogs.
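Continuing that toy sketch (again, entirely hypothetical feature sets, not a real XAI system), teaching it a new class from a handful of examples might look something like this:

```python
# Features the toy model already knows, carried over from the dog example.
KNOWN = {"dog": {"fur", "tail", "dog ears"}}

def learn_from_examples(name, base, examples):
    """Learn a new class from a few labelled examples, reusing what the
    model already knows about a related class."""
    shared = set.intersection(*examples)   # features every example has
    carried_over = KNOWN[base] & shared    # reused from the base class
    new_features = shared - KNOWN[base]    # what makes the new class different
    KNOWN[name] = carried_over | new_features
    return carried_over, new_features

# Three made-up 'cat' examples - fur and tail carry over from dogs,
# while cat ears and cat eyes are the new distinguishing features.
cat_examples = [
    {"fur", "tail", "cat ears", "cat eyes"},
    {"fur", "tail", "cat ears", "cat eyes", "whiskers"},
    {"fur", "tail", "cat ears", "cat eyes"},
]
carried, new = learn_from_examples("cat", "dog", cat_examples)
print("reused from dogs:", carried)   # {'fur', 'tail'}
print("new for cats:", new)           # {'cat ears', 'cat eyes'}
```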
It’s easy to see how, a little further down the track, we’ll start to see general AI built around XAI - for if we are to have android companions assisting us in the workplace or home, as so many sci-fi books and movies have stoked our imaginations for so long, they will need to be able to process the world around us in much the same way we do. And also, perhaps, explain how they think so we know we can trust them.
Until, of course, they go all Skynet. But that’s for another column!
In the future, Terminators will be able to explain why they are hunting you down.
ASHTON MILLS has been writing about technology for 20 years and still gets excited about the latest techy gear. He’s also the Outreach Manager for the Australian Computer Society (www.acs.org.au), and you can email him on ashton.mills@acs.org.au.