INVESTIGATOR

PC & Tech Authority


One of the most remarkable features of artificial intelligence is just how quickly it's being adopted across a wide range of industries and products. The nature of AI is that it's enabling new ways to solve old problems (for example, language translation), or creating entirely new innovations that will shape our world forevermore (hello, autonomous vehicles).

This is especially true for AI utilising deep learning and neural networks, which can become so complex in their manipulation of data that, although the result that pops out the other end appears to be what we want, we don't always know how that result came about.

It's called the explainability problem, and it's a bit of a concern. There are two key issues here. The first is that without understanding how a result was achieved, it's harder to know when and where the AI gets it wrong, why one decision was chosen over another, and, perhaps most importantly, whether to trust that the AI is doing what was intended. Imagine, for example, bank loans being processed by an AI: your application is turned down, and if you could ask why, it would just shrug its virtual shoulders. It's not being obtuse; it simply can't explain how it got there.

The second is that the AI itself is, at this stage of deep learning, a function of data models and enormous amounts of training data. Image recognition that can label a picture of a dog, for example, is arrived at through thousands of sample images as the system learns to recognise patterns of data points that represent a dog. The result, however, is just a probability. It doesn't 'see' a dog; the data of an image of a dog just 'looks like' other data from images of dogs.
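The 'just a probability' point can be sketched in a few lines of toy Python. This is nothing like a real neural network, and every label and score here is made up for illustration: the idea is only that a classifier ends in raw scores that get squashed into probabilities, so the output is never "this is a dog", just "dog is the most likely label".

```python
import math

# Toy illustration: raw scores from a (pretend) classifier are turned
# into probabilities with the softmax function. The labels and scores
# below are invented for this example.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["dog", "cat", "toaster"]
scores = [4.2, 1.1, -3.0]  # made-up raw outputs for one image

probs = softmax(scores)
prediction = labels[probs.index(max(probs))]
print(prediction, round(max(probs), 3))  # "dog" wins, but only probabilistically
```

Even a very confident answer is still a number between 0 and 1, not an understanding of what a dog is.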

In an emerging field appropriately called explainable artificial intelligence (XAI), being pioneered by DARPA (the US Defense Advanced Research Projects Agency), the goal is to build AI utilising explanatory models of real-world phenomena, that is, to describe aspects of the world through the models and use these to arrive at decisions. In the example of the image of a dog, it might understand what fur looks like, what tails look like, and what dog ears look like, and upon seeing all of these in an image conclude that it's an image of a dog. If you were to ask it how it arrived at its decision, it could say the image has fur, a tail, and dog ears.
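As a purely hypothetical sketch (the feature names and logic here are invented for illustration, not DARPA's actual approach), an XAI-style classifier might decide from named features, so its explanation is simply the evidence it used:

```python
# Hypothetical sketch of the XAI idea: decide from named features,
# so the decision can be explained in the same terms it was made.
DOG_FEATURES = {"fur", "tail", "dog ears"}

def classify(observed_features):
    """Return a label plus the evidence behind it."""
    matched = DOG_FEATURES & observed_features
    if matched == DOG_FEATURES:
        return "dog", sorted(matched)  # the explanation IS the evidence
    return "unknown", sorted(matched)

label, evidence = classify({"fur", "tail", "dog ears", "collar"})
print(label, "because it has:", ", ".join(evidence))
```

Because the decision is built from human-readable parts, "why?" has a straightforward answer, which is exactly what the deep-learning black box lacks.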

There's another advantage to this design. Right now, in order to train deep-learning AI for tasks like image recognition, language translation, or recommendation engines (think Spotify or Netflix), tens of thousands, or sometimes hundreds of thousands, of examples need to be sourced and processed to train and refine the models to output an intended result. But with XAI, the theory goes that, as with humans, because the AI will be built with models that describe the world, it will be able to learn a task from just a handful of samples. To continue the dog example, it might be shown just a few images of a cat and be told that this is a cat, and it will adapt its knowledge of dogs (fur, tail, dog ears) to cats, noting that cats have fur and a tail too, but the ears are different and so are the eyes. And now it can recognise cats and dogs.
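That few-shot idea can be caricatured the same way (again, an invented illustration, not a real system): start from what the system already 'knows' about dogs, then describe a cat by what carries over and what is new.

```python
# Invented sketch of few-shot learning via shared features: a new
# category is described as an old one plus its differences.
KNOWN = {"dog": {"fur", "tail", "dog ears"}}

def learn_from_example(name, observed, base="dog"):
    """Learn a new category by comparing one example against a known one."""
    shared = KNOWN[base] & observed   # carries over from dogs (fur, tail)
    novel = observed - KNOWN[base]    # what's different (cat ears, cat eyes)
    KNOWN[name] = shared | novel
    return shared, novel

shared, novel = learn_from_example("cat", {"fur", "tail", "cat ears", "cat eyes"})
print("cat shares", sorted(shared), "but has", sorted(novel))
```

One example, plus an existing model of the world, stands in for the thousands of labelled images deep learning would normally need.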

It's easy to see how, a little further down the track, we'll start to see general AI built around XAI, for if we are to have android companions assisting us in the workplace or home, as so many sci-fi books and movies have stoked our imaginations with for so long, they will need to be able to process the world around us in much the same way we do. And also, perhaps, explain how they think, so we know we can trust them.

Until, of course, they go all Skynet. But that's for another column!

In the future, Terminators will be able to explain why they are hunting you down.

ASHTON MILLS has been writing about technology for 20 years and still gets excited about the latest techy gear. He's also the Outreach Manager for the Australian Computer Society (www.acs.org.au); you can email him at ashton.mills@acs.org.au.
