Friends with benefits?

For the techno-optimists, artificial intelligence may well be as close as we get to a super power. But, for the technopessimists, the rise of artificial intelligence could be hastening our own demise. So is this burgeoning ‘super power’ a blessing or a curse?

Idealog - ARTIFICIAL INTELLIGENCE - Maya Breen.

We’ve all dreamed of having a super power at some point in our lives. As a child you may have longed to fly out of your bedroom window into the night sky like Peter Pan. Perhaps you wanted to read other people’s minds, live forever, or turn the clock back to reverse a regret or to save a life.

That’s not going to happen. But with the rise of artificial intelligence, some believe we finally have an opportunity to augment our human experience and create a true super power.

As a report from Chapman Tripp and the Institute of Directors called ‘Determining our future: Artificial Intelligence’ says: “The goal of much AI research is to push forward the boundary of machine intelligence with the eventual goal of creating artificial general intelligence – a machine that could successfully perform any intellectual task in any domain that a human can.”

For many, the idea of a machine performing a task as well as or, worse still, better than a human is a chilling proposition. But even if you’re in this concerned camp, the spread of artificial intelligence as it seeps deeper into all of our lives is, as Kevin Kelly’s book puts it, inevitable. There is too much economic incentive. But, as history has shown, technological advances are not without their dangers. So can we get the balance between man and machine right?

BOUNDLESS OPPORTUNITY

So what exactly is artificial intelligence? You’ve probably heard terms like AI, machine learning and deep learning spouted every way you turn these days. And while they are all intertwined, they are not the same.

In short, deep learning is part of machine learning, which is part of AI. Intel’s Nidhi Chappell, head of machine learning, puts it succinctly when she says: “AI is basically the intelligence – how we make machines intelligent – while machine learning is the implementation of the computing methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter.”
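To make that distinction concrete, here is a minimal, purely illustrative sketch (not from the article) using the open-source scikit-learn library: the ‘machine learning’ is an algorithm fitting a rule to labelled examples, and the resulting ability to judge examples it has never seen is the sliver of ‘intelligence’ it produces.

```python
# A toy machine learning example: the algorithm learns a rule from
# labelled examples instead of being explicitly programmed with one.
# (Illustrative only; the data and task are invented for this sketch.)
from sklearn.tree import DecisionTreeClassifier

# Training data: [hours of sunshine, mm of rain] -> good fruit-picking day?
features = [[8, 0], [7, 2], [2, 30], [1, 25], [9, 1], [3, 20]]
labels = [1, 1, 0, 0, 1, 0]  # 1 = good day, 0 = bad day

model = DecisionTreeClassifier()
model.fit(features, labels)      # the "learning" step

# Prediction for a day the model has never seen.
print(model.predict([[6, 5]]))   # e.g. [1]
```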

AI and machine learning already influence many aspects of our lives – from facial recognition to automated trading to voice-activated assistants to recommendation engines – and they’re set to impact many more in the coming years. New Zealand aims to be a keen surfer on this technological wave, and Science and Innovation Minister Paul Goldsmith launched The AI Forum of New Zealand (AIFNZ) in Wellington in June.

The chair of the new AI Forum, an initiative by NZTech, is Stu Christie, who is also an investment manager at NZ Venture Investment Fund with close to 30 years of industry experience behind him. So why launch the organisation now? He puts it down to a few things: the collection of massive amounts of data, the ability to process data at that scale, advances in machine and deep learning, and advances in sensor technology.

“So there’s been a whole bunch of different technologies and the capacity to be able to process that technology which is now bringing that to the fore,” Christie says. “So all those components are coming together to be able to make [AI] happen.”

SCIENCE NON-FICTION

Ian Watson, an Associate Professor in Computer Science at the University of Auckland, has over 20 years’ expertise in AI, and he says he initially got into the field through a childhood interest in science fiction.

“When I went into computer science the only real area of computer science that interested me was AI,” he says.

He predicts New Zealand will see a lot of applications for AI in agriculture.

“We are now at the point where we can see that there will be robots, for example, that could run a whole milking shed and you wouldn’t need the milker there. We can see robots now that would be capable of picking fruit, which of course would have a lot of impact on seasonal work.” Before too long, he says, it’ll be drones inspecting the fence lines and monitoring stock rather than farmers.

Unlike Watson, Chris Auld, the director of Developer Experience at Microsoft NZ, says he’s a data guy – but he’s also a technologist, business strategist and a Microsoft Most Valuable Professional (MVP) who happened to train as a lawyer. And for those of you who have ever been caught in Auckland traffic, he’s got some good news: Microsoft is in the early stages of a project with Auckland Transport to try to alleviate the gridlock.

“We’re talking with them about these sorts of technologies and their potential to help with congestion monitoring, congestion modelling, congestion alleviation – the ability to look and see through this image or video analysis where congestion might be and then to make intelligent decisions about how we change traffic light timings and work to reroute the network to ease that congestion.

“So there are huge opportunities in that sort of simulation and modelling. We have an initiative that we’re running around the world focused on traffic management and also traffic safety, driven by artificial intelligence and machine learning.”
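As a purely illustrative sketch (invented for this piece, not Microsoft’s or Auckland Transport’s actual system), the decision step Auld describes could be as simple as reweighting a fixed green-light budget toward the busiest approaches, with the vehicle counts supplied by that image or video analysis:

```python
# Toy congestion-responsive signal timing: split a fixed green-time
# budget across intersection approaches in proportion to measured
# queue lengths. (Hypothetical logic; real systems are far richer,
# and rounding means phases only approximately fill the cycle.)

def allocate_green_time(vehicle_counts, cycle_seconds=90, min_green=10):
    total = sum(vehicle_counts.values()) or 1
    budget = cycle_seconds - min_green * len(vehicle_counts)
    return {
        approach: min_green + round(budget * count / total)
        for approach, count in vehicle_counts.items()
    }

# Example: the heavy northbound queue earns the longest green phase.
print(allocate_green_time({"north": 42, "south": 18, "east": 7, "west": 5}))
```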

SIZE MATTERS


Christie says the world is waking up to AI, but, because New Zealand is small and agile, we’re less encumbered by structural issues in terms of our economy and more able to embrace the changes.

“We have an open labour force; we are easy to do business with; we’re a heavily connected first world country but small enough also to be able to collaborate very closely together.”

He points out New Zealand is not at the leading edge of AI, as the deep research and development is largely being done by the tech giants offshore.

“So we’ve got to recognise our position in the market and actually leverage sustainable, competitive advantages that we may have,” he says, explaining the opportunities are biggest in agriculture, manufacturing, infrastructure and transportation.

However, he does give special mention to Soul Machines (see profile page 97), which is developing remarkably lifelike avatars that display emotional intelligence.

“They are a standout for New Zealand right now. It’s just incredible what they are doing, revolutionising that particular touch point, that customer interface. It’s also enlightening people in terms of what a digital employee may be.”

ARTIFICIAL SWEETENERS

Mark Rees, general manager of product – small business at accounting software giant Xero, says what is so exciting about AI is “often you don’t have the ability to look at everything apart from the averages, but with some of these tools you can really see what is the underlying structure in the data, which is really fascinating. It’s like discovery; it’s revealing, like archaeology.”

While there is plenty of chatter about the potential for automation to take jobs, he says AI is set to change the accounting process for the better: in around five to ten years, low-value, commoditised data entry will be low-friction, perhaps even completely automated, freeing accountants to do more productive things.

“We provide really smart alerting recommendations that help business advisers optimise the performance of their business customers. That’s what they focus on, not the mechanical side of data entry or tax preparation, but the machines are really helping the business advisers give really smart advice to their customers and the businesses are run better because of that.”

Although building AI into the business offering will help Xero’s advisers, it’s a disruption to them too.

“Our strategy is that we want to help the accountants change their business into more high-value services – it is a disruption and with any disruption, people have to make choices about how they respond to that, but I think it does provide a real opportunity for them to adapt their businesses and focus on business advice … I think the misconception is that it’s something radically new when it’s progressively been baked into our experiences.”

BE CAREFUL WHAT YOU WISH FOR

Sarah Hindle is general manager of Tech Futures Lab, whose founder, Frances Valintine, also sits on the AIFNZ board. Hindle has advised CEOs throughout her career on how to remain ahead of the competitive curve when rapid change is imminent. She also studied philosophy at university, so she takes a slightly different, more holistic view of this shift and the impact it may have on our human existence.

“I think what is becoming really clear now is actually we are computers, and AI is showing us that the space between our ears is actually not that much different from something that we can create with a machine.”

Because of the rapid developments in this area, she says it is vital that we start having a conversation “of a nature that we have never had at any other point in history, which is how do we really want to live our lives? Do we need to be working 9-5? What does the purpose of life look like? How might we survive without getting an income five days a week? What other options does that open up for us as a civilisation? I think that’s the most exciting thing – just as a trigger for reconsidering our whole existence.”

Tech Futures Lab launched in July 2016 and, with many of its partners also very involved in AI development, Hindle says it has worked with 3,000 people and 250 companies across every sector to ‘agitate’ that conversation.

“Of course you want to give people the security that it’ll all be fine, but I really think it’s in our hands as to whether we really make this the greatest thing that humans have ever done by really having a chance to recast that social contract and what it looks like for us as humans and eliminate poverty and solve diseases and have a life where we do what we want. Or, we could really muck it up and have a very split society.”

Personally, she doesn’t believe that everyone will slide into a new job once they have been booted out of their old one by a cheaper, more efficient machine.

“I think actually what we are going to need to do is figure out a way whereby we don’t all have to be employed 40 hours a week to survive as dignified human beings. We need to have a very different conversation about what it means to be a valuable member of society and to be a human, so I think that my greatest reservation is our ability and knowledge and willingness to have those conversations and to have them quickly enough so that people can live a good life.”

The report by Chapman Tripp and the Institute of Directors also indicated that lower socioeconomic communities would be the ones most likely to feel the effects of AI development, with low-skilled and repetitive jobs at the highest risk of being taken over by technology.

Another recent report, by The Royal Society, stated 35 percent of jobs in the United Kingdom could have more than a 66 percent chance of succumbing to automation in the next few decades. But it also said “common ground on the nature, scale, and timing of potential changes to the world of work as a result of machine learning is hard to find”, so, at present, there are only guesses.

Jeremy Howard, founder of fast.ai and a deep learning researcher with 25 years of machine learning study behind him, explored “the wonderful and terrifying implications of computers that can learn” in a TED talk in December 2014.

“The machine learning revolution will be very different. The better computers get at intellectual activities, the more they can build better computers to get better at intellectual activities. So this is going to be a change the world has never seen before, so your previous understanding of what’s possible is different.

“Computers right now can do the things that humans spend most of their time being paid to do, so now’s the time to start thinking about how we’re going to adjust our social structures and our economic structures to be aware of this new reality.”

FIND AND REPLACE

Associate Professor Watson says a major threat is the wider societal impact resulting from advances in AI.

“It’s all very well for an individual company to decide to lay off a third of its workforce – but then if every company in that sector decides to lay off a third of their workforce then suddenly you’ve got an awful lot of people who don’t have jobs to go to, and that is potentially catastrophic.

“Of course, if it’s left to individual companies to make decisions then they have to make decisions based on their bottom line – on their return to shareholders; that’s their responsibility. So really society as a whole needs to think about this and think about the impacts.”

Many believe one of America’s most common jobs – driving trucks – could soon be extinct due to the rise of autonomous vehicles. So what will those millions do? To address this, the Forum’s Christie says we need to make sure we have “a reiterative education system so that people can retrain in their lives and do that in ways which can get them up to speed and adaptable and an accepting society which does accept that essentially people are going through that process – the investment will also have to come from businesses, not just the individuals to carry the burden of that retraining”.

“The real opportunities here aren’t removing people from the loop, they are giving people better tools to make person-to-person interactions better,” adds Microsoft’s Auld.

Auld agrees with Christie that New Zealand is well positioned to navigate this shift due to the close relationship between its citizens and government. “We had the Minister presenting [at the Forum launch]; you can bump into the Prime Minister at the airport. We don’t have many countries in the world that are like that. We have a country that is really amenable to flexible, adaptable, smart regulation, so I think that’s going to be key.”

EXISTENTIAL THREATS

In Seattle earlier this year, the annual three-day Microsoft Build conference took place. Curiously, Microsoft CEO Satya Nadella opened it with a frank warning to the deep technologists in attendance about creating a dystopian reality not dissimilar to George Orwell’s 1984. Billionaire entrepreneur and SpaceX/Tesla CEO Elon Musk has gone as far as to say AI is “our greatest existential threat”, while Professor Stephen Hawking has warned humans will be helpless to compete with AI and will ultimately be ‘superseded’.

During Techweek this year, Watson gave a lecture exploring the questionable impacts and ethical implications of AI, and he says there is one area that AI should be forbidden to enter.

“I think probably the only area that one would definitely say you don’t want AI is in terms of autonomous weapons systems – definitely not. It’s perfectly feasible now that those systems could acquire their own target and be allowed to fire rockets, but there’s a large number of people who think that shouldn’t be permitted, that there should always be a person in the loop who can be held responsible for making the decision,” he says. “Why would we want to release weapons out there that can make their own decisions as to whether or not they should shoot us?”

Microsoft’s Auld also says autonomous weapons systems are a great example because “there’s something unique about going to war. It requires a human, someone to make moral decisions. I think that we should avoid putting machines into places where they have to make moral decisions, because they can’t make moral decisions.”

To Auld, that’s something to be taken advantage of.

“Machines lack the capacity to be racist. Machines lack the capacity to be misogynistic or sexist. Machines just lack the ability to be an arsehole. So we should celebrate that fact. We need to be careful about how we build these machines so that they don’t make biased decisions accidentally. But artificial intelligence is not like humans; it doesn’t have the innate tendency to cast judgement.”

CHECKS AND BALANCES

Auld also attended the Seattle conference, but says the bleak future some are worried about is a long way off.

“The thing about dystopian futures is they’re an extremely long way away,” he says, pointing out that there has been technological disruption and tech-driven unemployment for a long time.

“I think the disruption to people’s lives is probably going to occur less quickly than it has in the past. I think we’ll see the positive benefits accrued far more quickly than we find the negative consequences. But that’s not to say there won’t be negative consequences.”

The AI community here and around the world is working on putting controls in place for its creations. Google’s DeepMind, a world leader in AI research and a company Musk himself invested in, has developed an AI ‘off-switch’. New Zealander Dr Shane Legg is a co-founder of the London startup, which was established in 2010 and snapped up by Google four years on for about £400 million.

The Future of Life Institute launched a programme in 2015 to research AI safety, funded largely by a donation from Musk. The Partnership on AI was formed to explore best practices for AI technologies and as an open platform to discuss the impacts of AI. And non-profit OpenAI is an AI research company furthering a safe path to artificial general intelligence.

Watson mentions Bill Gates’ suggestion that robots should be taxed if they are doing work, just as humans are.

“That tax revenue could obviously be used for social security, but it could also be used as a lever to control how fast automation is rolled out – if the tax is quite high then the AIs are not as economically efficient, they’re not as attractive. And if the tax is super low then they are very attractive, so policymakers could play with that tax to control how fast or slowly AIs are deployed. I’ve got no idea how governments would tax something like that, but they seem to be perfectly capable of taxing anything they feel like. I’m sure they would be able to think of a way of doing it.”
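To see how that lever works, here is a stylised back-of-envelope example (all numbers invented for illustration): the higher the tax as a share of the displaced wage, the sooner the robot stops being the cheaper option, and the slower firms would swap people out.

```python
# Stylised robot-tax arithmetic (illustrative numbers only).
human_salary = 50_000   # annual cost of employing the person ($)
robot_cost = 20_000     # annual running cost of the robot ($)

for tax_rate in (0.0, 0.3, 0.6, 0.9):   # tax as a share of the displaced wage
    robot_total = robot_cost + tax_rate * human_salary
    cheaper = "robot" if robot_total < human_salary else "human"
    print(f"tax {tax_rate:.0%}: robot costs ${robot_total:,.0f} -> {cheaper} is cheaper")
```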

GREAT POWER, GREAT RESPONSIBILITY

But is it likely that AI will ever reach human-level intelligence? A report from the Obama administration late last year said we won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans” in the next 20 years, but Google’s director of engineering Ray Kurzweil certainly thinks we will.

“By 2029, computers will have human-level intelligence,” Kurzweil said in an interview early this year, during the SXSW Conference in Texas.

Other technologists and visionaries agree with him. IEEE Spectrum asked a number of them, including Rodney Brooks and Nick Bostrom, when we will have computers as capable as the brain, and nearly all said it would happen, though the time frame ranged from ‘soon’ to hundreds of years away.

Microsoft’s Auld says it’s a deeply epistemological question, but adds, “I don’t think we’ll ever get there, and that’s probably a good thing”.

Although Hindle shares the concerns of the likes of Musk and Hawking, she says AI will redefine what it means to be human and what our lives will look like.

“There are lots of scary things about AI and I would be lying if I tried to deny that, but I think what is exciting about it is it almost gives humans a super power – it doesn’t just improve what we’re doing but it kind of gives us this extra capability by being able to access information at a speed that we’ve never had before.”

Geoff Colvin, the author of Humans are Underrated: What High Achievers Know That Brilliant Machines Never Will, is confident humans and AI will live alongside each other. He says the greatest advantage we have over technology is what we already possess and are hardwired to want only from each other – things like empathy, creativity and humour – and that we must develop those abilities. Whether you are dreading a Terminator-style future or dreaming of the ways AI will improve our lives, one thing is certain: AI is already here and only gaining momentum. So, as Hindle says, “we’ve got to move with the machines, not against them, because we can’t stop it”.

