Artificial intelligence in 2017 means respect, not fear

The Globe and Mail (Alberta Edition) - GLOBE FOCUS - MARK KINGWELL, Professor of philosophy at the University of Toronto

Garry Kasparov, former chess world champion and current intellectual-at-large, was in Toronto this week to promote his latest work, Deep Thinking. The book’s narrative is driven by Mr. Kasparov’s era-changing chess matches with the IBM computer Deep Blue in 1996 and 1997. The carbon/non-carbon opponents split their two six-game contests, but Mr. Kasparov’s 1997 defeat is all people remember, an anxiety-inducing moment of machine-over-human superiority.

Since then, Mr. Kasparov has retired from competitive chess, become an outspoken critic of Russian President Vladimir Putin, and written extensively on games, culture, politics and technology. He is a gifted speaker, charismatic and engaging, who reliably makes ironic Terminator and Matrix references when discussing the rise of the machines, about which he is sensible and level-headed.

Nevertheless, fear remains the dominant emotion when humans talk about technological change. Are self-driving cars better described as self-crashing? Is the Internet of Things, where we eagerly allow information-stealing algorithms into our rec rooms and kitchens, the end of privacy? Is the Singularity imminent?

But fright is closely seconded by wonder. Your smartphone makes Deep Blue look, as Mr. Kasparov has said, like an alarm clock. In your pocket lies computing power many orders of magnitude greater than a Cray supercomputer from the 1970s that occupied an entire room and required an elaborate cooling system. Look at all the things I can do, not to mention dates I can make, while walking heedlessly down the sidewalk!

This is familiar terrain. The debate about artificial intelligence is remarkable for not being a debate at all but rather, as with Trump-era politics or the cultural-appropriation issue, a series of conceptual standoffs. Can we get past the typical stalemates and break some new ground on artificial intelligence?

I think we can, and Mr. Kasparov himself makes the first part of the argument. We can program non-human systems, he notes, to do what we already know how to do. Deep Blue won against him using brute-force searches of possible future moves, something human players also do, only far more slowly. But when it comes to things we humans don’t understand about ourselves, and so can’t translate into code, the stakes are different. Intuition, creativity, empathy – these are qualities of the human mind that the mind itself cannot map. To use Julian Jaynes’s memorable image, we are like flashlights, illuminating the external world but not the mechanisms by which we perceive it.

Two things now become relevant. The first is that we are getting better at solving this age-old philosophical conundrum. If, for example, neuroscience and MRI scans are not the complete answer, they do begin to illuminate the brain-consciousness relationship. Contrary to what past philosophers argued, consciousness may be explicable.

At the same time, computers are getting smarter. They can self-correct, using neural networks and reinforcement learning to master things outside their original programming. Another computer, AlphaGo, managed to defeat leading human players of the ancient Chinese game Go, which rewards insight and boldness. If Deep Blue is a bulldozer, AlphaGo is a Formula One racer.

This might sound like the cue for another round of machine-fear versus machine-cheer. But the best voices in the critical literature about technology – Martin Heidegger, Jacques Ellul, Marshall McLuhan – know that the point here is self-understanding, not denunciation. Computers are something that we humans enact and enable. They are more like unruly children than alien visitors. And so we must reflect on what we have wrought, and the ethics of our own complicity.

The reason to be wary of so-called smart appliances is human perfidy and weakness rather than non-human malice. If we surrender our solitude, we impair our ability to maintain a robust public sphere of individual integrity. A camera does not, by itself, watch you; it is the corporation, or the state, or the marketing firm that does that. We have met the enemy and, as so often, he is us.

But the ethics of artificial intelligence do not end there. Knowing right from wrong and weighing rights against responsibilities are among the things we currently do not know how to program. Conscious non-human entities, supposing we ever encounter them, created by us or not, will demand the same respect as the human kind.

In turn, we will be justified in demanding from them the same duties of respect and care. Human-machine encounters will be, maybe unexpectedly, tutorials in how we ought to treat each other – whoever is other.
