BUILDING AN ETHICAL MACHINE

Inc. (USA) - LAUNCH - BY TOM FOSTER

Academics worry about how A.I. will be programmed to navigate ethical dilemmas. Founders of A.I.-driven companies don’t. But they should.

Stefan Heck, the CEO of Bay Area–based Nauto, is the rare engineer who also has a background in philosophy—in his case, a PhD. Heck’s company works with commercial vehicle fleets to install computer-vision and A.I. equipment that studies road conditions and driver behavior. It then sells insights from that data about human driving patterns to autonomous-vehicle companies. Essentially, Nauto’s data helps shape how driverless cars behave on the road—or, put more broadly, how machines governed by artificial intelligence make life-or-death decisions.

This is where the background in philosophy comes in handy. Heck spends his days trying to make roads safe. But the safest decisions don’t always conform to simple rules. To take a random example: Nauto’s data shows that drivers tend to exceed the posted speed limit by about 15 percent—and that it’s safer at times for drivers to go with the flow of that traffic than to follow the speed limit. “The data is unequivocal,” he says. “If you follow the letter of the law, you become a bottleneck. Lots of people pass you, and that’s extremely risky and can increase the fatality rate.”
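Heck’s point is easy to check with back-of-the-envelope arithmetic. The short sketch below uses made-up numbers, a 65 mph limit and the rough 15 percent figure cited above, to show what a strict limit-follower is up against; it is an illustration, not Nauto’s data or code.

```python
# Illustrative arithmetic only; the 65 mph limit is an invented example and the
# 15 percent figure is the article's rough number, not Nauto data or code.
posted_limit_mph = 65
prevailing_flow_mph = posted_limit_mph * 1.15  # traffic running ~15% over the limit

# A vehicle holding exactly the posted limit gets overtaken by most of the
# traffic around it; the safety-relevant quantity becomes the closing speed
# between that vehicle and the cars passing it.
closing_speed_mph = prevailing_flow_mph - posted_limit_mph
print(f"Prevailing flow: {prevailing_flow_mph:.1f} mph")
print(f"Closing speed on a strict limit-follower: {closing_speed_mph:.1f} mph")
```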

Much chatter about A.I. focuses on fears that super-smart robots will one day kill us all, or at least take all of our jobs. But the A.I. that already surrounds us must weigh multiple risks and make tough tradeoffs every time it encounters something new. That’s why academics are increasingly grappling with the ethical decisions A.I. will face. But, among the entrepreneurs shaping the future of A.I., it’s often a topic to belittle or avoid. “I’m a unique specimen in the debate,” Heck says. He shouldn’t be.

As robot brains increasingly drive decisions in industries as diverse as health care, law enforcement, and banking, whose ethics should they follow? Humans live by a system of laws and mores that guide what we should and shouldn’t do. Some are obvious: Don’t kill, don’t steal, don’t lie. But some are on-the-fly judgment calls—and some of these present no good choice.

Consider the classic philosophy riddle known as the “trolley problem.” You are the conductor of a runaway trolley car. Ahead of you is a fork in the track. You must choose between running over, say, five people on one side and one person on the other. It’s easy enough to decide to kill the fewest people possible. But: What if the five people are all wearing prison jumpsuits, while the one is wearing a graduation cap and gown? What if the single person is your child?
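One reason the riddle matters to engineers is that any autonomous system ends up with some decision rule, even if only an implicit one. The Python sketch below is a deliberately naive illustration, not any company’s planner: it simply picks the track with the fewest expected deaths, and the hard part, whether and how to weight the people involved, is exactly what it leaves out.

```python
# A deliberately naive harm-minimizing rule, for illustration only.

def choose_track(casualties_by_track):
    """Pick the track with the lowest expected casualty count.

    casualties_by_track: dict mapping a track name to the number of
    people standing on it, e.g. {"left": 5, "right": 1}.
    """
    return min(casualties_by_track, key=casualties_by_track.get)

print(choose_track({"left": 5, "right": 1}))  # -> "right"

# The controversy begins the moment anyone proposes per-person weights
# (age, occupation, relationship to the passenger...). Choosing those
# weights is a moral decision, not an engineering detail.
```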

Consider how such dilemmas play out with driverless cars, which have attracted an estimated $100 billion in investment globally and encompass giant, established companies such as Ford, GM, and Google; giant no-longer-startups like Didi Chuxing, Lyft, and Uber; and a vast ecosystem of startups like Heck’s that create everything from mapping software to cameras, ridesharing services, and data applications. Or consider those dilemmas more than some founders in this sector do. “There’s no right answer to these problems—they’re brain teasers designed to generate discussion around morality,” a founder of a company that makes autonomous-vehicle software told me. “Humans have a hard time figuring out the answers to these problems, so why would we expect that we could encode them?” Besides, this founder contends, “no one has ever been in these situations on the road. The actual rate of occurrence is vanishingly low.”

That’s a common viewpoint among industry executives, says Edmond Awad, a postdoctoral associate at MIT Media Lab who in 2016 helped create a website called the Moral Machine, which proposed millions of driverless-car problem scenarios and asked users to decide what to do. “Most of them are missing the point of the trolley problem,” he says. “The fact that it is abstract is the point: This is how we do science. If all you focus on is likely scenarios, you don’t learn anything about different scenarios.”

He poses a trolley-problem scenario to illustrate. “Say a car is driving in the right lane, and there’s a truck in the lane to the left and a bicyclist just to the right. The car might edge closer to the truck to make sure the cyclist is safer, but that would put more risk on the occupant of the car. Or it could do the opposite. Whatever decision the algorithm makes in that scenario would be implemented in millions of cars.” If the scenario arose 100,000 times in the real world and resulted in accidents, several more—or fewer—bicyclists could lose their lives as a result of the machines’ decision. That kind of tradeoff goes almost unnoticed, Awad continues, when we drive ourselves: We experience it as a one-off. But driverless cars must grapple with it at scale.
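Awad’s scale argument can be made concrete with invented numbers. Suppose hugging the truck trims the cyclist’s chance of dying in that encounter by a few hundred-thousandths while nudging the occupant’s risk up by a similar amount; no individual driver would ever notice the difference, but across 100,000 encounters it is measured in whole lives. The probabilities below are placeholders, not measured rates.

```python
# Hypothetical per-encounter fatality probabilities, chosen only to show how
# imperceptible per-incident differences add up at fleet scale.
encounters = 100_000

cyclist_risk  = {"hug_truck": 0.00004, "hold_center": 0.00009}
occupant_risk = {"hug_truck": 0.00006, "hold_center": 0.00003}

for policy in ("hug_truck", "hold_center"):
    print(f"{policy}: ~{encounters * cyclist_risk[policy]:.0f} cyclist deaths, "
          f"~{encounters * occupant_risk[policy]:.0f} occupant deaths "
          f"per {encounters:,} encounters")
```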

On top of that, today’s artificial intelligence isn’t simply a matter of precoded if-then statements. Rather, intelligent systems learn and adapt as they are fed data by humans and eventually accumulate experience in the real world. And what that means is that, over time, it’s impossible to know quite how or why a machine is making the decisions it’s making. When it comes to A.I. powered by deep learning, à la driverless cars, “there is no way to trace the ethical tradeoffs that were made in reaching a particular conclusion,” bluntly states Sheldon Fernandez, CEO of the Toronto-based startup DarwinAI.

And what data a system learns from can introduce all kinds of unexpected problems. Fernandez cites an autonomous-vehicle company that his firm has worked with: “They noticed a scenario where the color in the sky made the car edge rightward when it should have been going straight. It didn’t make sense. But then they realized that they had done a lot of training in the Nevada desert, and that they were training the car to make right turns at a time of day when the sky was that color. The computer said, ‘If I see this tint of sky, that’s my influencer to start turning this direction.’ ”
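In machine-learning terms, Fernandez’s sky-color story is a spurious correlation absorbed from a skewed training set, and one standard way to hunt for such a cue is a perturbation test: change only the suspect attribute and see whether the model’s output moves. The sketch below assumes a hypothetical steering model exposed as a predict_steering(image) callable; it is not DarwinAI’s tooling.

```python
import numpy as np

def sky_tint_sensitivity(predict_steering, image, sky_mask, tint_shift):
    """Compare predicted steering before and after recoloring only the sky.

    predict_steering: hypothetical callable, image -> steering angle
    image:            HxWx3 float array with values in [0, 1]
    sky_mask:         HxW boolean array marking sky pixels
    tint_shift:       length-3 RGB offset applied to the sky pixels

    A large returned difference suggests the model is leaning on sky tint
    as a (spurious) steering cue.
    """
    tinted = image.copy()
    tinted[sky_mask] = np.clip(tinted[sky_mask] + tint_shift, 0.0, 1.0)
    return predict_steering(tinted) - predict_steering(image)

# e.g. sky_tint_sensitivity(model, frame, mask, np.array([0.15, 0.05, -0.05]))
```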

More ethically complicated are scenarios in which, say, an algorithm used for credit underwriting begins profiling applicants on the basis of race or gender, because those factors correlate with some other variable. Douglas Merrill, a former Google CIO who’s now CEO of ZestFinance, which makes machine-learning software tools for the financial industry, recalls a client whose algorithm noticed that credit risk increased with the amount of mileage applicants had on their cars. It also noticed that residents of a particular state were higher risks.

“Both of those signals make a certain amount of sense,” Merrill says—but “when you put the two together, it turned out to be an incredibly high indicator of being African American. If the client had implemented that system, it would have been discriminating against a whole racial group.”
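The mechanism Merrill describes, two individually plausible signals combining into a proxy for a protected class, is easy to reproduce on synthetic data. The toy construction below is invented to encode the problem, not ZestFinance’s findings: group membership is somewhat more common in one state and among high-mileage drivers, and the group’s share among applicants who match both signals climbs far above its base rate, higher than either signal produces on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented population: the group is 15% of applicants, somewhat concentrated
# in one state, and somewhat likelier to put heavy mileage on its cars.
in_group     = rng.random(n) < 0.15
in_state     = np.where(in_group, rng.random(n) < 0.60, rng.random(n) < 0.20)
high_mileage = np.where(in_group, rng.random(n) < 0.70, rng.random(n) < 0.35)

def share_in_group(mask):
    # Fraction of applicants in the masked subpopulation who belong to the group.
    return in_group[mask].mean()

both = in_state & high_mileage
print(f"Base rate of group membership:    {in_group.mean():.0%}")
print(f"Among in-state applicants:        {share_in_group(in_state):.0%}")
print(f"Among high-mileage applicants:    {share_in_group(high_mileage):.0%}")
print(f"Among applicants matching both:   {share_in_group(both):.0%}")
```

A risk model that penalizes the “matches both” pocket is, in effect, penalizing the group, even though neither input mentions race.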

Merrill has made A.I. transparency ZestFinance’s calling card, but ultimately he thinks the government will have to step in. “Machine learning must be regulated. It is unreasonable—and unacceptable and unimaginable—that the people who have their hands on the things that have the hands on the rudders of our lives don’t have a legal framework in which they must operate.”

Consider one basic question: Should driverless vehicles protect their occupants above all else, even at the expense of a jaywalker? To Heck, the answer is clear: “You shouldn’t kill the interior occupant over an exterior person,” he says. “But you should be able to accept damage to the car in order to protect the life of someone outside of it. You don’t want egotistical vehicles.” That’s common sense, but it’s still engineered software deciding whose lives matter more.
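Heck’s “no egotistical vehicles” stance amounts to a priority ordering: risk to any human, inside or outside the car, outranks any amount of property damage, and the car does not rank its occupant above a pedestrian or vice versa. A minimal way to express that is a lexicographic comparison, sketched below with invented maneuver scores; it is a toy, not Nauto’s or anyone’s actual planning logic.

```python
# Invented numbers for illustration: each candidate maneuver gets a rough
# probability of harming someone outside the car, a probability of harming
# the occupant, and an expected repair cost.
candidates = {
    "hold_course":      {"p_harm_outside": 0.30, "p_harm_occupant": 0.00, "damage_usd": 0},
    "brake_hard":       {"p_harm_outside": 0.05, "p_harm_occupant": 0.01, "damage_usd": 2_000},
    "swerve_into_pole": {"p_harm_outside": 0.00, "p_harm_occupant": 0.02, "damage_usd": 15_000},
}

def plan(options):
    # Lexicographic priority: total risk to any person comes first; property
    # damage is only a tiebreaker. The car will wreck itself before trading
    # away anyone's safety, and it weighs occupant and outsider equally.
    return min(
        options,
        key=lambda m: (
            options[m]["p_harm_outside"] + options[m]["p_harm_occupant"],
            options[m]["damage_usd"],
        ),
    )

print(plan(candidates))  # -> "swerve_into_pole" with these made-up numbers
```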

That said, Heck, ever the philosopher, sees a moral imperative to have these debates—while not slowing down the march of technology. “We kill 1.2 million people globally every year in car accidents,” he says. “Any delay we put on [automotive] autonomy is killing people.” All the more reason for the industry to start thinking through these issues—now.
