Ajung Moon, director of the Open Roboethics Institute, considers the importance of instilling morals into artificial intelligence.

An industry growing around robot ethics is dedicated to dissecting the tricky issues raised by artificial intelligence

BY KATE WILSON

Just over two years ago, Microsoft released a chatbot on Twitter named Tay. Created to mimic the speech and spelling of a 19-year-old American girl, the program was designed to interact with other Twitter users and get smarter as it discovered more about the world through their posts—a process called machine learning. Rather than becoming an after-school chum for bored teens, though, Tay was soon tweeting everything from “I’m smoking kush in front of the police” to “I fucking hate feminists and they should all die and burn in hell.” She was shut down 16 hours after her launch.

Tay’s rants—which featured racist slurs and Holocaust denials—tapped into people’s biggest anxieties about the future of artificial intelligence (AI). With no moral compass to guide them, the fear goes, machines will be unable to follow the same social rules as humans.

In response, an industry is growing around robot ethics.

UBC graduate Ajung Moon, director of the Open Roboethics Institute, has dedicated her career to dissecting the tricky issues thrown up by artificial intelligence. A mechatronics engineer by trade, Moon became interested in the topic when a mentor at her university mentioned how South Korea was developing autonomous weapons to guard the demilitarized zone. Realizing that there were few discussions around what kind of robots companies should be creating, she delved into the morality of machines in her graduate studies.

“I’m a woman in her 30s with a technology background, born in Korea and raised in Canada,” she tells the Georgia Straight on the line from her office in Seattle. “I have my own set of biases. Those should not be assumed to be reflective of everyone’s values—and yet, if I create robots that take on those standards, I have the power to replicate my views over and over. Artificial-intelligence systems act as a proxy for one person’s ideas, and a single set of opinions can become the rule. It’s incredibly important for us to be thoughtful about the decisions we make when we program machines, and that’s where ethics comes into play.”

Moon focuses her work not just on chatbots or physical robots but on any system that uses machine learning—the ability to get better at a task through experience rather than direct programming—to power its artificial intelligence. AI is already ubiquitous. Google Maps, for instance, uses machine learning to predict how long a journey will take based on the information it interprets from others’ phones in real time and makes its own decisions about the best route to take. With artificial intelligence now underpinning everything from road safety to résumé-reading, Moon believes that companies must interrogate the morality behind their programming.
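
As a rough illustration of what “getting better through experience” means in practice, the sketch below keeps a running estimate of travel time for a route and refines it with every trip it observes. The route name, the trip times, and the TravelTimeEstimator class are invented for this example; it is a bare-bones stand-in for the far richer models a service like Google Maps actually uses.

```python
# Minimal sketch of learning from experience: a travel-time estimator that
# improves as it observes more trips, rather than following a hand-written rule.
# Route names and trip times are invented for illustration only.
from collections import defaultdict

class TravelTimeEstimator:
    def __init__(self):
        self.totals = defaultdict(float)   # sum of observed minutes per route
        self.counts = defaultdict(int)     # number of observed trips per route

    def observe(self, route, minutes):
        """Record one completed trip (the 'experience')."""
        self.totals[route] += minutes
        self.counts[route] += 1

    def predict(self, route, default=30.0):
        """Estimate travel time; fall back to a default for unseen routes."""
        if self.counts[route] == 0:
            return default
        return self.totals[route] / self.counts[route]

estimator = TravelTimeEstimator()
for observed_minutes in (22, 27, 35):          # hypothetical trips on one route
    estimator.observe("Main St -> Broadway", observed_minutes)
print(estimator.predict("Main St -> Broadway"))  # 28.0, refined with each new trip
```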

“There’s many different ways to implement ethics into artificial intelligence,” she says. “I recently worked with Technical Safety B.C., an organization that oversees the safe installation of equipment. They wanted to take the huge amount of data that they gather and use it to make decisions about where hazards are most likely to arise. They could then send over a safety officer to do something about it before there was a danger.

“One of their employees pointed out that the machine-learning system could throw up a false finding and make a wrong prediction about a hazard,” she continues. “If they were to send out a safety officer to that site, they would be wasting their time and the company’s money, but if they chose not to send a safety officer, a huge accident could happen in their absence. We conducted an AI ethics assessment, which was the first one that we know of in the world. That gave them an ethics road map to break down why they make the decisions that they do. It’s available for everyone to see, so they are able to justify their choices.”
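
The trade-off that employee raised can be written down as a simple expected-cost calculation: weigh the cost of a wasted visit against the cost of a missed hazard, scaled by how confident the model is. The sketch below does that with hypothetical dollar figures and a made-up should_dispatch function; it illustrates the reasoning, not Technical Safety B.C.'s actual assessment.

```python
# Hypothetical expected-cost framing of the dispatch decision described above.
# The probabilities and dollar figures are invented for illustration only.
COST_WASTED_VISIT = 500        # officer's time if the prediction was a false alarm
COST_MISSED_HAZARD = 50_000    # rough cost of an accident the visit could have prevented

def should_dispatch(hazard_probability: float) -> bool:
    """Send an officer when the expected cost of staying home exceeds
    the expected cost of a possibly unnecessary visit."""
    expected_cost_if_sent = (1 - hazard_probability) * COST_WASTED_VISIT
    expected_cost_if_not_sent = hazard_probability * COST_MISSED_HAZARD
    return expected_cost_if_not_sent > expected_cost_if_sent

for p in (0.001, 0.01, 0.2):
    print(f"predicted hazard probability {p:.3f}: dispatch = {should_dispatch(p)}")
```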

A number of B.C. companies use AI technology but don’t let the public know how their machines make decisions. In Moon’s view, those systems can be ethically problematic. High-profile organizations like the Vancouver Police Department, for instance, use machine learning to predict where and when certain crimes are more likely to happen—but by failing to disclose how those choices are being made, they risk being accused of prejudice or profiling.

“Based on data the Vancouver police has gathered in the past, its AI system can make accurate guesses about property crimes,” Moon says. “They can then preemptively send officers to those locations. That’s when the idea of bias comes into play. The type of neighbourhood is often associated with the type of people who live there, whether that be in terms of race or socioeconomic status. If more officers are in those areas, there will likely be more arrests for crimes that might not have otherwise been seen. If you want to be fair in keeping everybody safe, what should fairness mean in this particular context? If you don’t have a definition that’s loud and clear for everyone to see, you’re going to run into trouble in the future.”
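
The feedback loop Moon describes can be seen in a toy simulation: give two neighbourhoods the same underlying rate of offences, but patrol wherever the recorded count is highest, and the records drift apart anyway because patrolled areas are watched more closely. Every number below is invented; this sketches the bias mechanism in general, not any real police system.

```python
# Toy simulation of the feedback loop: identical true offence rates, but patrols
# follow recorded counts, so one neighbourhood's record snowballs.
# All numbers are invented for illustration.
import random

random.seed(1)
TRUE_RATE = 0.1                          # same chance of an offence in both areas
recorded = {"Area A": 5, "Area B": 4}    # slightly uneven historical records

for week in range(52):
    # Patrol the area with the larger record; offences there are more likely to be seen.
    patrolled = max(recorded, key=recorded.get)
    for area in recorded:
        offence_occurred = random.random() < TRUE_RATE
        detection_rate = 0.9 if area == patrolled else 0.3
        if offence_occurred and random.random() < detection_rate:
            recorded[area] += 1

# Area A's recorded count typically pulls well ahead, even though the
# underlying behaviour in both neighbourhoods was the same.
print(recorded)
```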

Despite the potential failings of artificial intelligence, Moon is optimistic about its future. At the upcoming B.C. Tech Summit, she plans to discuss how robots and humans need to work together to make decisions, with the machines offering suggestions and people making the final call.

“There should be a healthy amount of concern in terms of what we should be doing about workers who will be displaced or the amount of large-scale disruption that these technologies will bring,” she says. “But I think we do need to point out the positives of the technology. Not only can AI do tasks more efficiently than us, there are also areas where we have huge shortages of employees. In the care sector, for instance, B.C. has big problems hiring people to support the elderly, and that can be supplemented by robotic systems. There are definitely use-cases that can change the world for the better.”

Ajung Moon speaks at the B.C. Tech Summit at the Vancouver Convention Centre West on Tuesday (May 15).
