TECHNOLOGY

Creating sensitive and astute machines comes with a plethora of ethical considerations

By Kate Wilson, The Georgia Straight

Building robots to understand human emotions is a polarizing topic. With individuals heralding the technology both as a dystopian nightmare that will render people obsolete and as vital for improving human experience, few can agree on what the future of expressive robots will look like.

Rosalind Picard, however, knows more than most. Currently a professor of media arts and sciences at the Massachusetts Institute of Technology as well as a cofounder of two tech startups, Picard wrote the book on creating robots with emotional intelligence. Credited with originating the concept of affective computing in 1997, a branch of computer science that, as she defines it to the Georgia Straight, explores how machines can be programmed to deliberately influence human emotions, she was inspired to pursue the subject after exploring the structures of the brain.

“The limbic system is a term that’s not commonly used these days,” she tells the Straight during a call from Boston. “It’s an old term that refers to some parts of the brain that are considered old also, those that are involved in memory, emotion, and attention. But it was reading about regions in those structures, today more commonly called the temporal lobe, that got me interested in emotion in the first place. The brain is magnificent. When you think about how much engineers do to build something, and how much more space and energy it takes up, it’s amazing that it’s still nowhere near as smart as the brain.”

Building on that research, she theorized that robots could be designed not to mimic a human brain but, rather, to affect how people’s brains respond to them. Interacting with electronics, Picard says, can often lead to frustration. By adding cameras to robots in order to identify movements or facial expressions, and by permitting audio recording to pick up tone of voice, machines can be taught to respond to individuals’ reactions in an empathetic way.

“Let’s say that a computer is dealing with you when your flight was just screwed up,” she says. “You’re annoyed and you’re angry. The computers of today would just simply try to fix things and let you know your flight options. That’s helpful, and we don’t want them not to do that, but they might help you even more if they said, ‘Wow, that’s really awful to have that happen to your flight. That must be really frustrating.’ As soon as a computer or a person acknowledges your feelings, you tend to be able to get past them a little bit faster. That’s the sign of emotional intelligence. It enables you to not just get the problem solved but enables you to feel better, just like if you were dealing with a person.”
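For readers curious how such a system might be wired together, the following is a minimal, hypothetical Python sketch of the sense-then-acknowledge loop described above. The emotion labels, stub classifiers, and canned responses are illustrative placeholders, not Picard’s actual system or any real library’s API.

```python
# A minimal, hypothetical sketch of the "sense, acknowledge, then solve"
# loop described in the article. The emotion labels, stub classifiers,
# and canned responses are illustrative placeholders, not Picard's
# system or any real library's API.

def classify_face(frame: bytes) -> str:
    """Stand-in for a vision model that maps a camera frame to an
    emotion label; a real system would run a trained classifier here."""
    return "frustrated"  # placeholder output

def classify_voice(audio: bytes) -> str:
    """Stand-in for an audio model that estimates emotion from tone of
    voice rather than from the words themselves."""
    return "frustrated"  # placeholder output

# Empathetic acknowledgements keyed by detected emotion (illustrative).
ACKNOWLEDGEMENTS = {
    "frustrated": "Wow, that's really awful. That must be frustrating.",
    "neutral": "",
}

def respond(frame: bytes, audio: bytes, task_reply: str) -> str:
    """Acknowledge the user's apparent feeling first, then address the
    task, mirroring the flight-rebooking example above."""
    emotion = classify_face(frame)
    if emotion == "neutral":
        emotion = classify_voice(audio)  # fall back to tone of voice
    acknowledgement = ACKNOWLEDGEMENTS.get(emotion, "")
    return (acknowledgement + " " + task_reply).strip()

if __name__ == "__main__":
    print(respond(b"", b"", "Here are your rebooking options."))
```

The only point of the sketch is the ordering: the machine names the apparent feeling before it solves the practical problem.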

Those who take a negative view of machine learning and AI argue that computers will learn to manipulate humans with their intelligence. Blockbuster films such as Ex Machina, Transcendence, and I, Robot each imagine a world where computers are more emotionally perceptive than people, while figures like the late Stephen Hawking and Elon Musk have publicly warned of the risks of making machines smarter. For Picard, that possibility is a long way off.

“Understanding emotion is so hard that people don’t really understand it yet,” she says. “We’re giving computers better ability to guess what they’re seeing, but it still doesn’t mean that they understand any of it. When they’re more accurate at processing their inputs, we’re giving them better instructions about what to do with it, which means that sometimes they do the right thing. It looks like it gets you, and that it empathized. But it doesn’t really understand us; it just simply learned that when it sees you looking sad, it would be inappropriate to look happy. It’s just learned to act in an appropriate way.”

As with any technology designed to interact with people, creating emotionally astute machines comes with a host of ethical considerations. The robots’ audio and video recordings of the people around them, for instance, could be sold for market research. Companies that create and sell empathetic robots could also partner with businesses to subtly advertise through the computer’s words and actions. In Picard’s view, it is important that organizations make a deliberate choice to develop technology whose sole purpose is to help individuals.

“I think it’s time for people to think about the framing of technology,” she continues. “Do we just want to build technology that doesn’t really care about people, that just makes money for powerful people who own it? Or do we want to make technology for people that truly makes their lives better? I think we need to get a lot more discriminating about what we say is cool. We need to stop saying that something is cool if it’s really not making life better.”
