The Charm in the Machine

COGITO'S ALGORITHMS READ NONVERBAL CUES BETTER THAN YOU CAN. WELCOME TO THE AGE OF A.I.-DRIVEN EMPATHY

Inc. (USA) - FEATURES - By Jeff Bercovici

Cogito's software, which is rapidly becoming a must-have for companies that run huge call centers, has a very simple aim: to help us all be more human.

JOSHUA FEAST has a distinct conversational tic: a nervous laugh that escapes when he's worried something he says might come across as self-serious or highfalutin. Like, for instance, when he mentions that he was the first New Zealander to be selected as a Fulbright Scholar in entrepreneurship, or tells you he once spent two weeks in a coma after contracting malaria in Indonesia, or says he was among the highest-scoring secondary-school students on a national math exam.

Feast grew up in the suburbs of New Zealand's capital, Wellington, where it's considered unwise to act like a big hairy deal. Kiwis enjoy nothing more than cutting a "tall poppy" down to size. So Feast, a slender, bespectacled 40-year-old, is studiously careful at all times to deflect credit and minimize his importance. "I'm not very comfortable talking about myself sometimes," he says, with an appealing humility that stops just the right distance short of false modesty.

You can't help analyzing someone's personal affect after talking to Feast, because that's his business. Feast is co-founder and CEO of Cogito Corporation, a Boston-based software startup that uses artificial intelligence to measure and improve the quality of certain key conversations, such as sales and customer-service calls, in real time. Or, as Feast puts it, Cogito—which is Latin for "I think"—"helps people be more charming in conversation."

"Charm" might sound like a hard thing to quantify, and it is. But Cogito's other co-founder, MIT professor of media arts and sciences Alex "Sandy" Pentland, has spent the past 20 years doing just that. Through experiments in his Human Dynamics Lab, Pentland has shown how unconscious, nonverbal "honest signals"—he wrote a book with that title in 2008—including tempo, emphasis, and mimicry influence the outcome of interactions like salary negotiations, team meetings, and romantic courtships. "Language is culturally something that's relatively new to humans, but before language we were already social beings," says Pentland, a polymath and futurist whose expertise extends to evolutionary psychology, artificial intelligence, and data science. "We're all sort of brainwashed to think it's about the words." Thanks to Pentland's work, we now know the mechanisms of human connection can, in fact, be boiled down to a set of mathematical equations. Those equations are what Cogito is built on.
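To give a rough sense of what such equations look like: each signal reduces to a simple statistic computed from the audio. The Python sketch below is purely illustrative—the feature definitions are assumptions for this article, not Pentland's or Cogito's actual formulas—but it shows how tempo, emphasis, and mimicry can each become a number.

```python
# Illustrative only: hypothetical definitions of three "honest signals."
# These are not Pentland's or Cogito's actual formulas.
import numpy as np

def tempo(syllable_onsets: np.ndarray) -> float:
    """Speaking rate: syllables per second across an utterance."""
    duration = syllable_onsets[-1] - syllable_onsets[0]
    return len(syllable_onsets) / duration if duration > 0 else 0.0

def emphasis(energy: np.ndarray) -> float:
    """Variation in vocal energy: flat delivery scores low, lively delivery high."""
    return float(np.std(energy) / (np.mean(energy) + 1e-9))

def mimicry(speaker_a: np.ndarray, speaker_b: np.ndarray) -> float:
    """Correlation between two speakers' prosodic contours; it rises as
    they unconsciously converge on each other's style."""
    return float(np.corrcoef(speaker_a, speaker_b)[0, 1])

# Two energy contours drifting into sync, as between strangers growing comfortable
t = np.linspace(0, 6, 200)
a = np.abs(np.sin(t)) + np.random.normal(0, 0.05, 200) + 1
b = np.abs(np.sin(t + 0.2)) + np.random.normal(0, 0.05, 200) + 1
print(f"mimicry: {mimicry(a, b):.2f}  emphasis(a): {emphasis(a):.2f}")
```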

Cogito occupies two floors of a decidedly unstylish office building in downtown Boston. Amid overflowing cubicles and faded yellow walls, I watched over the shoulder of product marketing manager Eli Orkin as he handled a Cogito-assisted simulated customer-service call from one of his colleagues, Channah Rubin. As Rubin attempted to tell a story about the problems she was having obtaining a new credit card, Orkin repeatedly cut her off, until a discreet slide-in notification appeared on his computer screen: "Frequent overlaps." Orkin overcorrected, letting Rubin ramble on, until he got a second prompt reading "Slow to respond." A long, somewhat condescending monologue earned him a "Speaking a lot" warning.

During the call, a color-coded meter in the corner of Orkin's screen offered a running gauge of how well it was going, shading to yellow and orange when he responded too abruptly or slowly and back to green when normal give-and-take was restored. "Conversations are like a dance," Feast explains. "You can be in sync or out of sync." Had Rubin become truly upset, Orkin would have seen an "Empathy" prompt, a cue to say something reassuring. But just acting upset wouldn't do it. No one at Cogito can fool the software, which analyzes hundreds of signals to tell if distress is real.
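To make the mechanics concrete, here is a minimal sketch of how prompts like the ones Orkin saw might be triggered from running turn-taking statistics. The thresholds and field names are invented for illustration; as the article notes, the real software draws on hundreds of signals.

```python
# Hypothetical trigger logic for real-time conversational cues.
# Thresholds are invented for illustration; Cogito's actual rules differ.
from dataclasses import dataclass, field

@dataclass
class TurnStats:
    overlaps: int = 0                  # times the agent talked over the caller
    response_gaps: list[float] = field(default_factory=list)  # silence (s) before replies
    talk_streak: float = 0.0           # seconds of uninterrupted agent speech

def cues(stats: TurnStats) -> list[str]:
    alerts = []
    if stats.overlaps >= 3:
        alerts.append("Frequent overlaps")
    if any(gap > 3.0 for gap in stats.response_gaps):
        alerts.append("Slow to respond")
    if stats.talk_streak > 30.0:
        alerts.append("Speaking a lot")
    return alerts

print(cues(TurnStats(overlaps=4, response_gaps=[0.8, 3.5], talk_streak=12.0)))
# -> ['Frequent overlaps', 'Slow to respond']
```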

Cogito, which has 75 employees, counts among its clients three of the five largest U.S. health insurance firms, two of the five largest disability insurers, and some of the biggest credit card companies. Cogito also has a mental health care product, an app called Companion, that's used by nurses and social workers in private and Veterans Health Administration hospitals to flag patients showing signs of PTSD and depression. "Our dream would be to take advantage of this for a lot of important conversations," says Feast. "It could be negotiations, it could be meetings, it could be improving dating experiences." Anything, he says, where there's a need to "be more emotionally intelligent in real time."

A quintessentially Homo sapiens trait is empathy—the ability to intuit and be moved by others' emotions—and it can be crucial for persuading, consoling, or seducing. Yet the data is clear: We humans aren't the empathy prodigies we think we are. In the modern workplace, with its high attention demands, packed schedules, and long hours, we're pretty bad at it.

In cognitive psychology, assessing others' thoughts and feelings through nonverbal cues is called person perception. Some people do this easily. Others find it impossible. Most of us muddle along somewhere in the middle.

And when it comes to assessing our own person-perception skills, most of us are rank amateurs. "If I were to ask a whole group of people, 'On a scale of one to five, how good do you think you are at recognizing social signals in others?' nearly all would rate themselves a four or a five," says Feast. He's sympathetic to the deluded. "One of the big problems we have is we don't get much feedback on it in our daily lives. If you think I'm being kind of rude right now, you're probably not going to tell me." In the absence of feedback, it's hard to improve—and easy to think you don't need to.

That's where Cogito's promise lies. If it lays bare our limitations, it's only by showing how, for the first time, we have the tools to transcend them. In Feast's view, the pessimists who see A.I. making humans obsolete—those who worry over research showing A.I. may put millions out of work over the next several years—are overlooking how much smarter, more productive, and more creative we can be with its help. How much more human it can make us, to put it bluntly. "In some ways, we think of ourselves as a cyborg company," Feast says, "helping humans be their best selves."

IN NEW ZEALAND, there's a saying: "You can make anything from No. 8 wire"—sheep farmers use it for fences. "No. 8 wire mentality" is Kiwi shorthand for a can-do, scrappy spirit. Feast's family embodied No. 8. His grandfather founded a large construction business, and his father and uncle ran a number of enterprises in real estate and development. Feast too felt the pull of building, albeit in a digital realm. He earned an undergraduate degree in computer engineering, and one of his first jobs was working as a tech consultant for Accenture. Among its clients was New Zealand's Department of Child, Youth, and Family, which faced two huge challenges. First, even the best social workers often lasted only three to five years before burning out, a phenomenon known as compassion fatigue. Second, below-average social workers had trouble improving, because their work was hard to quantify.

Feast was still ruminating on those lessons when, in 2005, he won that Fulbright Scholarship in entrepreneurship, which allowed him to study at MIT. One of Feast's courses there was a seminar consisting of guest lectures curated by Pentland. For the past few years, the shaggy-haired, hyperenergetic Pentland had been focusing his investigations on those unconsciously transmitted honest signals. Pentland and his team were interested in how two strangers adopt each other's physical postures and intonations as they grow comfortable together, and how a speaker using a steady pattern of emphasis leaves an audience with the sense that he or she is well informed.

Among Pentland's plaudits is this: He's considered a pioneer of wearable computing. (One of his grad students created Google Glass.) To conduct their inquiries, Pentland's team built a wearable device they called a "sociometer," a shoulder-mounted pack, roughly the size of an iPhone, whose sensors gathered data about speech and movements during interactions. In experiments, they demonstrated that those honest signals (the term, borrowed from evolutionary biology, refers to behaviors that are hard to fake) could be used to accurately predict the outcome of salary negotiations, group decisions, and speed dating.

As Feast learned about his professor's research, he realized that software built using Pentland's findings could help social workers see if they were building trust with their clients, or determine which psychiatric patients need emergency counseling. "I thought the potential was unbelievable if we could take it to market and make it a thing people could use," he says.

Pentland has helped found more than 20 startups based on his research, and he loved the idea. He'd grown up close with his grandmother, who ran Michigan's first inpatient institution for children with psychiatric problems. "I actually learned to read in a mental institution," he says. He signed on and helped recruit one of his former PhD students, Ali Azarbayejani, to be CTO.

But all they had were the theoretical underpinnings of a business—nothing close to a prototype, much less a product. It was what venture capitalists call a science project. "You've got all this academic research that can't move forward because the chasm between experimental results and a fundable technology is too wide, too risky," says Feast. VCs hate throwing money down a hole without knowing how deep it is. Yet Feast and Pentland were in luck: One institution didn't mind.

The Defense Advanced Research Projects Agency—Darpa—is an arm of the U.S. Defense Department that incubates emerging technologies. "It's like a venture capital firm without a profit motive," says Russell Shilling, a former Darpa program manager who now develops educational tech at a nonprofit called Digital Promise. "You're trying to think about what might be possible in 10 to 20 years and then create a program to build it in three to four." In 2010, Shilling was tasked with scouting out technologies that could be used to flag soldiers returning from Afghanistan and Iraq who might be showing early signs of PTSD. Feast's idea was a perfect fit, and Pentland's reputation helped their application secure an easy approval.

The technical challenge they had embarked on was indeed daunting, requiring models for turning speech, with all its nuances and inflections, into neatly labeled data that can be fed into machine-learning algorithms, which would then try to extract behavioral patterns from it. Looking for hints of emotional states in raw audio is an order of magnitude more difficult than speech recognition: A word has a beginning and an end. Clues to its meaning can be derived from the words around it. But signs of a speaker's depression might be scattered throughout a long conversation.
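As a sketch of what that means in practice, the snippet below pools frame-level acoustic statistics over a whole recording into one fixed-length vector a classifier can learn from. It uses the open-source librosa library; the choice of features and the whole-call pooling are illustrative guesses at the approach the passage describes, not Cogito's pipeline.

```python
# Illustrative pipeline: raw audio -> one labeled feature vector per call.
# Feature choices and pooling are assumptions, not Cogito's actual models.
import numpy as np
import librosa

def call_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)                  # mono audio, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # spectral shape per frame
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)         # pitch contour
    rms = librosa.feature.rms(y=y)[0]                     # loudness per frame
    # Unlike speech recognition, there is no word-by-word target: cues to a
    # speaker's state are scattered across the call, so we pool statistics
    # over the entire conversation into a single vector.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [f0.mean(), f0.std(), rms.mean(), rms.std()],
    ])

# X = np.stack([call_features(p) for p in paths]); train any classifier on (X, labels)
```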

Cogito hit all of the mileposts in its Darpa proposal, and in 2012, it had Feast's dream product. Called Companion, it's an app nurses, psychologists, and social workers can use to monitor the psychological states of their patients, who record and upload audio diaries. It's been a useful tool. "This has enormous promise in changing the way we do mental health care as well as medical care," says David Ahern, director of behavioral informatics and e-health at Boston's Brigham and Women's Hospital. Ahern has been overseeing a three-year study involving more than 200 patients who suffer from physical and psychiatric disorders. Research has shown that medical patients who develop emotional health problems cost more to treat and respond less well to treatment; as a result, says Ahern, "it behooves the medical system to do a better job in detection and treatment of behavioral health because it drives outcomes and drives the costs." While his study's results aren't yet in, feedback from patients and clinicians has been positive. Some practitioners believe Companion has helped avert suicides.

But Feast quickly came to see it was a product with limited revenue potential. In the convoluted world of health care, insurers often pay more readily for a treatment, even an expensive and ineffective one, than for a preventive service. "Making a business purely around making people more well isn't always a good proposition," Feast says, delicately. There were much bigger markets to crack.

COMPANIES HAVE their own version of the person-perception problem. A big brand might have tens of thousands of employees handling customers' calls and complaints. Each of those contacts is an opportunity for the customer to form an impression, positive or negative. Yet each is handled by a customer-relations representative who—like all of us—probably overrates his or her skill at basic things like listening, demonstrating empathy, and establishing trust. Geeta Wilson, vice president of consumer experience at the health insurer Humana, says she lives in fear, as do many executives, of this "discrepancy between how we think of ourselves internally and how our customers think of us." She adds, "You don't want to be in a place where you're rating yourself better than your customer is."

But unlike us overconfident individuals, companies such as Humana put a lot of money and effort into closing that gap. They ask callers to stay on the line and take automated surveys. They send email questionnaires and conduct focus groups to calculate metrics like CSAT (customer satisfaction), NPS (Net Promoter Score—the willingness to recommend a brand to others), and VOC (voice of the customer, derived from surveys and focus groups). Calls to a contact center are recorded, and random samples of each employee's interactions are reviewed for quality.
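Of these, NPS has the most clear-cut arithmetic: on a 0-to-10 "how likely are you to recommend us?" scale, it is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (0 through 6). A minimal reference calculation in Python:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 10, 9, 8, 7, 7, 6, 3, 2]))  # 4 promoters, 3 detractors -> 10.0
```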

These methods have shortcomings. Surveys and reviews are slow, and small sample sizes skew findings. "Every human assessment, it's always going to have some margin of error," says Wilson. "Even if you're really good at that as a company, you'll always be off."

In 2016, Humana tried something new. Wilson, who had heard about Cogito from Christopher Kay, Humana's chief innovation officer, offered 200 of her call-center associates as guinea pigs in a test of a new product. In a six-month pilot study, these associates took all of their calls using Cogito's real-time conversation-analysis tool. The results were hard to ignore: Customers whose calls were handled by those using the Cogito app reported a 28 percent higher NPS. Issue resolution improved by 6 percent, while average call time and escalations—when callers demand to speak with managers—both went down. Humana is now in the process of rolling out Cogito to thousands more of its customer-relations associates, and it's running a second pilot, this one of Cogito's application tailored for use in sales.

Call-center agents have a lot in common with social workers: They don't last, either. The average call center's turnover runs from 30 to 45 percent annually. Agents also have compassion fatigue, but theirs builds up over hours, not years. Thanks to online self-service tools and chatbots, most easy queries never make it to a call center. The questions that do come in are often difficult and fraught with emotion.

What customers want, above all, is to feel that they're in good hands, says Douglas Kim, Cogito's chief revenue and customer-success officer. The sense that someone knows what he or she is talking about or cares about what you have to say is exactly the sort of thing conveyed primarily through nonverbal signals, Pentland's research has shown. But maintaining the behaviors that send those signals, such as answering questions without hesitation, gets increasingly hard as cognitive fatigue creeps in over the course of a shift; to excel, workers need something stronger than a cup of coffee. "I use an analogy," says Kim. "It's like when you drive a newer car and you have

BODY LANGUAGE Cogito co-founders Sandy Pentland (left) and Joshua Feast. The two met when Feast took a Pentland-curated seminar at MIT. Pentland's research on nonverbal social cues gave Feast the inspiration for the company.
