Wanted: ‘Perfect babysitter.’ Must pass AI scan

The Charlotte Observer (Sunday) – BY DREW HARWELL, Washington Post

When Jessie Battaglia started looking for a new babysitter for her 1-year-old son, she wanted more information than she could get from a criminal-background check, parent comments and a face-to-face interview.

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of a 24-year-old candidate, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment – a 2 out of 5 – for bullying, harassment, being “disrespectful” and having a “bad attitude.”

The system didn’t explain why it had made that decision. But Battaglia, who had believed the sitter was trustworthy, suddenly felt pangs of doubt.

“Social media shows a person’s character,” said Battaglia, 29, who lives near Los Angeles. “So why did she come in at a 2 and not a 1?”

Predictim is offering parents the same playbook that dozens of other tech firms are selling to employers around the world: artificial-intelligence systems that analyze a person’s speech, facial expressions and online history with promises of revealing the hidden aspects of their private lives.

The technology is reshaping how some companies approach recruiting, hiring and reviewing workers, offering employers an unrivaled look at job candidates through a new wave of invasive psychological assessment and surveillance.

The tech firm Fama says it uses AI to police workers’ social media for “toxic behavior” and alert their bosses. And the recruitment-technology firm HireVue, which works with companies such as Geico, Hilton and Unilever, offers a system that automatically analyzes applicants’ tone, word choice and facial movements during video interviews to predict their skill and demeanor on the job. (Candidates are encouraged to smile for best results.)

But critics say Predictim and similar systems present their own dangers by making automated and possibly life-altering decisions virtually unchecked.

The systems depend on black-box algorithms that give little detail about how they reduced the complexities of a person’s inner life into a calculation of virtue or harm. And even as Predictim’s technology influences parents’ thinking, it remains entirely unproven, largely unexplained and vulnerable to quiet biases over how an appropriate babysitter should share, look and speak.

There’s this “mad rush to seize the power of AI to make all kinds of decisions without ensuring it’s accountable to human beings,” said Jeff Chester, the executive director of the Center for Digital Democracy, a tech advocacy group. “It’s like people have drunk the digital Kool-Aid and think this is an appropriate way to govern our lives.”

Predictim’s scans analyze the entire history of a babysitter’s social media, which, for many of the youngest sitters, can cover most of their lives. And sitters are told they will be at a great disadvantage for the competitive jobs if they refuse.

Predictim’s chief and co-founder, Sal Parsa, said the company, launched last month as part of the University of California at Berkeley’s SkyDeck tech incubator, takes ethical questions about its use of the technology seriously. Parents, he said, should see the ratings as a companion that “may or may not reflect the sitter’s actual attributes.”
