Child protective agencies are haunted when they fail to save children. Officials in the US believe a new data analysis program is helping them make better judgment calls, writes Dan Hurley

Irish Examiner, Front Page

THE call to Pittsburgh’s hotline for child abuse and neglect came in at 3.50pm on the Wednesday just before Christmas.

Sitting in one of 12 cubicles in a former factory now occupied by the Allegheny County Police Department and the back offices of the Department of Children, Youth, and Families, the call screener, Timothy Byrne, listened as a preschool teacher described what a three-year-old child had told him.

The little girl had said that a man, a friend of her mother’s, had been in her home when he “hurt their head and was bleeding and shaking on the floor and the bathtub”. The teacher said he had seen on the news that the mother’s boyfriend had overdosed and died in the home.

According to the case records, Byrne searched the department’s computer database for the family, finding allegations dating back to 2008: parental substance abuse, inadequate hygiene, domestic violence, inadequate provision of food and physical care, medical neglect, and sexual abuse by an uncle involving one of the girl’s two older siblings.

But none of those allegations had been substantiated. And while the current claim, of a man dying of an overdose in the child’s home, was shocking, it fell short of the minimal legal requirement for sending out a caseworker to knock on the family’s door and open an investigation.

Before closing the file, Byrne had to estimate the risk to the child’s future wellbeing. Screeners like him hear far more alarming stories of children in peril nearly every day. He keyed into the computer: “Low risk.”

In the box where he had to select the likely threat to the children’s immediate safety, he chose “No safety threat”. Had the decision been left solely to Byrne — as these decisions are left to screeners and their supervisors in jurisdictions around the world — that might have been the end of it. He would have, in industry parlance, screened the call out.

That’s what happens to around half of the 14,000 or so allegations received each year in Allegheny County — reports that might involve charges of serious physical harm to the child, but can also include just about anything that a disgruntled landlord, noncustodial parent, or nagging neighbour decides to call about.

Nationally, 42% of the 4m allegations received in 2015, involving 7.2m children, were screened out, often based on sound legal reasoning but also because of judgment calls, opinions, biases, and beliefs.

And yet more US children died in 2015 as a result of abuse and neglect than died of cancer — 1,670, according to the federal Administration for Children and Families, or twice that many, according to leaders in the field.

This time, however, the decision to screen out or in was not Byrne’s alone. In August 2016, Allegheny County became the first jurisdiction in the world to let a predictive-analytics algorithm — the same kind of sophisticated pattern analysis used in credit reports, the automated buying and selling of stocks, and the hiring, firing, and fielding of baseball players on World Series-winning teams — offer up a second opinion on every incoming call, in hopes of doing a better job of identifying the families most in need of intervention.

And so Byrne’s final step in assessing the call was to click on the icon of the Allegheny Family Screening Tool.

After a few seconds, his screen displayed a vertical colour bar, running from a green 1 (lowest risk) at the bottom to a red 20 (highest risk) on top. The assessment was based on a statistical analysis of four years of prior calls, using well over 100 criteria maintained in eight databases for jails, psychiatric services, public-welfare benefits, drug and alcohol treatment centres, and more. For the three-year-old’s family, the score came back as 19 out of a possible 20.

Over the course of an 18-month investigation, officials in the county’s Office of Children, Youth and Families (CYF) offered me extraordinary access to their files and procedures, on the condition that I not identify the families involved.

Exactly what in this family’s background led the screening tool to score it in the top 5% of risk for future abuse and neglect cannot be known for certain. But a close inspection of the files revealed that the mother was attending a drug-treatment centre for addiction to opiates; that she had a history of arrest and jail on drug-possession charges; that the three fathers of the little girl and her two older siblings had significant drug or criminal histories, including allegations of violence; that one of the older siblings had a lifelong physical disability; and that the two younger children had received diagnoses of developmental or mental-health issues.

Finding all that information about the mother, her three children and their three fathers in the county’s maze of databases would have taken Byrne hours he did not have; call screeners are expected to render a decision on whether or not to open an investigation within an hour at most, and usually in half that time. Even then, he would have had no way of knowing which factors, or combinations of factors, are most predictive of future bad outcomes. The algorithm, however, searched the files and rendered its score in seconds. And so now, despite Byrne’s initial scepticism, the high score prompted him and his supervisor to screen the case in, marking it for further investigation. Within 24 hours, a CYF caseworker would have to “put eyes on” the children, meet the mother, and see what a score of 19 looks like in flesh and blood.

For decades, debates over how to protect children from abuse and neglect have centred on which remedies work best: Is it better to provide services to parents to help them cope or should kids be whisked out of the home as soon as possible? If they are removed, should they be placed with relatives or foster parents? Beginning in 2012, though, two pioneering social scientists working on opposite sides of the globe — Emily Putnam-Hornstein, of the University of Southern California, and Rhema Vaithianathan, now a professor at the Auckland University of Technology in New Zealand — began asking a different question: Which families are most at risk and in need of help?

“People like me are saying, ‘You know what, the quality of the services you provide might be just fine — it could be that you are providing them to the wrong families,’” says Vaithianathan.

Vaithianathan, now in her early 50s, emigrated from Sri Lanka to New Zealand as a child; Putnam-Hornstein, a decade younger, has lived in California for years. Both share an enthusiasm for the prospect of using public databases for the public good.

Three years ago, the two were asked to investigate how predictive analytics could improve Allegheny County’s handling of maltreatment allegations, and they eventually found themselves focused on the call-screening process.

They were brought in following a series of tragedies in which children died after their family had been screened out — the nightmare of every child-welfare agency.

One of the worst failures occurred on June 30, 2011, when firefighters were called to a blaze coming from a third-floor apartment on East Pittsburgh-McKeesport Boulevard.

When firefighters broke down the locked door, the body of 7-year-old KiDonn Pollard-Ford was found under a pile of clothes in his bedroom, where he had apparently sought shelter from the smoke.

KiDonn’s 4-year-old brother, KrisDon Williams-Pollard, was under a bed, not breathing. He was resuscitated outside, but died two days later in the hospital.

The children, it turned out, had been left alone by their mother, Kiaira Pollard, 27, when she went to work that night as an exotic dancer. She was said by neighbours to be an adoring mother; the older boy was getting good grades in school.

For CYF, the bitterest part of the tragedy was that the department had received numerous calls about the family but had screened them all out as unworthy of a full investigation.

Incompetence on the part of the screeners? No, says Vaithianathan, who spent months with Putnam-Hornstein burrowing through the county’s databases to build their algorithm, based on all 76,964 allegations of maltreatment made between April 2010 and April 2014.

“What the screeners have is a lot of data,” she told me, “but it’s quite difficult to navigate and know which factors are most important. Within a single call to CYF, you might have two children, an alleged perpetrator, you’ll have mom, you might have another adult in the household — all these people will have histories in the system that the person screening the call can go investigate. But the human brain is not that deft at harnessing and making sense of all that data.”

She and Putnam-Hornstein linked many dozens of data points — just about everything known to the county about each family before an allegation arrived — to predict how the children would fare afterwards.

What they found was startling and disturbing: 48% of the lowest-risk families were being screened in, while 27% of the highest-risk families were being screened out. Of the 18 calls to CYF between 2010 and 2014 in which a child was later killed or gravely injured as a result of parental maltreatment, eight cases, or 44%, had been screened out as not worth investigation.

According to Rachel Berger, a pediatrician who directs the child-abuse research centre at Children’s Hospital of Pittsburgh and who led research for the federal Commission to Eliminate Child Abuse and Neglect Fatalities, the problem is not one of finding a needle in a haystack but of finding the right needle in a pile of needles.

“All of these children are living in chaos,” she says. “How does CYF pick out which ones are most in danger when they all have risk factors? You can’t believe the amount of subjectivity that goes into child-protection decisions. That’s why I love predictive analytics.

“It’s finally bringing some objectivity and science to decisions that can be so unbelievably life-changing.”

The morning after the algorithm prompted CYF to investigate the family of the three-year-old who witnessed a fatal drug overdose, a caseworker named Emily Lankes knocked on their front door.

The weathered, two-storey brick building was surrounded by razed lots and boarded-up homes. Nobody answered, so Lankes drove to the child’s preschool. The little girl seemed fine. Lankes then called the mother’s mobile. The woman asked repeatedly why she was being investigated, but agreed to a visit the next afternoon.

The home, Lankes found when she returned, had little furniture and no beds, though the 20-something mother insisted she was in the process of securing those and that the children slept at relatives’ homes.

All the appliances worked. There was food in the fridge. The mother’s disposition was hyper and erratic, but she insisted she was clean of drugs and attending a treatment centre.

All three children denied having any worries about how their mother cared for them. Lankes would still need to confirm the mother’s story with her treatment centre, but for the time being, it looked as though the algorithm had struck out.

Charges of faulty forecasts have accompanied the emergence of predictive analytics into public policy. And when it comes to criminal justice, where analytics are now entrenched as a tool for judges and parole boards, even larger complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed, and closely guarded by private firms.

That is a chief objection lodged against two Florida companies: Eckerd Connects, a nonprofit, and its for-profit partner, MindShare Technology. Their predictive-analytics package, called Rapid Safety Feedback, is now being used, say the companies, by child-welfare agencies in Connecticut, Louisiana, Maine, Oklahoma, and Tennessee.

EARLY last month, the Illinois Department of Children and Family Services announced it would stop using the program, for which it had already been billed $366,000 (€304,000) — in part because Eckerd and MindShare refused to reveal details about what goes into their formula, even after the deaths of children whose cases had not been flagged as high risk.

The Allegheny Family Screening Tool developed by Vaithianathan and Putnam-Hornstein is different: It is owned by the county.

Its workings are public. Its criteria are described in academic publications and picked apart by local officials. At public meetings held in downtown Pittsburgh before the system’s adoption, lawyers, child advocates, parents, and even former foster children asked hard questions not only of the academics but also of the county administrators who invited them.

“We’re trying to do this the right way, to be transparent about it and talk to the community about these changes,” said Erin Dalton, a deputy director of the county’s department of human services and leader of its data-analysis department.

She and others involved with the Allegheny program said they have grave worries about companies selling private algorithms to public agencies. “It’s concerning,” Dalton told me, “because public welfare leaders who are trying to preserve their jobs can easily be sold a bill of goods. They don’t have a lot of sophistication to evaluate these products.”

Another criticism of such algorithms takes aim at the idea of forecasting future behaviour. Decisions on which families to investigate, the argument goes, should be based solely on the allegations made, not on predictions for what might happen in the future.

During a 2016 White House panel on foster care, Gladys Carrión, then the commissioner of New York City’s Administration for Children’s Services, expressed worries about the use of predictive analytics by child-protection agencies.

The third criticism of using predictive analytics in child welfare is the deepest and the most unsettling. Ostensibly, the algorithms are designed to avoid the faults of human judgment. But what if the data they work with are already fundamentally biased?

Studies by Brett Drake, a professor in the Brown School of Social Work at Washington University in St Louis, have attributed the disproportionate number of black families investigated by child-welfare agencies across the US not to bias, but to their higher rates of poverty.

Similarly, a 2013 study by Putnam-Hornstein and others found that black children in California were more than twice as likely as white children there to be the subject of maltreatment allegations and placed in foster care. But after adjusting for socioeconomic factors, she showed that poor black children were actually less likely than their poor white counterparts to be the subject of an abuse allegation or to end up in foster care.

Poverty, all close observers of child welfare agree, is the one nearly universal attribute of families caught up in the system. As I rode around with caseworkers on their visits and sat in on family-court hearings, I saw at least as many white parents as black — but they were all poor, living in the county’s roughest neighbourhoods.

Poorer people are more likely not only to be involved in the criminal-justice system but also to be on public assistance and to get their mental-health or addiction treatment at publicly funded clinics — all sources of the data vacuumed up by Vaithianathan’s and Putnam-Hornstein’s predictive-analytics algorithm.

Marc Cherna, who as director of Allegheny County’s Department of Human Services has overseen CYF since 1996, longer than just about any such official in the country, concedes that bias is probably unavoidable in his work.

He had an independent ethics review conducted of the predictive-analytics program before it began. It concluded not only that implementing the program was ethical, but also that not using it might be unethical.

“It is hard to conceive of an ethical argument against use of the most accurate predictive instrument,” stated the report. By adding objective risk measures into the screening process, the screening tool is seen by many officials in Allegheny County as a way to limit the effects of bias.

“We know there are racially biased decisions made,” says Walter Smith Jr, a deputy director of CYF, who is black. “There are all kinds of biases. If I’m a screener and I grew up in an alcoholic family, I might weigh a parent using alcohol more heavily.

“If I had a parent who was violent, I might care more about that. What predictive analytics provides is an opportunity to more uniformly and evenly look at all those variables.”

For two months following Emily Lankes’s visit to the home of the children who had witnessed an overdose death, she tried repeatedly to get back in touch with the mother to complete her investigation — calling, texting, making unannounced visits to the home. All her attempts were unsuccessful. She also called the treatment centre six times in hopes of confirming the mother’s sobriety, without reaching anyone.

Finally, on the morning of February 2, Lankes called a seventh time. The mother, she learned, had failed her three latest drug tests, with traces of both cocaine and opiates found in her urine. Lankes and her supervisor, Liz Reiter, then sat down with Reiter’s boss and a team of other supervisors and caseworkers.

“It is never an easy decision to remove kids from home, even when we know it is in their best interest,” Reiter told me. But, she says, “When we see that someone is using multiple substances, we need to assure the children’s safety. If we can’t get into the home, that makes us worry that things aren’t as they should be. It’s a red flag.”

The team decided to request an emergency custody authorisation from a family-court judge. By late afternoon, with authorisation in hand, they headed over to the family’s home, where a police officer met them.

The oldest child answered their knock. The mother wasn’t home, but all three children were, along with the mother’s elderly grandfather. Lankes called the mother, who answered for the first time in two months and began yelling about what she considered an unwarranted intrusion into her home.

But she gave Lankes the names of family members who could take the children for the time being. Clothing was gathered, bags packed, and winter jackets put on. Then it was time for the children to get in the car with Lankes, a virtual stranger empowered by the government to take them from their mother’s care.

At a hearing the next day, the presiding official ordered the mother to get clean before she could have her children returned.

The drug-treatment centre she had been attending advised her to enter rehab, but she refused. “We can’t get in touch with her very often,” Reiter recently told me. “It’s pretty clear she’s not in a good place. The two youngest kids are actually with their dads now. Both of them are doing really, really well.” Their older brother, 13, is living with his great-grandfather.

IN December, 16 months after the Allegheny Family Screening Tool was first used, Cherna’s team shared preliminary data with me on how the predictive-analytics program was affecting screening decisions.

So far, they had found that black and white families were being treated more consistently, based on their risk scores, than they were before the program’s introduction.

The percentage of low-risk cases being recommended for investigation had dropped — from nearly half, in the years before the program began, to around a third. That meant caseworkers were spending less time investigating well-functioning families, who in turn were not being hassled by an intrusive government agency. At the same time, high-risk calls were being screened in more often. Not by much — just a few percentage points. But in the world of child welfare, that represented progress.

To be certain that those results would stand up to scrutiny, Cherna brought in a Stanford University health-policy researcher, Jeremy Goldhaber-Fiebert, to independently assess the program.

“My preliminary analysis to date is showing that the tool appears to be having the effects it’s intended to have,” says Goldhaber-Fiebert. In particular, the kids who were screened in were more likely to be found in need of services, “so they appear to be screening in the kids who are at real risk”.

Having demonstrated in its first year of operation that more high-risk cases are now being flagged for investigation, Allegheny’s Family Screening Tool is drawing interest from child-protection agencies from across America.

Douglas County, Colorado, midway between Denver and Colorado Springs, is working with Vaithianathan and Putnam-Hornstein to implement a predictive-analytics program there, while the California Department of Social Services has commissioned them to conduct a preliminary analysis for the entire state.

Cherna and Dalton are already overseeing a retooling of Allegheny County’s algorithm. So far, they have raised the program’s accuracy at predicting bad outcomes to more than 90% from around 78%.

Moreover, the call screeners and their supervisors will now be given less discretion to override the tool’s recommendations — to screen in the lowest-risk cases and screen out the highest-risk cases, based on their professional judgment. “It’s hard to change the mindset of the screeners,” Dalton told me.

“It’s a very strong, dug-in culture. They want to focus on the immediate allegation, not the child’s future risk a year or two down the line. They call it clinical decision-making. I call it someone’s opinion. Getting them to trust that a score on a computer screen is telling them something real is a process.”
