The Human Factor

To improve patient safety, hospitals urged to adjust for how staff use new technology

Modern Healthcare | NEWS | By Sabriya Rice

When clinical staff at a MedStar Health hospital near Washington misunderstood a confusing pop-up box on a digital blood-sugar reader in 2011, they mistakenly gave insulin to a patient with low blood sugar, which caused her to go into a diabetic coma. Hospital staff had earlier made a seemingly minor customization to the glucometer, which led to the error.

In 2013, a patient admitted to Northwest Community Hospital in Arlington Heights, Ill., did not receive his previously prescribed psychiatric medicine for nearly three weeks during a hospital stay because the pharmacy’s computer system was programmed to automatically discontinue orders for certain types of drugs after a predetermined time. There was no alert programmed into the system to let the patient’s care team know the drug order had been suspended.

Experts say these types of adverse events and near misses are common, and they often happen when new technology is introduced without adequate analysis of how staff will interact with new devices. But reporting of such events is sporadic, and there are few measures in place to help healthcare providers learn from others’ mistakes. And it’s not always the technology that is problematic, safety leaders say, but how thoroughly new tools are tested, understood by users and integrated into the care-delivery process.

“We have a cascade of gadgets and equipment that’s just raining down on the healthcare system,” said Rosemary Gibson, a senior adviser to the Hastings Center, a healthcare ethics research group. Productivity demands are forcing physicians, nurses and other clinical staff to work faster, and when that directive is coupled with new devices and equipment, “even the most competent people in the world can’t do that safely,” she said.

Recent studies have found that rapid implementation of new medical technology—electronic health records, patient monitoring devices, surgical robots and other tools—can lead to adverse patient events when it is not thoughtfully integrated into workflow. The right processes require understanding the devices and the users. Testing in controlled environments often does not adequately consider the “human factor,” or how people interact with technology in high-pressure, real-life situations.

From 2011 to 2013, human-factor issues were the most frequently identified root causes of “never events” such as medication errors and treatment delays, according to a Joint Commission report. “It’s the interface of the human with the technology that creates a problem,” said Dr. Ana Pujols-McKee, the commission’s chief medical officer.

Responding to these growing concerns, as well as their own alarming experiences, some hospitals and health systems, such as MedStar, have established human-factors research teams. These teams investigate what could go wrong in the deployment of new technologies and recommend ways to minimize their threat to patient safety. Human-factors engineers scrutinize new devices from a human and technical perspective, often testing them in simulation scenarios as close to reality as possible.

Complex systems hide root causes

A growing number of studies point to the need for better surveillance of patient-safety events associated with technology integration. In June, researchers at the Veterans Health Administration Center for Innovations in Quality, Effectiveness and Safety in Houston reported that complicated and confusing electronic health records pose a serious threat to patient safety. The more complex a system, the more difficult it is to trace the root cause of a mistake. They said the problem is not just technological complexity, but how people use the system. Often, such events happen under the radar, and when they are reported, they are typically attributed to user or programming error.

A Food and Drug Administration report on device recalls this year said radiology devices such as linear accelerators and CT scanners were recalled most frequently. But for the most part, “the problems have not been with the technology in itself, but rather with clinical use of the technology,” according to the report. Software issues, system compatibility, user interfaces and clinical-decision support accounted for more than two-thirds of radiology device recalls.

Some experts recommend mandatory training for newly introduced devices and technologies, while others call for more transparency to allow hospitals to quickly share usability issues and solutions.

“The problem is not always the tool,” said Dr. David Chang of the University of California San Diego. Chang co-authored a recent article in JAMA Surgery that found a brief but significant increase in prostatectomy errors associated with the initial rapid expansion of surgical robot use. “The people using it, that’s the part many are not paying attention to,” he said. A national surveillance system would help physicians learn from one another’s experiences, he said.

MedStar Health, a 10-hospital not-for-profit system, launched its National Center for Human Factors in Healthcare in 2010 to address safety issues associated with new technology deployment. The 2011 glucometer incident was among the first events it investigated. The center works with MedStar hospitals, as well as medical-device and health-information technology developers, to discover problems and determine what changes in the healthcare environment or the products will produce safe and effective outcomes. Any clinical staffers who might touch a particular piece of equipment could find themselves in the center’s simulation lab, including surgeons, anesthesiologists, nurses, paramedics and other medical technicians.

Over the past year, the MedStar team has evaluated dozens of devices, including health IT software, infusion pumps, patient beds and wound-treatment devices. About half of the projects were researched for manufacturers, while the other half examined new or existing devices the health system flagged as posing potential hazards.

At the center’s two simulation labs, mannequins with automated voices serve as patients and are outfitted with sensors that send cues to staff monitors indicating the success or failure of a process. The sensors beep when there are sudden changes in the patient’s blood pressure or heart rate. Clinical staff who participate in the lab simulations wear a headpiece that tracks their eye movements, which helps human-factors engineers analyze where safety issues are cropping up on the devices being tested.

In one simulation last week, staff at MedStar’s center demonstrated how an error could easily occur with a cardiac defibrillator used by the system’s hospitals. The mechanical patient called for a nurse, played by paramedic Cheryl Camacho, who summoned the attending physician, played by another paramedic, Les Becker. He decided the patient’s heart was in distress and ordered a synchronized shock to be delivered at a low level using a defibrillator, a process that helps re-establish normal heart rhythms in a patient with an arrhythmia or in cardiac arrest.

The nurse pushed a button to put the device into synchronized shock mode so the energy would hit the patient’s chest at a less-vulnerable moment for the heart. Another button was pushed to issue the jolt. The patient did not improve, so Becker immediately ordered a more powerful shock. Less than a minute later, the second jolt was issued, but between the first and second defibrillation, the machine defaulted back to a non-synchronized shock mode, which could have made a real patient’s heart stop beating.

“We know that even well-trained doctors who know how to use it right will naturally make that error,” the center’s director, Dr. Terry Fairbanks, said following the simulation. “We can’t depend on doctors remembering. We need to design the device so that it signals to the doctor that it has changed modes.”

Fairbanks and his colleagues rely on MedStar’s frontline providers to discover problems like this and report them to the center for human-factors testing. But sometimes clinical staff are anxious about reporting problems because they blame themselves. “If you don’t work on opening up the culture, they might keep it quiet,” he said. “Then you don’t learn about where opportunities are to design out the mistake.”

In investigating the 2011 glucometer incident, Fairbanks and his human-factors team reconstructed the following chain of events: A nurse technician had taken a blood-sugar reading for the patient, who had been admitted through the emergency department with an initial diagnosis of low glucose. The technician was surprised to see a message on the digital device that read: “critical value, repeat lab draw for >600.” That seemed to indicate the patient’s blood sugar had soared to a dangerously high level. The technician showed the pop-up message to a nurse, who agreed with the technician’s reading. They repeatedly checked the patient’s blood sugar using the device and kept getting the same apparently high blood-sugar result.

“It never came to mind that the glucometer was incorrect,” the nurse said in a video MedStar officials posted on YouTube in March. The video was released to raise awareness of MedStar’s “no-blame culture,” which staff said helped them uncover the root cause of the adverse event.

What Fairbanks found was that the blood-sugar reading on the device was not technically incorrect. The problem was that the pop-up warning visually blocked the device’s true reading, which indicated that the patient’s blood sugar was critically low. Hospital staff had customized the device to launch a pop-up alert when a patient’s blood sugar reached a critical point; because extremely low blood-sugar levels are relatively rare, the alert warned only about critically high levels.

It’s not uncommon for medical-device and health-IT users to make minor customizations to ensure that clinical terms, concepts and displays conform to the expectations and practices of that particular hospital’s staff.

But as MedStar learned from the glucometer incident, such customization can be tricky. Insulin was administered to the patient, causing her already low blood sugar to drop even further. She slipped into a diabetic coma and was taken to the intensive-care unit, Fairbanks said. She recovered, and the hospital issued an apology. The nurse, who was initially suspended for allowing the patient to receive insulin, returned to her job. All MedStar hospitals now use an updated model of the glucometer that does not include a pop-up message.

Simulation labs improve performance

There are no available data on how many hospitals and systems employ a similar investigative approach to patient-safety risks associated with new technologies and how they are deployed in clinical settings. But the Society for Simulation in Healthcare, which supports using simulation to improve performance and reduce errors, has identified 165 simulation centers in the U.S. Many, however, focus on training clinical staff on new procedures and devices rather than working out human-interaction problems with new technology.

Still, more hospitals are assembling multidisciplinary teams to evaluate significant technological changes such as EHR implementation. But some problems don’t get flagged until after they cause patient-safety risks.

That’s what happened with the patient at Northwest Community Hospital who did not receive his prescribed psychiatric medicine, clozapine, for nearly three weeks in 2013. Hospital officials found that the computerized prescription system automatically discontinued the drug order after seven days because it had a default “automatic stop” value for certain high-risk drugs. There was no programmed cue to alert the medical team to either resubmit the order or cancel the patient’s prescription medication. The hospital had been using that computerized system since 2009.

Following a review of 15 drugs with such stop values, the hospital has removed most of them, keeping only a few based on the manufacturers’ recommendations, said its pharmacy director, Jason Alonzo. High-risk drugs for which the stop values were removed are now reviewed daily by hospital pharmacists.

“What we’ve all learned is that the technology will do exactly what we tell it to do,” said Kimberly Nagy, chief nursing officer at Northwest Community. Nagy said it is difficult to tell whether other patients had been affected; the 2013 incident was the first to bring the issue to the hospital’s attention.

Mary Logan, president of the Association for the Advancement of Medical Instrumentation, which develops standards for medical-device manufacturers, said hospitals should standardize the way they purchase new technologies and get key users involved before making the buying decision.

“This is where a lot of organizations make mistakes,” she said. “The team that does the technology assessment should not be driven by the one person who wants the shiny object.”

That means having a wide range of clinical staff on hospital value-analysis committees. Those committees, she said, should first ask two key questions: What problem are we trying to solve? And is a particular technology going to solve it?

While Logan’s group focuses on devices such as ventilators, infusion pumps, monitors and pacemakers, the same principles apply to any new technology, she said.

If a tool requires customization, the staffers programming it should understand that even small changes or upgrades could have unintended consequences and produce patient-safety risks.

Even if a new technology unquestionably offers improved quality of care, the Joint Commission’s Pujols-McKee cautions that there should be heightened awareness about how to safely implement it in the complex healthcare setting. “Oftentimes, the thought is, if we have the technology we’re safer,” she said. “But that is incorrect.”

Staff at MedStar Health’s National Center for Human Factors in Healthcare use a simulation lab to demonstrate how a patient-safety event could easily occur with a cardiac defibrillator. The center looks for potential usability and human-factors problems with new and current technologies.
