The Boston Globe

Study looks at health care app use

Questions user engagement for digital therapeutics

By Mario Aguilar

As companies selling health care apps struggle to prove to a skeptical system that they really deliver results, we’re about to start hearing a lot more about “engagement.”

A new paper scrutinizing six clinical trials supporting four mental health apps cleared by the Food and Drug Administration argues there’s an urgent need to close the “gap between intention and real-world efficacy for digital therapeutics” — specifically, the dearth of data on how much people actually use digital treatments.

The authors dove into published studies, excavated what numbers they could, and ultimately concluded that some apps may be of fleeting interest to users. They argue that to see long-term success of software-based treatments, sometimes called digital therapeutics, the industry must confront this reality.

P. Murali Doraiswamy, a professor of psychiatry at Duke University School of Medicine and lead author of the study, said he’s raising the issue not to bring down the industry, but to highlight an opportunity to fix the evidence gap.

“Because if we don’t fix it, then we are fooling ourselves,” he said.

The treatments covered in the study are marketed by two leading companies based in Boston: Akili Interactive and Pear Therapeutics. Akili received FDA clearance in 2020 for its video game-based treatment for childhood attention deficit hyperactivity disorder. Pear has three cleared apps for the treatment of substance use disorder, opioid use disorder, and insomnia. Pear went public at a $1.6 billion valuation last year, and Akili earlier this year announced plans to go public. Both companies are aggressively commercializing their products, which must be prescribed by doctors, in the hopes of eventually reaching millions of people.

So far, health insurers and other payers have been hesitant to cover digital treatments over some of the very concerns Doraiswamy and his colleagues are raising. He explained that whereas it’s relatively easy to tell whether or not patients in a drug trial have taken their pills, engagement is more elusive than the compliance numbers reported in studies of apps might suggest.

“Is it people opening the app and, you know, muddling through it?” he asked. “Is it people who are really, really engaging with the app? If the app has 20 modules, are people going through all 20 modules?”

To calculate engagement, the authors searched for ways to quantify how much of a prescribed treatment participants completed. For example, in the pivotal trial supporting the clearance of Akili’s EndeavorRx, participants were asked to complete 100 missions in the game, and the average number of missions completed was 83, thus 83 percent engagement. It’s a flawed number, Doraiswamy conceded.
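The arithmetic behind that figure is simple: completed activity divided by prescribed activity. Here is a minimal sketch in Python (ours, not the study’s; the function name and inputs are hypothetical):

```python
def engagement_percent(avg_completed: float, prescribed: float) -> float:
    """Engagement as the share of the prescribed treatment that was completed."""
    return 100 * avg_completed / prescribed

# EndeavorRx pivotal trial, per the paper: 100 missions prescribed,
# an average of 83 completed, hence "83 percent engagement."
print(engagement_percent(83, 100))  # 83.0
```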

“True engagement would actually be how engaged were they with the games, did they really improve in their scores over time,” he said. “We don’t have access to that information.”

Though Akili’s gamified approach appeared to create the highest initial engagement of the products reviewed, the published data showed waning participant interest over time. Other trials examined in the study showed engagement as low as 58 percent. The authors also looked at attrition, or how many of the people enrolled in the experimental arm of a study ultimately drop out, which ranged from 6 percent to as high as roughly 60 percent. High levels of attrition can indicate that an intervention isn’t engaging or valuable to participants and can also introduce bias into a trial’s final results.
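Attrition is the complementary head count. A minimal sketch, again with hypothetical names, assuming dropouts are simply enrollees who did not finish:

```python
def attrition_percent(enrolled: int, finished: int) -> float:
    """Share of the experimental arm that dropped out before the trial ended."""
    return 100 * (enrolled - finished) / enrolled

# The trials reviewed ranged from about 6 percent to roughly 60 percent attrition.
print(attrition_percent(100, 94))  # 6.0
print(attrition_percent(100, 40))  # 60.0
```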

All of these products have been found safe and effective by the FDA, and their trials reported significant benefits compared with controls, which raises the important question of what the bar for engagement should look like, both for regulators and for the health care system more broadly. It appears that participants receive clinical benefit from products even if they don’t consume all the content, just as someone on medication might get benefits even if they miss a dose here and there. The issue is that these kinds of products are so new that experts don’t know what good engagement really means, let alone how to measure it.

And because the field is nascent, there isn’t yet a clear consensus on what counts as compelling research for digital products. Many companies, Pear among them, have continued to collect data from users to establish that products have meaningful and lasting effects.

“These therapeutic products, like others, require a continuum of evidence that includes gold-standard clinical trials, followed by real-world clinical outcomes, and real-world health economic outcomes,” Pear’s chief medical officer, Yuri Maricich, wrote in an e-mail. He said what’s important is that study endpoints for digital treatments be compared with endpoints for drugs tackling similar diseases. For example, “A study of a [prescription digital therapeutic] for depression should be similar to a depression drug study,” he said.

Doraiswamy actually agrees, which is why he is advocating for more standardized reporting, similar to drug trials: “My concern is if we don’t know the true attrition rates, then people will think these apps are amazing,” he said, suggesting that improper reporting could conceal that only a small fraction of people in a study receive benefits from treatment.

The Digital Therapeutics Alliance, an industry advocacy group, is currently working with stakeholders to develop guidelines that can be used to evaluate products, precisely because “it is becoming increasingly clear that there isn’t an exact bar set for the specific types and quality of clinical evidence that qualifies a DTx evidence package as ‘sufficient,’” chief policy officer Megan Coder wrote in an e-mail.

Real-world data can also help offset concerns that clinical trial sponsors can encourage an unrealistic level of compliance in their studies. Doraiswamy’s paper points out that in some cases, trials for products intended to be used at home were conducted in clinics, and patients were encouraged to stick to their treatments through supervision and, in some cases, financial incentives.

In fact, the steps that have been shown to improve engagement in clinical studies may be important tools to consider as app makers move from short-term studies of interventions for acute conditions to products aimed at helping people manage mental health conditions over long periods. If an occasional human touch can keep people engaged, or compliance-checking tools can stop patients from dropping off, those measures might have a place as part of the interventions.

“As the first generation of apps, it’s fine to get it out there to learn from what worked and what didn’t work, but certainly I think we shouldn’t rest on this,” Doraiswamy said, adding: “It’s not enough to just get to a certain percentage of patients to improve. We want them to stay well and for them to stay well, they have to continue to want to use these [apps] over time.”
