Business Standard

How companies scour our digital lives for clues to our health

- NATASHA SINGER

Your digital footprint — how often you post on social media, how quickly you scroll through your contacts, how frequently you check your phone late at night — could hold clues to your physical and mental health.

That at least is the theory behind an emerging field, digital phenotyping, that is trying to assess people’s wellbeing based on their interactions with digital devices. Researchers and technology companies are tracking users’ social media posts, calls, scrolls and clicks in search of behavior changes that could correlate with disease symptoms. Some of these services are opt-in. At least one is not. People typically touch their phones 2,617 times per day, according to one study — leaving a particularly enticing trail of data to mine.

“Our interactions with the digital world could actually unlock secrets of disease,” said Dr Sachin H Jain, chief executive of CareMore Health, a health system, who has helped study Twitter posts for signs of sleep problems. Similar approaches, he said, might someday help gauge whether patients’ medicines are working. “It could help with understanding the effectiveness of treatments,” he said.

The field is so new and so little studied, however, that even proponents warn that some digital phenotyping may be no better at detecting health problems than a crystal ball.

If a sociable person suddenly stopped texting friends, for instance, it might indicate that he or she had become depressed, said Dr Steve Steinhubl, director of digital medicine at the Scripps Translational Science Institute in San Diego. Or “it could mean that somebody’s just going on a camping trip and has changed their normal behavior,” he said. “It’s this whole new potential for snake oil,” Dr Steinhubl said.

That is not stopping the rush into the field — by start-ups and giants like Facebook — despite questions about efficacy and data privacy.

One of the most ambitious efforts is being conducted by Facebook. The company recently announced that it was using artificial intelligence to scan posts and live video streams on its social network for signs of possible suicidal thoughts. If the system detects certain language patterns — such as friends posting comments like “Can I help?” or “Are you OK?” — it may assign a certain algorithmic score to the post and alert a Facebook review team.

In some cases, Facebook sends users a supportive notice with suggestions like “Call a helpline.” In urgent cases, Facebook has worked with local authorities to dispatch help to the user’s location. The company said that, over a month, its response team had worked with emergency workers more than 100 times.

Some health researchers applauded Facebook’s effort, which wades into the complex and fraught realm of mental health, as well intentioned. But they also raised concerns. For one thing, Facebook has not published a study of the system’s accuracy and potential risks, such as inadvertently increasing user distress.

“It’s a great idea and a huge unmet need,” Dr Steinhubl said. Even so, he added, Facebook is “certainly right up to that line of practicing medicine not only without a license, but maybe without proof that what they are doing provides more benefit than harm.”

For another thing, Facebook is scanning user posts in the US and some other countries for signs of possible suicidal thoughts without giving users a choice of opting out of the scans. “Once you are characterised as suicidal, is that forever associated with your name?” said Frank Pasquale, a law professor at the University of Maryland who studies emerging health technologies. “Who has access to that information?”

Will Nevius, a Facebook spokesman, said Facebook deleted the algorithmic scores associated with posts after 30 days. The cases involving emergency responders are kept in a separate system that is not tied to users’ profiles, he said.

