San Francisco Chronicle

Bot knows when you’re sad, but whom does it tell?

- By Caroline Chen. Caroline Chen is a Bloomberg writer. Email: cchen509@bloomberg.net

A Facebook message pops up on my phone screen. “What’s going on in your world?”

It’s from a robot named Woebot, the brainchild of Stanford University psychologist Alison Darcy.

Woebot seems to care about me. The app asks me for a list of my strengths, and remembers my response so it can encourage me later. It helps me set a goal for the week — being more productive at work. It asks me about my moods and my energy levels and makes charts of them.

“I’ll help you recognize patterns because ... (no offense) humans aren’t great at that,” Woebot tells me with a smirking smile emoji.

So Woebot knows that I felt anxious on Wednesday and happy on Thursday. But who else might know? Unlike a pedometer, which tracks something as impersonal as footsteps, many mental-health apps in development rely on gathering and analyzing information about a user’s intimate feelings and social life.

“Mental health data is some of the most intimate data there can be,” said Adam Tanner, a fellow at Harvard University’s Institute for Quantitative Social Science.

Chatbots have existed since the 1960s — one was named after “Pygmalion” heroine Eliza Doolittle — but advances such as machine learning have made the robots savvier. Woebot is one of an emerging group of technological interventions that work to detect and treat mental-health disorders.

They’re not for everyone. Some people may prefer unburdening themselves to a human, and many apps are hindered by bugs and dogged by privacy concerns. Still, the new technologies may fill gaps in current treatment options by detecting symptoms earlier and acting as coaches for individuals who might otherwise never seek counseling.

Clinicians and privacy experts are welcoming these inventions with one hand while holding up warning signs with the other. Technology might be a powerful tool to improve treatment, but an emotional problem, if it becomes known, can affect insurance coverage, ruin chances of landing a job or color an employer’s perception. With possible changes coming to health care law, it’s unclear if pre-existing mental-health conditions could once again be used to charge people more for insurance or deny them coverage.

Privacy concerns aside, the promise of collecting data is the ability to render a holistic picture of a person’s mental state that’s more accurate than infrequent assessments conducted in a doctor’s office.

“Our approach is to ask, how can we measure in an unobtrusive and passive way?” said Tom Insel, former director of the National Institute of Mental Health.

Insel teamed up in May with Paul Dagum, a former cybersecurity expert, to create a startup that mines the information on consumers’ phones to create “digital biomarkers” to try to predict depression, anxiety and schizophrenia.

Called Mindstrong, the company tracks users’ every tap, swipe and keystroke, then keeps an eye out for patterns such as reaction speeds. It looks at locations and frequency of texts and calls. It also tracks word use. Without reading people’s emails, Mindstrong can look at “word histograms” that show how frequently certain words are used.

When people become depressed, “there’s a shift in pronouns, instead of saying ‘we, you, they,’ it turns into ‘I, I, I,’ ” Insel said.
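The "word histogram" approach Insel and Dagum describe can be sketched in a few lines of code. The word lists and ratio below are purely illustrative, not Mindstrong's actual method: the point is that word frequencies can be tallied without retaining the underlying message text.

```python
from collections import Counter
import re

# Illustrative word lists — not Mindstrong's actual vocabulary.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
COLLECTIVE = {"we", "us", "our", "you", "they", "them"}

def word_histogram(text: str) -> Counter:
    """Tally word frequencies; the raw text itself need not be stored."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def first_person_ratio(hist: Counter) -> float:
    """Share of tracked pronouns that are first-person singular."""
    fp = sum(hist[w] for w in FIRST_PERSON)
    col = sum(hist[w] for w in COLLECTIVE)
    total = fp + col
    return fp / total if total else 0.0

hist = word_histogram("I think I should stay home. I am tired.")
print(round(first_person_ratio(hist), 2))  # prints 1.0 — all tracked pronouns are "I"
```

A real system would track such ratios over weeks and flag sustained shifts rather than judge any single message.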

Early evidence shows Mindstrong may be onto something. Dagum said they’ve found strong correlations between phone behavior and traditional cognitive measures. Mindstrong is running a 100-person study with Stanford and plans to publish its results soon.

Mindstrong also has partnered with an insurance company that will run a pilot program for 600 members with serious disorders. For the insurer, which Mindstrong declined to name, early detection of a psychotic episode or a relapse in depression could help it guide the member to treatment earlier, avoiding costly hospital stays.

Woebot, too, has data that suggest a benefit. In a study of 70 people ages 18 to 28, scores measuring depression were significantly decreased in the group that chatted with Woebot compared with those who read a National Institute of Mental Health ebook.

Despite occasional miscues, it’s hard to be annoyed with the cheery Woebot, whose personality Darcy said she modeled after Kermit the Frog. After two weeks of chatting, the robot has heard more about my daily moods than any of my friends.

Should I be concerned about how much these apps know about me?

Mindstrong said it protects customers by not using behavioral data to sell products.

Darcy promises Woebot won’t sell customer information and the company’s employees view only anonymized responses. But the app works on Facebook Messenger, and Darcy concedes that she can’t vouch for how Facebook will use the data.

Facebook says it collects information including when users “message or communicate with others” in order to “provide, improve and develop services.” Spokeswoman Jennifer Hakes said Messenger abides by Facebook’s data policy, but “we do not read the content of messages between people or people and businesses.” Facebook also doesn’t target any type of advertising based on the content of Messenger conversations, she said.

Investors don’t seem inhibited by privacy concerns. Mindstrong has raised $14 million in a series A funding round.

Woebot’s funds have come from friends and family, but Darcy said she’ll soon seek outside investors.
