Ottawa Citizen

Social media posts could reveal those at risk

Algorithms would power system to pore over millions of social media posts

- JOANNE LAUCIUS

Molecular biologist Zachary Kaminsky attracted attention in 2014 when he and his colleagues at Johns Hopkins University discovered a biomarker that suggested doctors might be able to identify suicide risk — and even prevent suicide — with a blood or saliva test.

Now a researcher at the Royal Ottawa Mental Health Centre, Kaminsky is excited about something that doesn’t even need a test: artificial intelligence that can parse millions of social media posts to find words or images that flag thoughts of suicide.

Adolescents often disclose suicide risk factors on social media that they don’t tell their doctors, Kaminsky told a “knowledge exchange” on suicide prevention at Algonquin College on Friday.

Artificial intelligence can take the pulse of millions of people by simply turning data into a mathematical score, he said.

“People tell you how they feel every day at every moment,” Kaminsky told an audience of about 150, including representatives from Ottawa post-secondary institutions.

“People don’t realize that they are putting out these signals,” said Kaminsky, the DIFD Mach-Gaensslen chair in suicide prevention research at The Royal’s institute of mental health research. “The beauty of AI is that it’s all training data. We don’t know who anyone is. We’re not reading stuff. Everything is just converted to numbers. It’s very clinical and non-invasive.”

The subject of suicide and young people has attracted attention following the deaths of five University of Ottawa students over 10 months.

In opening the exchange, Algonquin president Claude Brulé said a 2019 survey of students at the college found that 11 per cent reported considering suicide and 2.5 per cent reported attempting suicide in the previous 12 months.

“Today’s exchange has the potential to change the way we think,” Brulé said.

It’s well known that people put their most candid thoughts on social media. But using that information to prevent suicides is nearly impossible without the time to read thousands of posts, Kaminsky said.

Enter artificial intelligence, which can train itself to recognize words and patterns of words and be automated to respond. The algorithm could send a suicidal person information about counselling, for example. Twitter already uses your data to generate product ads that may interest you, so this is not much different, Kaminsky said.
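In broad strokes, that kind of flag-and-respond pipeline might look like the sketch below. It is purely illustrative: the risk lexicon, the toy posts and the send_resources helper are hypothetical stand-ins, not Kaminsky’s actual system.

```python
# Minimal sketch, assuming a simple seed-word lexicon and a stream of
# (user, post) pairs. All names and data here are hypothetical.

RISK_LEXICON = {"burden", "loneliness", "stress", "depression",
                "insomnia", "anxiety", "hopelessness"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any seed risk word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & RISK_LEXICON)

def send_resources(user_id: str) -> None:
    # Hypothetical automated response, e.g. a direct message
    # pointing to counselling information.
    print(f"Sending counselling resources to {user_id}")

posts = [  # toy data
    ("user_a", "Feeling like such a burden lately, can't sleep"),
    ("user_b", "Great game last night!"),
]

for user_id, text in posts:
    if flag_post(text):
        send_resources(user_id)
```

A real system would replace the bare keyword match with a trained model, but the shape — score the post, then trigger an automated response — stays the same.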

An AI system could also be used to find “hot spots” for suicidal thinking, opening opportunities to deploy prevention campaigns in schools or neighbourhoods.

While 10 per cent of the general population has thought about suicide, only 0.5 per cent act on those thoughts. AI can help distinguish people who merely think about suicide from those who are likely to act, Kaminsky said.

Predictions for individuals will never be perfect, and there will be false negatives, Kaminsky said. “You can’t always prevent suicide, but it is valuable to know who is at risk and who is not at risk.”

Kaminsky’s initial research, which lasted two years, scanned Twitter accounts from English speakers all over the world, except the United Kingdom. The tests did not look for words like “die” or “suicide,” but rather for words like “burden, loneliness, stress, depression, insomnia, anxiety” and “hopelessness.”

Those words were chosen because researchers already knew they were related to feelings experienced by suicidal people. People who think about suicide often think of themselves as a burden to others, for example.

Then AI takes over. Machine learning means AI can reach beyond those initial words to identify patterns and networks of other associated words.
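The article does not spell out how that expansion works. One common approach, offered here only as an assumption and not as Kaminsky’s method, is to look up each seed word’s nearest neighbours in a pretrained word-embedding space; the glove-wiki-gigaword-50 vectors below are an arbitrary public choice.

```python
# Hedged sketch: expanding a seed lexicon via embedding neighbours.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

seeds = ["burden", "loneliness", "stress", "depression",
         "insomnia", "anxiety", "hopelessness"]

expanded = set(seeds)
for seed in seeds:
    # Take the five words whose vectors sit closest to the seed word.
    for word, similarity in vectors.most_similar(seed, topn=5):
        expanded.add(word)

print(sorted(expanded))
```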

The word “love” is commonly used among those who are thinking of suicide, for example. People who are distressed about the breakdown of romantic relationships tend to use “love” in association with swear words and pronouns such as I, she and he.

The system converts each Twitter feed into multiple numerical scores. In the end, it can plot a year’s worth of scores instead of a year’s worth of tweets and reveal patterns of thought.
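As a rough illustration of that idea, the sketch below reduces each tweet to a single score (the fraction of its words drawn from a hypothetical risk lexicon) and resamples a toy feed into a weekly time series. The real system assigns multiple scores per feed; this simplification is the author’s assumption, not the study’s scoring rule.

```python
# Hedged sketch: turning a year of tweets into a time series of scores.
import pandas as pd

RISK_LEXICON = {"burden", "loneliness", "stress", "depression",
                "insomnia", "anxiety", "hopelessness"}

def risk_score(text: str) -> float:
    """Fraction of words in the tweet that belong to the risk lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in RISK_LEXICON for w in words) / len(words)

# Toy feed: (timestamp, tweet text) pairs for a single account.
feed = pd.DataFrame(
    [("2019-01-03", "so much stress and no sleep"),
     ("2019-01-20", "great weekend with friends"),
     ("2019-02-11", "feeling like a burden again")],
    columns=["date", "text"],
)
feed["date"] = pd.to_datetime(feed["date"])
feed["score"] = feed["text"].map(risk_score)

# Resample to a weekly mean so a year of tweets becomes a year of scores.
weekly = feed.set_index("date")["score"].resample("W").mean()
print(weekly.dropna())
```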

Kaminsky noted some people tend to get better after they tweet about suicide, while others tend to get worse. The larger the response from the social network, the more likely people are to get better.

“The more we’re trained to help our friends, the more power we have,” Kaminsky said. “Research can’t be the magic answer. We still have to interact.”

Kaminsky has tested the algorithm only on Twitter, but the technology could also be applied to social media based on images instead of words. One example is TikTok, a platform for short-form videos popular among teens and university-age users.

He believes an algorithm to identify suicide risk is only a few years away from use.

“I’m excited about the next couple of years. We will go from ‘We built it’ to ‘This is what we are going to do with it,’ ” Kaminsky said.

“In science, you can be on the frontier, or you can be in the application. In psychiatry, there’s a big need for application. I think there’s room for tools that do something different. It’s exciting when you find something that you think is true, and you can build something that hasn’t been built before.”

jlaucius@postmedia.com

ERROL MCGIHON Molecular biologist Zachary Kaminsky is a suicide-prevention researcher at The Royal’s institute of mental health research. He spoke at Algonquin College on Friday about developing AI-based tools to predict and get help for people at risk of suicide.
