The Welland Tribune

How AI is helping predict, prevent deaths by suicide

Facebook using AI to flag troubling content that could be reviewed for intervention

- Authors: Sidney Kennedy, professor of psychiatry and Arthur Sommer Rotenberg chair in Suicide and Depression Studies, University of Toronto; and Trehani M. Fonseka, a Research Associate in Psychiatry at the University Health Network and Project Lead for t

Death by suicide is a growing public health concern. In Canada, suicide claims 4,000 lives each year — that is 10 lives per day.

For every one of those deaths, there are five people hospitalized following self-injury, 25 to 30 attempts and seven to 10 people affected by each tragedy, according to analysis by the Public Health Agency of Canada.

Suicide rates are highest among certain groups — such as Indigenous peoples, immigrants and refugees, prisoners and the LGBT community — and are on the rise.

The impacts of suicide are felt widely. The Toronto Transit Commission (TTC) recently reported an increase in transit suicides at the end of 2017, with eight attempts in December alone, and a corresponding rise in stress leave among TTC employees, due to the toll this took on staff.

Could artificial intelligence (AI), or intelligence demonstrated by machines, possibly help to prevent these deaths?

As researchers in psychiatry, in the Canadian Biomarker Integration Network for Depression, we are collecting clinical and biological data during treatment interventions for people with major depression. We are exploring early clues to changes in behaviour and mood states using mobile health technologies.

One of our goals is to identify early predictors of relapse and of increased risk of suicidal behaviour.

Here we review other promising applications of AI to suicide prevention, and draw attention to the barriers within this field.

Early in 2018, the Public Health Agency of Canada announced a pilot project with Advanced Symbolics, an Ottawa-based AI company that successfully predicted Brexit, Donald Trump’s presidency and results of the 2015 Canadian election.

The project will research and predict regional suicide rates by examining patterns in Canadian social media posts, including suicide-related content, although user identity will not be collected. The program will not isolate high-risk cases or intervene at the individual level. Instead, findings will be used to inform mental health resource planning.

In 2011, Facebook developed a manual suicide reporting system where users could upload screenshots of suicide content for review.

In 2015, the system allowed users to “flag” concerning content, which would prompt Facebook staff to review the post and respond with supportive resources.

Due to the tool’s success, Facebook has begun expanding its AI capabilities to automatically detect suicide-related content and alert local emergency responders. There are also more language options, and an extension to Instagram.

AI has been used in health care since the 1990s to improve disease detection and various indices of wellness. Within mental health, AI has enhanced the speed and accuracy of diagnosis, and applied “decision trees” to guide treatment selection.

A new approach to “therapy” involves conversational bots (or chatbots), which are computer programs designed to simulate human-like conversation using voice or text responses.

Chatbots can deliver psychological interventions for depression and anxiety based on cognitive behavioural therapy (CBT). Since chatbots respond to the dialogue presented to them, they can tailor interventions to a patient’s emotional state and clinical needs. These models are considered quite user-friendly, and the chatbots’ user-adapted responses have been well reviewed.

Similar technology is being added to smartphones to allow voice assistants, like the iPhone’s Siri, to recognize and respond to user mental health concerns with appropriate information and supportive resources. However, this technology is not considered reliable and is still in its preliminary stages. Other smartphone applications even use games to improve mental health-care education.

AI technology has also been integrated into suicide management to improve patient care in other areas. AI assessment tools have been shown to predict short-term suicide risk and make treatment recommendations that are as good as those of clinicians. The tools are also well regarded by patients.

Current evaluation and management of suicide risk is still highly subjective. To improve outcomes, more objective AI strategies are needed. Promising applications include suicide risk prediction and clinical management.

Suicide is influenced by a variety of psychosocial, biological, environmental, economic and cultural factors. AI can be used to explore the association between these factors and suicide outcomes.

AI can also model the combined effect of multiple factors on suicide, and use these models to predict individual risk. As an example, researchers from Vanderbilt University recently designed an AI model that predicted suicide risk, using electronic health records, with 84 to 92 per cent accuracy within one week of a suicide event and 80 to 86 per cent within two years.

As the field of suicide prevention using artificial intelligence advances, there are several potential barriers to be addressed:

1. Privacy: Protective legislation will need to expand to include risks associated with AI, specifically the collection, storage, transfer and use of confidential health information.

2. Accuracy: AI accuracy in correctly determining suicide intent will need to be confirmed, specifically with regard to system biases or errors, before labelling a person as high or low risk.

3. Safety: It is essential to ensure AI programs can appropriately respond to suicidal users, so as to not worsen their emotional state or accidentally facilitate suicide planning.

4. Responsibility: Response protocols are needed on how to handle high-risk cases that are flagged by AI technology, and what to do if AI risk assessments differ from clinical opinion.

5. Lack of understanding: There is a knowledge gap among key users on how AI technology fits into suicide prevention. More education on the topic is needed to address this.

Photo: Signs and backpacks are scattered across the University of Tennessee at Chattanooga, as part of the Active Minds Send Silence Packing Tour meant to bring awareness to mental health issues. Some backpacks included belongings of victims and letters with...
ERIN O. SMITH, THE ASSOCIATED PRESS
