Khaleej Times

Can Facebook help bring down suicide rates?

- Sara Gorman & Jack M Gorman, Psychology Today. Sara Gorman is a public health specialist, and Jack M Gorman is a psychiatrist

Alarming statistics about rising rates of suicide, including among teens and young adults, have rattled everyone in the suicide prevention field and the public at large. Amidst this unsettling trend, it was therefore of great interest to many when it became apparent that Facebook was trying its hand at suicide prevention. Facebook has actually been involved in suicide prevention for a number of years, allowing people on Facebook to flag concerning posts, which would then be reviewed by trained members of the company’s Community Operations team, who could connect the person posting with support resources. In 2018, the company started using machine learning to actively scan posts for concerning messages potentially related to the desire to die by suicide. These posts would then be sent for review by a human team that would have to decide how to respond.

Facebook has openly discussed the difficulties it ran into in honing this machine learning technique to avoid too many false positives. Over time and with many examples, the company feels it has gotten the system to a point where it sends only a subset of truly concerning posts to the human review team. The algorithm also looks at comments left on a post to see whether they indicate concern on the part of others. If the human review team identifies a truly concerning post, it will automatically show support resources to the original poster. In cases in which “imminent harm” is determined, Facebook may contact local authorities.

On a basic, intuitive level, this sounds like a great development. Especially given all the horrible news about Facebook lately, it felt nice to see that the company might be using its incredible influence to do something good. And indeed, this tactic has met with some praise. Some have pointed out that traditional methods of suicide prevention have not always worked so well, and thus we should be open to this new form of “experimentation” using advanced technology. We know, for example, that asking about suicidal thoughts, as important as that may be, is not an effective method for predicting who will actually kill themselves. In addition, the reality is that people spend a lot of time on social media and do express suicidal intent there, so it is only natural for us to try to use this medium to prevent suicide as well. Others have noted that AI-based tools for detecting people in trouble and encouraging help-seeking behaviours can be quite accurate and effective, so long as seeming cries for help are understood in proper context. Not to mention that threat detection and response through Facebook’s AI technology can be very rapid and can allow for timely responses to people who might never have asked for help in a more traditional way, like calling a suicide hotline or speaking to a friend or family member.

Nonetheless, there are some very serious concerns about Facebook’s suicide prevention AI technology. The main issues here are both scientific and ethical. From a scientific standpoint, we need to be able to rigorously evaluate whether Facebook’s intervention is effective. That is the only way we can truly understand whether it is worth deploying and ensure that it is not doing any kind of harm, as even the best of intentions can sometimes lead to unintended consequences. If the company is going to claim to have developed a technology that prevents suicide, then that technology and its effects need to be carefully studied by trained researchers. But Facebook has thus far refused to share any information about how its technology works.

From an ethical standpoint, some have argued that scanning posts and intervening in this way amounts to research on human subjects, which would ordinarily require informed consent and independent oversight. On the other hand, it is unclear how the process of obtaining informed consent would work, and it is not evident to everyone that what Facebook is doing truly qualifies as experimentation. If the Facebook approach to identifying potentially suicidal people actually saves lives, then it might be argued that tying it up with cumbersome research protection procedures would only serve to blunt its effectiveness.

Balancing these issues of trust, transparency, and intervention efficacy is certainly a delicate matter. But this debate over Facebook’s foray into suicide prevention raises larger questions about just how much we still don’t understand about suicide, suicidal behaviour, and suicide prevention. This is partially because it can be very difficult to study what is, in general, a rare event. It’s hard to tell what actually prevents suicide when most of our studies have to settle for proxies, such as hospitalisations and suicide attempts, due to the low base rate of completed suicides. In addition, it can be very challenging for clinicians to predict which patients will make serious attempts to end their lives. Because of all of this, it is imperative that we not entirely dismiss interventions such as Facebook’s; we are in dire need of new approaches and fresh ideas in this field. At the same time, we must always follow the most rigorous scientific standards, and our ethical obligations to those we are trying to help must remain paramount.
