The Phnom Penh Post

FB using AI to try to prevent suicide

- Hayley Tsukayama

FACEBOOK is using artificial intelligence to address one of its darkest challenges: stopping suicide broadcasts.

The company said on Monday that a tool that lets machines sift through posts or videos and flag when someone may be ready to commit suicide is now available to most of its 2 billion users (availability had been limited to certain users in the United States). The aim of the artificial intelligence programme is to find and review alarming posts sooner, since time is a key factor in preventing suicide.

Facebook said that it will use pattern recognition to scan all posts and comments for certain phrases to identify whether someone needs help. Its reviewers may call first responders. It will also apply artificial intelligence to prioritise user reports of a potential suicide. The company said phrases such as “Are you OK?” or “Can I help?” can be signals that a report needs to be addressed quickly.
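Facebook has not published how its system works; as a rough illustration only, the two ideas described above (flagging concern phrases in comments, then ordering reports by how many flags they attract) could be sketched like this, with all names and phrase lists being hypothetical:

```python
# Hypothetical sketch of phrase flagging and report prioritisation.
# Facebook's real system is a trained classifier, not a phrase list;
# the two example phrases come from the company's own description.
CONCERN_PHRASES = ["are you ok", "can i help"]

def flag_comment(text: str) -> bool:
    """Return True if the comment contains a concern phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CONCERN_PHRASES)

def prioritise(reports):
    """Order reports so those with more flagged comments are reviewed first."""
    return sorted(
        reports,
        key=lambda r: sum(flag_comment(c) for c in r["comments"]),
        reverse=True,
    )

reports = [
    {"id": 1, "comments": ["nice photo"]},
    {"id": 2, "comments": ["Are you OK?", "Can I help?"]},
]
print([r["id"] for r in prioritise(reports)])  # → [2, 1]
```

The point of the ordering step is the one the article makes: the scarce resource is human reviewers, so the algorithm's job is to decide which reports they see first.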

In the case of live video, users can report the video and contact a helpline to seek aid for their friend. Facebook will also provide broadcasters with the option to contact a helpline or another friend.

Users are also given information on contacting law enforcement if necessary.

“We’ve found these accelerated reports – that we have signaled require immediate attention – are escalated to local authorities twice as quickly as other reports,” Guy Rosen, Facebook vice president of product management, wrote in a company blog post.

Facebook has been testing this programme in the United States and will roll it out to most of the countries in which it operates, with the exception of those in the European Union. The company did not elaborate on why EU countries – whose privacy and other Internet laws differ vastly from those of the United States – are not yet participating. But Facebook said it is speaking with authorities about the best ways to implement such a feature.

The social network focused new energy on identifying and stopping potential suicides after Facebook experienced a cluster of live-streamed suicides in April, including one in which a father killed his baby daughter before taking his own life. The company said in May that it would add 3,000 workers to its 4,500-employee “community operations” team, which reviews posts flagged as violent or otherwise troubling.

Facebook Chief Executive Mark Zuckerberg said at that time that the company would use artificial intelligence to help identify problem posts across its network, but he acknowledged that this was a very difficult problem to address. “No matter how many people we have on the team, we’ll never be able to look at everything,” he said in May.

The artificial intelligence feature underscores Facebook’s reliance on algorithms to monitor and police its network. In this case, the algorithm determines not only which posts should be reviewed but also in what order humans should review them.

Facebook has been using artificial intelligence across its site to accomplish various tasks. It scans posts for instances of child pornography and other objectionable content that should be removed. It also teaches machines to read human facial expressions. (It denied reports from an Australian researcher in May that it was scanning photos and targeting users with advertisements based on their emotions.) The company did not say whether it would use something similar to the AI suicide prevention tool for other situations that raise concerns for the network.

COURTESY OF FACEBOOK
