Sunday Times (Sri Lanka)

Tech firms race to spot video violence

By Jeremy Wagstaff

SINGAPORE, April 28 (Reuters) - Companies from Singapore to Finland are racing to improve artificial intelligence so software can automatically spot and block videos of grisly murders and mayhem before they go viral on social media.

None, so far, claim to have cracked the problem completely.

A Thai man broadcast himself killing his 11-month-old daughter in a live video on Facebook this week, the latest in a string of violent crimes shown live on the social network. The incidents have prompted questions about how Facebook's reporting system works and how violent content can be flagged faster.

A dozen or more companies are wrestling with the problem, those in the industry say. Google -- which faces similar problems with its YouTube service -- and Facebook are working on their own solutions.

Most are focusing on deep learning: a type of artificial intelligence that makes use of computerised neural networks. It is an approach that David Lissmyr, founder of Paris-based image and video analysis company Sightengine, says goes back to efforts in the 1950s to mimic the way neurons work and interact in the brain.

Teaching computers to learn with deep layers of artificial neurons has really only taken off in the past few years, said Matt Zeiler, founder and CEO of New York-based Clarifai, another video analysis company.

It's only been relatively recently that there has been enough computing power and data available for teaching these systems, enabling "exponential leaps in the accuracy and efficacy of machine learning", Zeiler said.

Feeding images

The teaching system begins with images fed through the computer's neural layers, which then “learn” to identify a street sign, say, or a violent scene in a video.
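As a rough illustration of the training loop Lissmyr and Zeiler describe, here is a minimal sketch in PyTorch: a network pre-trained on generic images is fine-tuned to separate two classes of frames. It is a toy, not any of these companies' actual pipelines, and the data/train/{violent,benign} folder layout is a hypothetical stand-in for a vendor's labelled footage.

```python
# A toy version of the supervised training loop described above, assuming
# PyTorch and torchvision are installed. The folder layout
# data/train/{violent,benign}/ is a hypothetical stand-in for labelled footage.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pre-trained on generic images, then fine-tune it
# on the two-class task (violent vs benign).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong was the network?
        loss.backward()                        # propagate the error back
        optimizer.step()                       # nudge weights toward the labels
```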

Violent acts might include hacking actions, or blood, says Abhijit Shanbhag, CEO of Singapore-based Graymatics. If his engineers can't find a suitable scene, they film it themselves in the office.

Zeiler says Clarifai's algorithms can also recognise objects in a video that could be precursors to violence -- a knife or gun, for instance. But there are limits. One is that the software is only as good as the examples it is trained on. When someone decides to hang a child from a building, it's not necessarily something the software has been programmed to watch for.
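A hedged sketch of that precursor-object idea: run a pre-trained, general-purpose object detector over each frame and flag worrying labels. COCO's label set happens to include "knife" but not "gun", so this is illustrative rather than complete, and the 0.7 score threshold is an assumption, not Clarifai's setting.

```python
# A sketch of precursor-object spotting with a pre-trained COCO detector.
# Label set and score threshold are illustrative assumptions.
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights, fasterrcnn_resnet50_fpn)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

WORRYING = {"knife", "scissors", "baseball bat"}

def flag_precursors(frame, threshold=0.7):
    """Return names of worrying objects in one frame (CHW tensor, 0-1 floats)."""
    with torch.no_grad():
        detections = model([frame])[0]
    return [categories[int(label)]
            for label, score in zip(detections["labels"], detections["scores"])
            if score >= threshold and categories[int(label)] in WORRYING]

# A random tensor stands in for a real decoded video frame.
print(flag_precursors(torch.rand(3, 480, 640)))
```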

“As people get more innovative about such gruesome activity, the system needs to be trained on that,” said Shanbhag, whose company filters video and image content on behalf of several social media clients in Asia and elsewhere.

Another limitation is that violence can be subjective. A fast-moving scene with lots of gore should be easy enough to spot, says Junle Wang, head of R&D at France-based PicPurify. But the company is still working on identifying violent scenes that don't involve blood or weapons. Psychological torture, too, is hard to spot, says his colleague, CEO Yann Mareschal.

And then there's content that could be deemed offensive without being intrinsically violent -- an ISIS flag, for example -- says Graymatics' Shanbhag. That could require the system to be tweaked depending on the client.

Still need humans

Yet another limitation is that while automation may help, humans must still be involved to verify the authenticity of content that has been flagged as offensive or dangerous, said Mika Rautiainen, founder and CEO of Valossa, a Finnish company which finds undesirable content for media, entertainment and advertising companies.

Indeed, likely solutions would involve looking beyond the images themselves to incorporate other cues. PicPurify's Wang says using algorithms to monitor the reaction of viewers -- a sharp increase in reposts of a video, for example -- might be an indicator.
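That reaction-monitoring cue can be sketched with a simple rolling statistic: flag a video when its repost rate jumps far above its recent average. The window size and four-sigma threshold below are illustrative assumptions, not PicPurify's actual parameters.

```python
# A rolling-statistics sketch of the viewer-reaction cue: flag a video when
# its repost rate spikes well above the recent norm. Parameters are guesses.
from collections import deque
from statistics import mean, stdev

def make_spike_detector(window=30, sigmas=4.0):
    history = deque(maxlen=window)  # reposts-per-minute over recent minutes

    def check(reposts_this_minute):
        spike = (len(history) == window and
                 reposts_this_minute > mean(history) + sigmas * stdev(history))
        history.append(reposts_this_minute)
        return spike  # True -> route the video to human review

    return check

check = make_spike_detector()
for count in [3, 4, 2, 5, 3] * 6 + [250]:  # steady traffic, then a burst
    if check(count):
        print("spike detected:", count)    # fires on 250
```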

Michael Pogrebnyak, CEO of Kuznech, said his Russian-US company has added to its arsenal of pornographic image-spotting algorithms -- which mostly focus on skin detection and camera motion -- others that detect the logos of studios and warning-text screens.
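Skin detection of the kind Pogrebnyak describes can be approximated, very crudely, by measuring what fraction of a frame's pixels fall within a typical skin-tone colour range. The YCrCb bounds below are a textbook heuristic and the cutoff is an arbitrary placeholder; Kuznech's production algorithms are certainly more sophisticated.

```python
# A very crude stand-in for skin detection, assuming OpenCV (cv2) and NumPy:
# count the pixels falling in a textbook YCrCb skin-tone range. The bounds
# and the 0.4 cutoff are illustrative guesses, not Kuznech's values.
import cv2
import numpy as np

LOWER = np.array((0, 133, 77), dtype=np.uint8)     # Y, Cr, Cb lower bounds
UPPER = np.array((255, 173, 127), dtype=np.uint8)  # Y, Cr, Cb upper bounds

def skin_ratio(frame_bgr):
    """Fraction of pixels whose YCrCb values look like skin."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER, UPPER)
    return cv2.countNonZero(mask) / mask.size

def flag_for_review(frame_bgr, cutoff=0.4):
    return skin_ratio(frame_bgr) > cutoff

# A black frame stands in for a real decoded video frame.
print(skin_ratio(np.zeros((480, 640, 3), dtype=np.uint8)))  # 0.0
```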

Facebook says it is using similar techniques to spot nudity, violence or other topics that don't comply with its policies. A spokesperson didn't respond to questions about whether the software was used in the Thai and other recent cases.

Some of the companies said industry adoption was slower than it could be, in part because of the added expense. That, they say, will change. Companies that manage user-generated content could increasingly come under regulatory pressure, says Valossa's Rautiainen.

“Even without tightening regulation, not being able to deliver proper curation will increasingly lead to negative effects in online brand identity,” Rautiainen says.

A Graymatics employee shows how footage of a pretend fight between co-workers can be used to 'train' their software to watch and filter internet videos for violence, at their office in Singapore April 27, 2017. REUTERS/Edgar Su
