Can machine learning beat trolls?
SAN FRANCISCO — From self-driving cars to multilingual translation, machine learning, a form of artificial intelligence, underpins many of the technology industry’s biggest advances.
Now, Google’s parent company, Alphabet, says it plans to apply machine learning technology to promote more civil discourse on the Internet and make comment sections on sites a little less awful.
Jigsaw, a technology incubator within Alphabet, says it has developed a new tool for web publishers to identify toxic comments that can undermine a civil exchange of ideas. Starting Thursday, publishers can apply for access to Jigsaw’s software, called Perspective, free of charge.
“We have more information and more articles than any other time in history, and yet the toxicity of the conversations that follow those articles is driving people away from the conversation,” said Jared Cohen, president of Jigsaw, formerly known as Google Ideas.
Discussion in comments sections often devolves into an offensive and hateful exchange unless it is carefully managed. This has prompted some publishers to turn off the comments section on articles because moderating them can be time-consuming and difficult.
Jigsaw had a team review hundreds of thousands of comments to identify the types of comments that might deter people from a conversation. Based on that data, Perspective assigns each new comment a score from 0 to 100, indicating how similar it is to the comments identified as toxic.
The same methodology is being made available to publishers, who can use the scores to have human moderators review only those comments that score above a certain threshold, or to let readers filter out comments above a chosen level of toxicity.
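The threshold-based workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not Perspective’s actual API: the function name, the hard-coded toxicity scores, and the threshold value are all invented for the example; in practice the scores would come from the Perspective service itself.

```python
# Sketch of the moderation workflow the article describes: comments that
# score above a threshold are routed to human moderators, the rest are
# approved automatically. The scores here are made-up stand-ins for what
# a service like Perspective would return (the article describes a 0-100
# scale of similarity to comments labeled toxic).

def triage_comments(comments, review_threshold=70):
    """Split (text, score) pairs into comments needing review and approved ones."""
    needs_review, approved = [], []
    for text, score in comments:
        if score >= review_threshold:
            needs_review.append(text)
        else:
            approved.append(text)
    return needs_review, approved

# Illustrative comments with invented toxicity scores.
sample = [
    ("Great analysis, thanks for sharing.", 4),
    ("You are an idiot and should log off forever.", 92),
    ("I disagree with the premise of this piece.", 18),
]

flagged, ok = triage_comments(sample, review_threshold=70)
```

The same function doubles as the reader-facing filter the article mentions: lowering `review_threshold` hides more borderline comments, raising it shows more.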
Jigsaw worked with The New York Times and Wikipedia to develop Perspective. The Times’ comments section is managed by a team of 14 moderators who manually review nearly every comment.
Because this requires considerable labour and time, The Times allows commenting on only about 10 per cent of its articles.
Cohen said the technology was in its early stages and might flag some false positives, but he expected that it would become more accurate over time with access to a greater set of comments.
Jigsaw, whose stated mission is to use technology to tackle “geopolitical challenges” such as cybersecurity attacks and online censorship, said it also saw opportunities for its machine-learning software to identify comments that are off-topic or unsubstantial.