Malta Independent

Much more to be done


some to go undetected.

Help for struggling tech giants

As I found tweets that I thought violated Twitter’s policies, I reported them. Most of them were removed quickly, some within an hour. But other obviously offensive posts took as long as several days to come down, and a few text-based tweets have still not been removed despite clearly violating Twitter’s policies. That shows the company’s content review process is not consistent.

It may seem that Twitter is getting better at removing harmful content, taking down a lot of posts and memes and suspending accounts, but much of that activity is not related to hate speech. Rather, much of Twitter’s attention has been on what the company calls “coordinated manipulation,” such as bots and networks of fake profiles run by government propaganda units.

In my view, the company could take a significant step by soliciting the help of members of the public, as well as researchers and experts like my colleagues and me, to identify hateful content. It’s common for technology companies – including Twitter – to offer payments to people who report security vulnerabilities in their software. However, all the company does for users who report problematic content is send an automatically generated message saying “thanks.” The disparity in how Twitter treats code problems and content reports delivers a message that the company prioritizes its technology over its community.

Instead, Twitter could pay people for reporting content that is found to violate its community guidelines, offering financial rewards for stamping out the social vulnerabilities in its system, just as if those users were helping it identify software or hardware problems. A Facebook executive expressed concern that this potential solution could backfire and generate more online hate, but I believe the reward program could be structured and designed to avoid that problem.

There are further problems with Twitter that go beyond what’s posted directly on its own site. People who post hate speech often take advantage of a key feature of Twitter: the ability of tweets to include links to other internet content. That function is central to how people use Twitter, sharing content of mutual interest from around the web. But it’s also a method of distributing hate speech.

For instance, a tweet can look totally innocent, saying “This is funny” and providing a link. But the link – to content not posted on Twitter’s servers – brings up a hate-filled message.

In addition, Twitter’s content moderation system only allows users to report hateful and threatening tweets – but not accounts whose profiles themselves contain similar messages. Some of these accounts – including ones with profile pictures of Adolf Hitler, and names and Twitter handles that advocate burning Jews – don’t even post tweets or follow other Twitter users. Sometimes they may simply exist to be found when people search for words in their profiles, again turning Twitter’s search box into a delivery system. These accounts may also – though it’s impossible to know – be used to communicate with others on Twitter via direct message, using the platform as a covert communication channel.

With no tweets or other public activity, it’s impossible for users to report these accounts via the standard content reporting system. But they are just as offensive and harmful – and need to be evaluated and moderated just like other content on the site. As people seeking to spread hate become increasingly sophisticated, Twitter’s community guidelines – but more importantly its enforcement efforts – need to catch up, and keep up.

If social media sites want to avoid becoming – or remaining – vectors for information warfare and plagues of hateful ideas and memes, they need to step up their efforts much more actively and, at the very least, have their thousands of full-time content-moderation employees search as a professor did over the course of a weekend.

This article is republished from The Conversation under a Creative Commons license. Read the original article here: http://theconversation.com/hate-speech-is-still-easy-to-find-on-social-media-106020.
