National Post (National Edition)

Twitter is ‘toxic place for women,’ says Amnesty International

- Dina Bass

SEATTLE • Women have been telling Twitter for years that they endure a lot of abuse on the platform. A new study from human rights watchdog Amnesty International attempts to assess just how much. A lot, it turns out.

About seven per cent of the tweets that prominent women in government and journalism received were found to be abusive or problematic. Women of colour were 34 per cent more likely to be targets than white women. Black women specifically were 84 per cent more likely than white women to be mentioned in problematic tweets.
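To read those figures as relative rates rather than absolute ones, a minimal worked illustration in Python follows; the 34 and 84 per cent figures come from the report, while the baseline rate assumed for white women is an invented number used only to show the arithmetic.

# Hypothetical illustration of "X per cent more likely" as a relative rate.
# The 34% and 84% figures come from the Amnesty report; the baseline rate
# assumed for white women below is invented purely for the arithmetic.
white_women_rate = 0.06                            # assumed baseline (hypothetical)
women_of_colour_rate = white_women_rate * 1.34     # 34 per cent more likely
black_women_rate = white_women_rate * 1.84         # 84 per cent more likely
print(f"women of colour: {women_of_colour_rate:.3f}")   # 0.080
print(f"black women: {black_women_rate:.3f}")            # 0.110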

After an analysis that eventually included almost 15 million tweets, Amnesty International released the findings and, in its report, described Twitter as a “toxic place for women.”

The organization, which is perhaps best known for its efforts to free international political prisoners, has turned its attention to tech firms lately, and it called on the social network to “make available meaningful and comprehensive data regarding the scale and nature of abuse on their platform, as well as how they are addressing it.”

“Twitter has publicly committed to improving the collective health, openness, and civility of public conversation on our service,” Vijaya Gadde, Twitter’s head of legal, policy, and trust and safety, said in a statement in response to the report.

“Twitter’s health is measured by how we help encourage more healthy debate, conversations, and critical thinking. Conversely, abuse, malicious automation, and manipulation detract from the health of Twitter. We are committed to holding ourselves publicly accountable towards progress in this regard.”

The project, called “Troll Patrol” and run together with Montreal-based AI startup Element AI, started by looking at tweets aimed at almost 800 female journalists and politicians from the U.S. and the U.K. It didn’t study men. More than 6,500 volunteers analyzed 288,000 posts and labelled the ones that contained language that was abusive or problematic (“hurtful or hostile content” that doesn’t necessarily meet the threshold for abuse). Each tweet was analyzed by three people, according to Julien Cornebise, who runs Element’s London office, and experts on violence and abuse against women also spot-checked the volunteers’ grading.
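As a rough sketch of how three volunteer judgments per tweet could be reduced to a single crowd label by majority vote: the label names, the tie-handling rule and the function below are assumptions for illustration, not the actual Troll Patrol pipeline.

from collections import Counter

# Hypothetical sketch: collapse three volunteer labels per tweet into one crowd
# label by majority vote. Label names and the tie rule are assumptions, not
# the actual Troll Patrol pipeline.
def majority_label(labels):
    """Return the label given by at least two of the three volunteers,
    or None if all three disagree (such tweets could go to expert review)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

print(majority_label(["problematic", "problematic", "neither"]))  # -> problematic
print(majority_label(["abusive", "problematic", "neither"]))      # -> None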

Cornebise’s team used machine learning to extrapolate the human-generated analysis to a full set of 14.5 million tweets mentioning the same figures. The algorithm Cornebise’s team built did pretty well, he said, but not well enough to replace humans as content moderators.
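A minimal sketch of what that extrapolation step could look like, assuming a simple bag-of-words text classifier trained on the crowd-labelled tweets and then applied to the unlabelled millions; scikit-learn and the toy data below are illustrative assumptions, since the article does not describe Element AI’s actual model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical illustration: train a classifier on the crowd-labelled tweets,
# then predict labels for the much larger unlabelled set. The toy data and
# model choice are assumptions; the study's actual features are not given here.
labelled_texts = ["great reporting today", "get back in the kitchen", "I disagree with her vote"]
labels = ["neither", "abusive", "neither"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(labelled_texts, labels)

unlabelled_texts = ["well argued piece", "nobody wants to hear from you"]
print(list(zip(unlabelled_texts, model.predict(unlabelled_texts))))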
