National Post (National Edition)

Twitter users can finally filter out ‘egg’ accounts from their notifications

Other measures introduced to combat abuse

- ABBY OHLHEISER The Washington Post

Twitter announced a bunch of mostly iterative changes Wednesday in its fight against abuse. But one was particularly welcome to users who have ever experienced an onslaught of anonymous harassment on the platform: It’s finally possible to filter out accounts with the default “egg” profile picture, so that they don’t appear in your notifications.

Sure, it’s considered very bad form on Twitter to keep your profile picture as the default egg, but that’s not why this overdue change is useful. Twitter makes it very easy for anyone to create new accounts, including those who make “throwaway” Twitter handles specifically for the purpose of harassing someone else. This change makes it harder for those accounts to reach their intended targets.

In addition to introducing a notification filter for anyone without a custom profile picture, Twitter will also let you filter out notifications from users who haven’t bothered to verify their email addresses or phone numbers.

Twitter said the filters would be available to “everyone on Twitter” once they roll out, and provided instructions for how to turn them on via the iPhone app.

The platform also improved upon its rollout of a “mute by keyword” feature for notifications that was first introduced in November. Users can now also mute keywords, phrases and conversations from their timelines, and set time limits for how long those mutes will last. If I wanted to keyword mute, say, mentions of The Walking Dead or whatever show I don’t watch that everyone else on Twitter loves, I could set a mute for the relevant keywords, and let it automatically expire after a day, a week or a month. Or, I could let the mute live on indefinitely, which is tempting.

Ed Ho, Twitter’s vice-president of engineering, acknowledged in a blog post that the two changes were widely requested by Twitter’s user base.

The company has long struggled to effectively address the abuse and harassment problem that plagues many of Twitter’s users, not to mention the reputation of Twitter itself. In recent months, it has tried to do more about it, rolling out long-requested tweaks to its safety procedures. Twitter is also clearly trying to regain the trust of users who have given up on its ability to effectively solve this problem, which brings us to another change announced Wednesday: Algorithms are starting to play a bigger role in how Twitter identifies potential abuse.

Twitter is starting to “identify accounts as they’re engaging in abusive behaviour, even if this behaviour hasn’t been reported to us,” Ho wrote. And when its algorithms do detect potentially abusive behaviour, Twitter is issuing temporary limitations on those accounts. Some have asked Twitter to be more proactive in identifying potential abuse, instead of simply relying on user reports and the moderators who evaluate those reports.

But the rollout of this change hasn’t been without controversy. The “timeouts” alarmed a number of users last week, who noticed them before Twitter’s official announcement, because it wasn’t clear what exactly was prompting these timeouts to be issued. Some users who, for instance, swore at the official account of the vice president were triggering these punishments last week. People started speculating that Twitter was starting to punish any account that swore at a verified user, reviving criticism of how Twitter handles high-profile instances of abuse and harassment.

There’s plenty to be said about Twitter’s longtime inconsistency in enforcing its own abuse policies, and the role that media attention has played in getting the company to take action. On Wednesday, Twitter confirmed that the new timeouts might be triggered “if an account is repeatedly tweeting without solicitation at non-followers,” among other factors.
