New Straits Times

Curbing bullying on Instagram

The effort is part of a larger attempt by the social networking platform to clean itself up, writes Kevin Roose

- NYT

IF you were to rank all the ways humans can inflict harm on one another by severity, it might be a few pages before you got to “intentional inducement of FOMO”. Purposefully giving someone else FOMO (fear of missing out) is not a crime, or even a misdemeanour.

But it is a big problem on Instagram, where millions of teenagers go every day to check on their peers. And it is one of the subtle slights that Instagram is focused on classifying as part of its new anti-bullying initiative, which will use a combination of artificial intelligence (AI) and human reviewers to try to protect its youngest users from harassment and pain.

The anti-bullying effort is part of a larger attempt by Instagram and its parent company, Facebook, to clean themselves up.

Both platforms have struggled to contain a flood of toxic behaviour, extreme content and misinformation on their services.

Instagram is particularly vulnerable because of its young user base. About 70 per cent of American teenagers use the service, according to the Pew Research Center. And 42 per cent of cyberbullying victims ages 12 to 20 reported being bullied on Instagram, according to a 2017 survey by the British anti-bullying organisation Ditch the Label.

Recently, I went to Instagram’s New York office with several other reporters to hear its executives describe how they’re trying to fight bullying.

It’s not the company’s first time talking about the topic — the former chief executive, Kevin Systrom, discussed bullying all the way back in 2016 — but it is a subject of renewed focus there.

Last year, Instagram announced an effort to use AI to label instances of bullying within photos.

This year, it said it would begin testing new features aimed at improving teenagers’ mental health, including the ability to hide “like” counts on posts.

“There are a lot of teens using Instagram, so we actually see new behaviours and words all the time, and we need to work quickly to understand if these new trends are harmful,” said Bettina Fairman, Instagram’s director of community operations.

These efforts are still unproven, and, like any Facebook-related promises, they’re best taken with a heaping handful of salt. But Instagram seems to be more aggressive about this than competing platforms like Twitter and Snapchat.

If you want to stamp out bullying, you first have to know what forms it takes. So late last year, Instagram began assembling focus groups of teenagers and parents and gathering feedback about what types of unwanted behaviour they encountered on the platform.

Some were the predictable types of threats and insults, like rating users’ attractiveness on a one-to-10 scale, a practice that Instagram already prohibits, while others were more unexpected.

Some teenagers reported feeling bullied when their exes showed off new boyfriends or girlfriends in a menacing way — for example, by tagging the jilted ex in the photo to trigger a notification and rub in the fact that they had moved on to someone new.

Instagram came up with a name for this category of bullying, “betrayals”, and started training an algorithm to detect it.

“One of the things we learnt early on is that how we were defining bullying in our Community Guidelines doesn’t necessarily capture all the ways people feel like they’re being bullied,” said Karina Newton, Instagram’s global head of public policy.

Not all of these behaviours necessarily violate Instagram’s rules. The company has not yet decided where to draw every line; for now, it is just trying to understand bullying’s many flavours and teach machines to flag them for human reviewers, who then decide whether or not they violate the platform’s rules.

Facebook and Instagram already use AI to detect various types of off-limits content, including nudity, child exploitation and terrorism-related material. But classifying bullying is a bigger challenge, because doing so often depends on the context of a social interaction.

Take one of the examples used by the executives during the briefing: a photo of two teenage girls that was posted to Instagram with the caption “love you hoe”.

Normally, Instagram’s systems would pick up on the derogatory word “hoe” and flag the post to a human reviewer. But in context, it’s clear that the user meant it as a term of endearment, so the correct action would be to leave the post up.

Or consider a hypothetical photo of a teenage couple at the beach, posted to Instagram with the caption “Wish you were here, Amanda!”

Normally, that post would be bland and inoffensive. But you can imagine contexts in which it would constitute bullying.

Are the people in the photo mocking Amanda for being the only senior not invited to Beach Week? If so, it could constitute “intentional inducement of FOMO”.

Is Amanda the ex-girlfriend of the boy in the photo, being taunted by the new girlfriend? If so, it could be classified as a form of betrayal.

Is there a whale in the background that is tagged as Amanda, as a cruel joke about her weight? If so, it could be classified as an insult.

It’s odd to realise that what Instagram is describing, a planetary-scale AI surveillance system for detecting and classifying various forms of teenage drama, is both technically possible and, sadly, maybe necessary.

It should make us all question whether a single company should have so much power over our social relationships, or whether any platform of Instagram’s size can be effectively governed at all.

But if you have to have an Instagram-size platform, there are arguments in favour of using AI to seek out bad behaviour, rather than waiting for users to report it.

One reason, Instagram’s executives said, is that teenagers often don’t report bullying when it happens to them. Some fear social repercussions or retaliation from their bullies, while others fear that their parents will take away their phones.

Eventually, the company hopes its AI will be good enough to identify and remove all types of bullying on its own, without the need for human review.

But, executives cautioned, that day may be distant, especially outside the English-speaking world, where it has fewer moderators and less local-language data available to train algorithms.

“Our algorithms aren’t yet as good as people when it comes to understanding context,” Fairman said.

Instagram’s critics probably won’t be satisfied that, after making billions of dollars in profits and contributing to what researchers say is an epidemic of teenage depression and anxiety, the company is now trying to dismantle the culture of social media bullying it helped to create.

“Where were they five years ago? It’s about time, honestly,” said Jim Steyer, the chief executive of Common Sense Media, a nonprofit watchdog group that advocates for better protections in children’s technology.

“This has been a huge issue for years, and most of these companies buried their heads in the sand until they were under pressure to do something about it.”

It’s true that Instagram’s anti-bullying effort may be useful for generating good public relations, and that the company seems to be making up some of the details as it goes along.

It’s also true that Instagram has a multitude of serious problems on its hands — including anti-vaccine misinformation and rampant hate speech and extremism — and that building AI to detect bullying is probably a more convenient challenge than rethinking the ad-driven business model and platform design issues that encourage antisocial behaviour in the first place.

But better too little, too late than nothing, ever. Instagram’s bully-detecting AI is a good idea, and a step toward giving young people an easier time navigating the vicissitudes of 21st-century adolescence.

For their sake, let’s hope it works.

Instagram is particularly vulnerable because of its young user base.

Instagram product head Adam Mosseri discusses the social network’s anti-bullying efforts during the F8 Facebook Developers conference.
