
“I seem to recall organising a coordinated intrusion of a forum inhabited by Morris dancers” Whatever happened to good-humoured trolling, Davey asks, before explaining why it’s not such a good idea to pay up for ransomware


I am a man of a certain age, unfortunately quite an old one these days. Which is why I can remember not only online life before the internet became accessible to the masses, when the World Wide Web had yet to be invented, but also a time when trolling wasn’t a hateful activity. Indeed, back in the early days of online communities I used to be quite well known on one of them, CIX, for being a pain in the arse – but one without a nasty bone in his body. I was trolling before trolling had been given a name.

Yet my particular brand of trolling, often fuelled by a little too much whisky, reflected my personality: Winder by name, wind-up merchant by nature. I liked to make people laugh, although not everyone thought it funny at the time, of course. I seem to recall organising a coordinated intrusion of a forum inhabited by Morris dancers where we proceeded to start “dancing” by posting messages such as “jingle jangle” followed by “whacks a stick” and more jingling. I’m not sure that was appreciated, but it was humorous.

I, er, once “created” a fake account at an online service that existed purely to follow one user around while pretending to be their mum: “Your dinner’s ready” or “Are you wearing clean underpants?” were typical, and typically stupid, examples.

I even collaborated with another well-known troll at the time, who went by the name of Fis and was rather more confrontational than myself, although still mild-mannered compared to the obnoxious, threatening and often illegal trolling of today. When someone messaged him to complain, he would forward it to me and I would reply. When someone replied to me, I would forward it back to Fis, and so it went on. The end effect was that many people thought we were the same person, and 20 years later I still get the odd email asking about the days when I was known as Fis.

What these admittedly immature, often downright childish, activities had in common was that they all had humour at their heart, and a desire to stimulate a response, be that positive or negative. I just wanted to engage the community and, hopefully, make people smile. I like to think I succeeded more than I failed.

Somewhere, somehow, everything changed. I can’t put my finger on precisely when, but it would have coincided with the growth of the public internet. Human nature, sadly, will out, and the internet facilitated the broadcasting of the hateful, racist, sexist, homophobic, intolerant sides of some people. To this day, that trolling continues across social media, not only as direct messages but also in the form of memes.

Which brings me to the point of this stroll down my virtual memory lane to simpler, more innocent, internet times. Facebook has decided enough is enough and is trying to do something more positive to stop hateful memes in their tracks. That something is a reward pool of $100,000, to be divided between developers who can create a meme “hate speech” detector, built around a dataset Facebook has compiled of more than 10,000 examples of such memes.

This is a lot harder than it may at first sound. Your average person on the Clapham omnibus would imagine it’s just a matter of a team at Facebook monitoring posts, maybe with the help of some keyword filter that flags suspicious ones, and deleting those that break the rules. Your average person, however, grasps neither the sheer size of Facebook and the volume of traffic it generates, nor that mixed content combining words and images introduces a very specific contextual difficulty. The volume of traffic makes a purely human intervention solution impractical and, frankly, all but impossible to achieve. Keyword filtering wouldn’t make things any easier, as the problem here is one of context.

An average meme combines two media: an image and some text. The image might be perfectly acceptable, the text perfectly harmless; the combination of the two, however, introduces context and that could be neither. The analytical challenge of addressing this combination is confirmed in a Cornell University paper (pcpro.link/311cornell) from May 2020 that looks at the problem of detecting hate speech in multimodal memes. It found that humans had an accuracy rate of 84.7% in detecting such memes, whereas state-of-the-art deep-learning models could only achieve 64.7%.
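To make that multimodal challenge a little more concrete, here is a deliberately simplified and entirely hypothetical sketch in Python (using PyTorch) of the “fuse then decide” idea behind such deep-learning detectors: the image and the text are each reduced to a feature vector, and only their combination gets classified. None of this reflects Facebook’s actual system; the names and sizes are invented purely for illustration.

# A toy "late fusion" classifier: the decision is made on the image features
# and the text features together, never on either one alone. All names and
# dimensions here are invented for illustration only.
import torch
import torch.nn as nn

class MemeFusionClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=512, hidden_dim=256):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one logit: hateful or not
        )

    def forward(self, image_features, text_features):
        # Concatenating the two modalities lets the network learn that a
        # harmless picture plus harmless words can still be a hateful pairing.
        joint = torch.cat([image_features, text_features], dim=-1)
        return self.fusion(joint)

# Random vectors stand in for the output of real image and text encoders.
model = MemeFusionClassifier()
image_features = torch.randn(1, 512)
text_features = torch.randn(1, 512)
score = torch.sigmoid(model(image_features, text_features))
print(f"probability the meme is hateful: {score.item():.2f}")

The research models are, of course, far more sophisticated, but that basic shape – combine the two halves first, judge them second – is precisely why a keyword filter or an image filter on its own falls short.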

I’ve explained why humans alone are not a viable solution for Facebook, although humans combined with an

@happygeek Davey is a journalist and consultant specialising in privacy and security issues

BELOW Solve the hateful memes identification problem and win $50,000
