RESEARCHERS DEVELOP AI TO CURB HATE SPEECH ONLINE
LONDON: With online hate speech increasingly threatening democracy, researchers are developing artificial intelligence similar to malware filters to ‘quarantine’ it, giving users control over their exposure to it without resorting to censorship.
A linguist and an engineer at the University of Cambridge published their proposal in the journal Ethics and Information Technology.
They are using databases of threats and violent insults to build algorithms that score the likelihood that an online message contains a form of hate speech. As these algorithms are refined, potential hate speech could be identified and ‘quarantined’.
Users would receive a warning alert with a ‘Hate O’meter’ – the hate speech severity score – the sender’s name, and an option to view the content or delete it unseen, much as with spam and malware filters.
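The workflow described above – score a message, compare against a threshold, and either deliver it or hold it behind a warning – can be sketched roughly as follows. This is a minimal illustration, not the researchers’ actual system: the keyword-based scoring function, the threshold value, and all names here are assumptions standing in for a trained classifier.

```python
# Toy sketch of the score-and-quarantine idea. The keyword-based scorer
# below is a placeholder assumption; the real system would use a trained
# machine learning classifier.

def hate_score(message: str, flagged_terms: set[str]) -> float:
    """Illustrative severity score: the fraction of words that appear
    in a list of flagged terms (a stand-in for a real model's output)."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in flagged_terms)
    return hits / len(words)

def quarantine(message: str, sender: str, flagged_terms: set[str],
               threshold: float = 0.2) -> dict:
    """Quarantine a message whose score meets the threshold, returning
    the 'Hate O'meter' alert a user would see; otherwise pass it through."""
    score = hate_score(message, flagged_terms)
    alert = {"sender": sender, "hate_o_meter": round(score, 2)}
    if score >= threshold:
        # Message is held back; the user chooses to view or delete unseen.
        alert["quarantined"] = True
        alert["options"] = ["view content", "delete unseen"]
    else:
        alert["quarantined"] = False
    return alert
```

A harmless message passes straight through, while one scoring above the threshold is held with its severity score and sender attached, leaving the final decision to the user rather than to automatic censorship.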
“Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining,” says co-author and linguist Stefanie Ullman.
“In fact, a lot of hate speech is actually generated by software such as Twitter bots.”
The researchers say their proposal is not a magic bullet, but sits between the “extreme libertarian and authoritarian approaches” of either entirely permitting or prohibiting certain language online. The user becomes the arbiter.
In the paper, the researchers refer to detection algorithms achieving 60% accuracy – not much better than chance. Co-author and engineer Marcus Tomalin’s machine learning lab has since raised this to 80%, and he anticipates continued improvement of the mathematical modelling.