ARGUMENT ALERT
SOCIAL NETWORKS are hotbeds for explosive conversations. Verbal clashes, abuse, bullying and harassment are commonplace on platforms such as Twitter and Facebook. Researchers at Cornell University may be able to help tackle this. In a study spotted by The Verge, the scientists fed a machine-learning system large volumes of conversation data so that it could learn the indicators that a discussion is likely to turn sour. Looking back over a conversation, it isn’t difficult for humans to spot the early warning signs. The researchers believe that once automated predictions become good enough, a conversation could be salvaged in its initial stages.
For example, an obvious lack of polite words, overly confrontational questions and a tendency to make things personal are all signs that a conversation is headed down the wrong path. Given enough instances of this, an AI system will quickly learn to spot trouble. Potential uses include better moderation on social networks and comment forums, prevention of abuse or harassment, and even better meetings in the workplace.
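The general idea can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not the Cornell researchers' actual model: it scores the opening turns of a conversation on the kinds of early-warning features the article mentions (politeness markers, insults, confrontational questions aimed at "you"). The word lists and weights are invented stand-ins; a real system would learn such features from large volumes of labelled conversations.

```python
import re

# Illustrative word lists -- placeholders, not the researchers' features.
POLITE = {"please", "thanks", "thank", "appreciate", "sorry", "perhaps"}
HOSTILE = {"stupid", "idiot", "liar", "nonsense", "ridiculous"}

def toxicity_risk(turns):
    """Return a crude 0-1 risk score for a list of opening messages."""
    score, total = 0.0, 0
    for turn in turns:
        words = set(re.findall(r"[a-z']+", turn.lower()))
        total += 1
        if words & POLITE:
            score -= 1.0   # polite openers lower the risk
        if words & HOSTILE:
            score += 2.0   # insults raise it sharply
        if "you" in words and "?" in turn:
            score += 1.0   # personalised, confrontational questions
    # squash the running score into the 0-1 range
    return max(0.0, min(1.0, 0.5 + score / (2 * max(total, 1))))

civil = toxicity_risk(["Thanks, that makes sense.",
                       "Please share the link when you can."])   # → 0.0
heated = toxicity_risk(["Why would you even say that?",
                        "That idea is nonsense and you know it."])  # → 1.0
```

Even this toy scorer separates a civil exchange from a heated one; the point of training on real data is to learn subtler signals than a hand-written word list can capture.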