Fake news can fool robots – and you
UNINFORMATIVE as fake news may be, it’s shedding light on an important limitation of the algorithms that have helped make the likes of Facebook and Google multibillion-dollar companies: They’re no better than people at recognising what is true or right.
Remember Tay, the Microsoft bot that was supposed to converse breezily with regular folks on Twitter?
People on Twitter are nuts, so within 16 hours it was spewing racist and anti-Semitic obscenities and had to be yanked. More recently, Microsoft released an updated version called Zo, explicitly designed to avoid certain topics, on the smaller social network Kik. Zo’s problem is that she doesn’t make much sense.
The lesson from these experiments: Algorithms, machine learning, artificial intelligence or whatever else you’d like to call such things are not good at general knowledge and understanding. They can avoid a blacklist of topics, or respond in some special way to a whitelist, but that’s about it. They have no underlying model of the world that allows them to make nuanced distinctions between truth and falsehoods. Instead, they rely on pattern matching from a large corpus of consistently true information.
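The blacklist-and-whitelist approach described above can be sketched in a few lines of code. The topic lists, replies and function name here are invented for illustration; this is not how Zo or any real chatbot is actually implemented.

```python
# Minimal sketch of blacklist/whitelist topic filtering. The bot has no
# model of the world: it just matches words against two hand-made lists.

BLACKLIST = {"politics", "religion"}               # topics the bot refuses
WHITELIST = {"weather": "Lovely day, isn't it?"}   # topics with canned replies

def respond(message: str) -> str:
    words = set(message.lower().split())
    if words & BLACKLIST:                # any blacklisted topic? refuse
        return "I'd rather not talk about that."
    for topic, reply in WHITELIST.items():
        if topic in words:               # whitelisted topic? canned reply
            return reply
    return "Tell me more!"               # everything else: an empty fallback
```

Everything outside the two lists collapses into the same generic fallback, which is why a bot built this way can dodge trouble but, as the column notes, doesn't make much sense.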
That’s not to say they can’t infer information, or that they are logically flawed. They excel in tiny, toy universes where the rules of the game are precisely understood and consistent – games such as chess or Go, for example. They can even handle trivia, as the success of IBM’s Watson in playing “Jeopardy!” has demonstrated.
Watson’s ability to study and recall data involves a lot of sophisticated machine learning and graph theory. But those data – the “ground truth” for Watson, consisting of articles, research reports, blogs and tweets found on the internet – must be reliable. If the internet were half wrong, or 99% wrong, Watson would be terrible at “Jeopardy!”
Our society is embroiled in a debate about what is true, what is opinion and what is propaganda, and it leaves most of us confused. Why should artificial intelligence be any different?
It would be great if an algorithmic gatekeeper could help us out. Google, for one, has done a pretty good job of algorithmically vetting websites for quality.
Even here, though, groups devoted to propaganda around Jews, women, Hitler and Muslims have managed to game autocomplete and search algorithms, leading users to bogus websites. Google has cleaned up the more embarrassing examples, but by employing a sophisticated version of blacklisting rather than any deep change in its algorithmic methodology.
Companies such as Google and Facebook, which have a lot of money riding on algorithms, will naturally try to make the case that the public should keep trusting them.
But in an environment of intentionally false information, users will need to move past the algorithms and decide which individuals – and which news sources – to rely on for vetted information. – Bloomberg
O’Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of Weapons of Math Destruction.