Saturday Star

Fake news can fool robots – and you

- CATHY O’NEIL

UNINFORMATIVE as fake news may be, it’s shedding light on an important limitation of the algorithms that have helped make the likes of Facebook and Google multibillion-dollar companies: They’re no better than people at recognising what is true or right.

Remember Tay, the Microsoft bot that was supposed to converse breezily with regular folks on Twitter?

People on Twitter are nuts, so within 16 hours it was spewing racist and anti-Semitic obscenities and had to be yanked. More recently, Microsoft released an updated version called Zo, explicitly designed to avoid certain topics, on the smaller social network Kik. Zo’s problem is that she doesn’t make much sense.

The lesson from these experiments: Algorithms, machine learning, artificial intelligence or whatever else you’d like to call such things are not good at general knowledge and understanding. They can avoid a blacklist of topics, or respond in some special way to a whitelist, but that’s about it. They have no underlying model of the world that allows them to make nuanced distinctions between truth and falsehood. Instead, they rely on pattern matching against a large corpus of consistently true information.
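To make that limitation concrete, here is a minimal sketch in Python of the kind of blacklist/whitelist filtering described above. The topic lists and canned replies are invented for illustration; this is not Microsoft’s actual code, only the shape of the shallow safeguard.

    # Hypothetical illustration of blacklist/whitelist topic filtering.
    # The topics and replies below are invented, not Microsoft's code.

    BLACKLIST = {"politics", "religion"}              # topics the bot refuses to discuss
    WHITELIST = {"weather": "Lovely day, isn't it?"}  # canned replies for safe topics

    def respond(message: str) -> str:
        words = set(message.lower().split())
        if words & BLACKLIST:
            return "I'd rather not talk about that."  # avoidance, not understanding
        for topic, reply in WHITELIST.items():
            if topic in words:
                return reply                          # pattern match, not reasoning
        # No model of truth or meaning to fall back on:
        return "Tell me more!"

    print(respond("What do you think about politics?"))  # refuses
    print(respond("How is the weather today?"))          # canned reply
    print(respond("Is the earth flat?"))                 # cannot judge truth

Nothing in such a filter knows what any of these topics mean; it only recognises strings, which is why a bot built this way can dodge a banned subject yet still fail to make sense.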

That’s not to say they can’t infer information, or that they are logically flawed. They excel in tiny, toy universes where the rules of the game are precisely understood and consistent – games such as chess or Go, for example. They can even handle trivia, as the success of IBM’s Watson in playing “Jeopardy!” has demonstrated.

Watson’s ability to study and recall data involves a lot of sophisticated machine learning and graph theory. But those data – the “ground truth” for Watson, consisting of articles, research reports, blogs and tweets found on the internet – must be reliable. If the internet were half wrong, or 99% wrong, Watson would be terrible at “Jeopardy!”
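A toy simulation makes the dependence plain. Assume, purely for illustration, a system that answers each question by majority vote over a handful of sources, each of which is wrong independently at some fixed rate; all numbers here are invented.

    # Toy simulation: a majority-vote answerer is only as good as its corpus.
    # All parameters are invented for illustration.
    import random

    random.seed(0)

    def simulate(corpus_error_rate: float, docs: int = 5, questions: int = 10_000) -> float:
        correct = 0
        for _ in range(questions):
            # Each source asserts the true answer with probability (1 - error rate).
            votes = sum(random.random() > corpus_error_rate for _ in range(docs))
            if votes > docs / 2:      # majority vote picks the most popular claim
                correct += 1
        return correct / questions

    for err in (0.01, 0.40, 0.50, 0.99):
        print(f"corpus {err:.0%} wrong -> {simulate(err):.0%} of answers correct")

When half the sources are wrong, the majority is a coin flip; when 99% are wrong, the system confidently repeats the falsehood – exactly the fragility described above.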

Our society is embroiled in a debate about what is true, what is opinion and what is propaganda, and it leaves most of us confused. Why should artificial intelligence be any different?

It would be great if an algorithmic gatekeeper could help us out. Google, for one, has done a pretty good job of algorithmically vetting websites for quality.

Even here, though, groups devoted to propaganda around Jews, women, Hitler and Muslims have managed to game autocomplete and search algorithms, leading users to bogus websites. Google has cleaned up the more embarrassing examples, but by employing a sophisticated version of blacklisting rather than any deep change in its algorithmic methodology.

Companies such as Google and Facebook, which have a lot of money riding on algorithms, will naturally try to make the case that the public should keep trusting them.

But in an environment of intentionally false information, users will need to move past the algorithms and decide which individuals – and which news sources – to rely on for vetted information. – Bloomberg

O’Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of Weapons of Math Destruction.
