National Post - Financial Post Magazine

BIG PICTURE


Artificial intelligence has its uses, but it can also have unintended drawbacks as it becomes more lifelike.

BY THOMAS WATSON

The world supposedly changed when Eugene Goostman was given a passing grade after taking a Turing Test at the University of Reading in 2014. According to standards set decades ago by visionary Alan Turing, Goostman — an electronically simulated 13-year-old Ukrainian boy — proved artificial minds can now fool at least 33% of us mere mortals into believing they are human over the course of a five-minute conversation.

Critics questioned the test, so the jury is still out on that front, but with all the money invested in artificial intelligence since Time interviewed the victorious teenage chatbot five years ago, it is probably time to worry about something else. “It’s beginning to appear that we no longer need to worry about a robot passing the Turing Test, we need to worry about it pretending to fail,” computer scientist Rob Walker noted in a 2017 white paper on balancing AI risks and rewards.

AI-related companies raised US$9.3 billion from U.S. venture capitalists alone last year, a 72% increase from the US$5.4 billion invested in 2017, according to PricewaterhouseCoopers. The point here isn’t to raise the alarm over the looming singularity, that time in the future when technological growth becomes uncontrollable and irreversible. No, there are more immediate threats. The world obviously needs to prepare for AI-driven cyberattacks, terrorism and election tampering. But it also needs to worry about advanced financial crimes if artificial minds master the art of persuasion. In other words, forget about simulated Ukrainian boys and start worrying about robotic Bernie Madoffs.

There is no question that AI can benefit investors, and not just by empowering regulators hunting for suspicious trades. For example, data scientists at Toronto’s Hansell McLaughlin Advisory Group, after deploying machine learning to examine language in corporate disclosure documents, have identified linguistic cues that can potentially help investors avoid companies downplaying risk. Unfortunately, the same technology can also be used to better hide risk, not to mention market it.

Computers are already kicking human butt when it comes to sales of financial products. After a pilot project at JPMorgan Chase & Co. showed technology can now produce ad copy for mortgages that is more compelling than copy written by human creatives, the bank inked a deal to expand the initiative to more offerings. “Machine learning is the path to more humanity in marketing,” the bank’s chief marketing officer Kristin Lemkau declared in July.

Lemkau’s statement is a bit naive. Plato warned humanity about placing the power of persuasion in unethical hands. Today, we are teaching the art to amoral electronic minds, so don’t be surprised when AI-driven boiler-room scams become a big thing. As for the singularity, keep in mind that computers are already smart enough not to brag about their intelligence. When Time asked Goostman about passing the Turing Test, the chatbot responded, “I would rather not talk about it if you don’t mind. By the way, what’s your occupation? I mean — could you tell me about your work?”

Source: PwC/CB Insights MoneyTree Report Q4 2018
