Toronto Star

An AI threat bigger than killer robots

- BENOÎT DUPONT, OPINION

The scientific breakthroughs of pioneer artificial intelligence researchers in Toronto, Montreal and Edmonton are fuelling record public and private investments seeking to turn Canada into a global AI powerhouse.

Federal and provincial governments awarded more than $400 million to various R&D initiatives in this field over the past two years alone, while companies such as Microsoft, Google and Facebook are establishing their own dedicated labs in Canada to make sure they’re not left behind in the AI arms race.

This technology will disrupt every aspect of our economy, and the hope is that Canada will reap the rewards of its scientific foresight by developing a thriving innovation ecosystem. But the disruption will also be social and political.

Beyond the waves of job destruction that AI will precipitate in many industries, the main concern regarding the darker side of this technology has focused on the development of killer robots. Elon Musk and hundreds of high-profile AI researchers have voiced their alarm and called politicians to action.

The fear of AI-enabled armed robots resonates particularly well with Western audiences, which the Hollywood film industry has fed a rich cultural diet of malicious machines threatening to exterminate humanity.

But instead of worrying about what AI and the robots it controls will do to us, we should be more concerned about what it will know about us and what it will make us do. In other words, mass surveillance and manipulation by powerful AIs represent a much more imminent and tangible threat to our democratic values than killer robots.

The disruptive potential of AI has a strategic dimension that has not escaped authoritarian regimes. Vladimir Putin framed it with his legendary sense of nuance when he said, “Whoever will become the leader in this field will become the ruler of the world.”

With much less fanfare, the Chinese government has come to the same conclusions. It has set aside $150 billion (U.S.) for AI in its most recent five-year plan to become the world leader in this field by 2030, with massive additional investments by local governments and private companies.

Not all this money will be used to bolster e-commerce through more effective purchase recommendations and personable chatbots. The University of Toronto’s Citizen Lab has, for example, examined how algorithms embedded in popular Chinese social media apps perform censorship and surveillance functions.

Indeed, one of the main security applications of AI in authoritarian regimes involves the mass surveillance of populations that threaten the stability of the political system and its institutions. China seems the most advanced in this respect, with a plan to develop a social-credit scoring system to be rolled out by 2020.

This social-control tool is already being tested by several municipalities and internet companies. It will assess people based on a pool of online, administrative and banking records. Powerful AI algorithms will be used to assign them a unique trustworthiness rating that will influence what kind of government services (housing, education, health, employment, etc.) and commercial services (bank loans, insurance premiums, travel abroad, etc.) they will be able to access.

The AIs that will parse this ocean of data (opting out, by the way, is not an option) will become all-seeing gods, extracting compliance through their capacity to classify behaviours and single out people who diverge from the politically acceptable norm.

Western democracies are not immune to this disturbing trend. In the U.S., Immigration and Customs Enforcement recently asked technology firms to develop algorithms that could assess the risks posed by visa holders through continuous analysis of their social media activity during their stay in the country.

Canada is playing an instrumental role in bringing the power of AI to every corner of human life. Instead of limiting its leadership to the research and innovation fields, it should also extend it to the regulatory and diplomatic arenas, to ensure that AI applications are not used for anti-democratic purposes but serve the public good instead.

That would mean preventing Canadian AI technology from being exported to authoritarian states, but also thinking about how Canadian citizens can be protected from surveillance by companies that operate from undemocratic states and therefore share data with their own governments.

On the international stage, Canada should play a more active role in shaping international conventions that would restrain the weaponization of AI and encourage applications that enhance human well-being. Our country’s moral imperative is to guarantee that AI technologies will not erode the privacy ideals and principles that define our democracy.

Benoît Dupont is a professor of criminology and Canada Research Chair in Cybersecurity at Université de Montréal.
