Toronto Star

Canada must regulate AI quickly

- DANIEL TSAI CONTRIBUTOR DANIEL TSAI IS A LECTURER IN LAW, TECHNOLOGY, AND BUSINESS AT THE UNIVERSITY OF TORONTO AND TORONTO METROPOLITAN UNIVERSITY, AND A FORMER SENIOR POLICY ADVISER

Imagine a musical collaboration between Drake and the Weeknd that in a matter of days racks up nine million listeners — with some fans calling it the best music Drake’s made in the last two years. Except it never happened. The voices, lyrics and beats were the sole result of artificial intelligence.

Notably, U.S. Federal Trade Commission chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these (AI voice and video cloning) tools are a serious concern.”

Recently, the CBC reported a spate of fraud cases across Canada in which scammers used voice-cloning software to mimic the voices of victims’ loved ones.

The creator of the AI tune, Ghostwriter, says he’s been “a ghostwriter for years and got paid close to nothing just for major labels to profit” and calls this his way of getting revenge and disrupting the music industry.

Consequently, the AI-generated Drake track has been repeatedly taken down from TikTok, but having gone viral, it may never disappear.

In response, Universal Music, Drake’s label, released a statement calling “the training of generative AI using our artists’ music … both a breach of our agreements and a violation of copyright law,” and asserting that “platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

In the meantime, Canadian government regulators remain unwilling to regulate AI or to compel the executives of OpenAI (the creator of ChatGPT), Google, Microsoft or Meta, all of which are developing advanced AI systems, to address the serious privacy, security, employment and discrimination concerns AI raises.

The government says Canadians will need to wait for the Artificial Intelligence and Data Act (AIDA) to arrive in 2025, with a narrow mandate to address consumer privacy and data protection. By then, it’ll be too late: given AI’s dramatic and rapid evolution, its transformative social, political and economic impact will already have altered the world.

Gone are our notions of narrow AI, designed for tasks like facial or speech recognition, manufacturing or driving a car. In contrast, generative AI, or strong AI, will be able to solve complex problems and possess a theory of mind, meaning it can recognize human emotions and thought processes.

As with the AI Drake song, generative AI capabilities have already expanded from text to images, video, audio and other media. Generative AI can also autonomously code, debug itself and eventually perform creative thinking outside the scope of narrow tasks. There are no limits to what AI can do, including altering our sense of truth and reality, with no ability for us to control it once it becomes autonomous. Yet we wait with little to nothing in the way of law and regulation to ensure the public is protected.

Canada must pursue changes and should follow what the EU proposed in its 2021 Artificial Intelligence Act, including the following:

Classifying AI into categories such as high risk (like hiring decisions, overseeing infrastructure, calculating credit scores and grading, where people’s livelihoods are at stake), low risk, and prohibited systems (such as social scoring based on race), and establishing laws, regulatory authority and penalties for each category.

Transparency — requiring high-risk AI companies to register in a database and, if necessary, to provide access to the black box that forms their AI; and establishing a duty to inform users when they are interacting with AI or AI-generated content, news and media, including AI with emotion recognition (to avoid manipulation).

Accountability — establishing a cause-and-effect legal duty for the use of AI and its consequences, imposing penalties such as fines for non-compliance with AI regulations, and providing market surveillance authorities to handle violation complaints.

AI also raises questions about accuracy and trustworthiness. For example, hiring software can have racial, socioeconomic and gender biases, either because of seemingly objective data (if only men or certain races are predominantly hired, the algorithm may exacerbate that discrimination) or because the algorithms themselves are flawed by the personal biases of their developers (designed to hire candidates like themselves).

Indeed, Elon Musk has criticized ChatGPT for being “politically correct” and imposing the political biases of its creators, calling instead for “objective AI” and proposing to create a rival “TruthGPT.”

Importantly, AI reflects humanity all too well, and this is the very reason why it needs regulation.

MARCELO HERNANDEZ / GETTY IMAGES FILE PHOTO: A “Drake” song created by artificial intelligence has been listened to millions of times.
