Canada must regulate AI quickly
Imagine a musical collaboration between Drake and the Weeknd that in a matter of days racks up nine million listeners — with some fans calling it the best music Drake’s made in the last two years. Except it never happened. The voices, lyrics and beats were generated entirely by artificial intelligence.
Notably, U.S. Federal Trade Commission chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these (AI voice and video cloning) tools are a serious concern.”
Recently, the CBC reported a spate of fraud cases across Canada in which scammers used voice-cloning software to imitate the voices of victims’ loved ones.
The creator of the AI tune, Ghostwriter, says he’s been “a ghostwriter for years and got paid close to nothing just for major labels to profit,” and calls the song his way of getting revenge on and disrupting the music industry.
Consequently, the AI Drake track has been repeatedly taken down from TikTok, but having gone viral, it may never disappear.
In response, Universal Music, Drake’s label, released a statement calling “the training of generative AI using our artists’ music … both a breach of our agreements and a violation of copyright law,” adding that “platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
In the meantime, Canadian government regulators remain unwilling to regulate AI or to compel the executives of OpenAI (the creator of ChatGPT), Google, Microsoft and Meta, all of which are developing advanced AI systems, to address the serious privacy, security, employment and discrimination concerns AI raises.
The government says Canadians will need to wait until 2025 for the Artificial Intelligence and Data Act (AIDA), which has a narrow mandate to address consumer privacy and data protection. By then it will be too late: given AI’s dramatic and rapid evolution, its transformative social, political and economic impact will already have altered the world.
Gone are the days of narrow AI, designed for specific tasks like facial or speech recognition, manufacturing or driving a car. Generative AI, by contrast, is advancing toward strong AI: systems able to solve complex problems and possess a theory of mind, meaning they can recognize human emotions and thought processes.
As with the AI Drake song, generative AI capabilities have already expanded from text to images, video, audio and other media. Generative AI can also autonomously write and debug code, and may eventually perform creative thinking beyond the scope of narrow tasks. There are no limits to what AI can do, including altering our sense of truth and reality without any ability for us to control it once it becomes autonomous. Yet we wait with little to nothing in the way of law and regulation to ensure the public is protected.
Canada must pursue changes and should follow the lead of the EU’s Artificial Intelligence Act, proposed in 2021, including the following:
Classifying AI into categories such as high risk (hiring decisions, overseeing infrastructure, calculating credit scores and grading, where people’s livelihoods are at stake), low risk, and prohibited systems (such as social scoring based on race), and establishing laws, regulatory authority and penalties for each category.
Transparency: requiring high-risk AI companies to register in a public database and, where necessary, to provide access to the “black box” underlying their AI; and establishing a duty to inform users when they are interacting with AI or AI-generated content, news and media, including AI with emotion recognition (to avoid manipulation).
Accountability: establishing a clear legal duty of care for the use of AI and its consequences, imposing penalties such as fines for non-compliance with AI regulations, and designating market surveillance authorities to handle complaints of violations.
AI raises questions about accuracy and trustworthiness. Hiring software, for example, can have racial, socioeconomic and gender biases, whether because of skewed training data (if mostly men or people of certain races have historically been hired, the algorithm may exacerbate that discrimination) or because the algorithms themselves reflect the personal biases of their developers (such as designing the algorithm to favour candidates like themselves).
Elon Musk, for instance, has criticized ChatGPT for being “politically correct” and imposing the political biases of its creators, calling instead for “objective AI” and proposing a rival, “TruthGPT.”
Importantly, AI reflects humanity all too well, and this is the very reason why it needs regulation.