CBC Edition

AI could have catastrophic consequences - is Canada ready?

- Brennan MacDonald

Nations - Canada included - are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.

In a worst-case scenario, power-seeking superhuman AI systems could escape their creators' control and pose an "extinction-level" threat to humanity, AI researchers wrote in a report commissioned by the U.S. Department of State entitled Defence in Depth: An Action Plan to Increase the Safety and Security of Advanced AI.

The department insists the views the authors expressed in the report do not reflect the views of the U.S. government.

But the report's message is bringing the Canadian government's actions to date on AI safety and regulation back into the spotlight - and one Conservative MP is warning the government's proposed Artificial Intelligence and Data Act is already out of date.

AI vs. everyone

The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.

The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economic and strategically relevant domains.

While no AGI systems exist to date, many AI researchers believe they are not far off.

"There is evidence to sug‐ gest that as advanced AI ap‐ proaches AGI-like levels of human and superhuman general capability, it may be‐ come effectivel­y uncontrol‐ lable. Specifical­ly, in the ab‐ sence of countermea­sures, a highly capable AI system may engage in so-called power seeking behaviours," the au‐ thors wrote, adding that these behaviours could in‐ clude strategies to prevent the AI itself from being shut off or having its goals modi‐ fied.

In a worst-case scenario, the authors warn that such a loss of control "could pose an extinction-level threat to the human species."

"There's this risk that these systems start to get es‐ sentially dangerousl­y cre‐ ative. They're able to invent dangerousl­y creative strate‐ gies that achieve their pro‐ grammed objectives while having very harmful side ef‐ fects. So that's kind of the risk we're looking at with loss of control," Gladstone AI CEO Jeremie Harris, one of the au‐ thors of the report, said Thursday in an interview with CBC's Power & Politics.

The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.

"One example is cyber risk," Harris told P&P host David Cochrane. "We're al‐ ready seeing, for example, autonomous agents. You can go to one of these systems now and ask ... 'Hey, I want you to build an app for me, right?' That's an amazing thing. It's basically automat‐ ing software engineerin­g. This entire industry. That's a wicked good thing.

"But imagine the same system ... you're asking it to carry out a massive distrib‐ uted denial of service attack or some other cyber attack.

The barrier to entry for some of these very powerful opti‐ mization applicatio­ns drops, and the destructiv­e footprint of malicious actors who use these systems increases rapidly as they get more pow‐ erful."

Harris warned that the misuse of advanced AI systems could extend into the realm of weapons of mass destruction, including biological and chemical weapons.

The report proposes a series of urgent actions nations, beginning with the U.S., should take to safeguard against these catastrophic risks, including export controls, regulations and responsible AI development laws.

Is Canada's legislation already defunct?

Canada currently has no regulatory framework in place that is specific to AI.

The government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in June 2022. It's intended to set a foundation for the responsible design, development and deployment of AI systems in Canada.

The bill has passed second reading in the House of Commons and is currently being studied by the industry and technology committee.

In 2023, the federal government also introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, a code designed to temporarily provide Canadian companies with common standards until AIDA comes into effect.

At a press conference on Friday, Industry Minister François-Philippe Champagne was asked why - given the severity of the warnings in the Gladstone AI report - he remains confident that the government's proposed AI bill is equipped to regulate the rapidly advancing technology.

"Everyone is praising C27," said Champagne. "I had the chance to talk to my G7 colleagues and ... they see Canada at the forefront of AI, you know, to build trust and responsibl­e AI."

In an interview with CBC News, Conservative MP Michelle Rempel Garner said Champagne's characterization of Bill C-27 was nonsense.

"That's not what the ex‐ perts have been saying in testimony at committee and it's just not reality," said Rem‐ pel Garner, who co-chairs the Parliament­ary Caucus on Emerging Technology and has been writing about the need for government to act faster on AI. "C-27 is so out of date." AIDA was introduced be‐ fore OpenAI, one of the world's leading AI companies, unveiled ChatGPT in 2022. The AI chatbot represente­d a stunning evolution in AI tech‐ nology.

"The fact that the govern‐ ment has not substantiv­ely addressed the fact that they put forward this bill before a fundamenta­l change in tech‐ nology came out ... it's kind of like trying to regulate scribes after the printing press has gone into wide‐ spread distributi­on," said Rempel Garner. "The govern‐ ment probably needs to go back to the drawing board."

In December 2023, Gladstone AI's Harris told the House of Commons industry and technology committee that AIDA needs to be amended.

"By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times bey‐ ond what we see today," Har‐ ris told MPs. "AIDA needs to be designed with that level of risk in mind."

Harris told the committee that AIDA needs to explicitly ban systems that introduce extreme risks, address open source development of dangerously powerful AI models, and ensure that AI developers bear responsibility for ensuring the safe development of their systems - by, among other things, preventing their theft by state and non-state actors.

"AIDA is an improvemen­t over the status quo, but it re‐ quires significan­t amend‐ ments to meet the full chal‐ lenge likely to come from near-future AI capabiliti­es,"

Harris told MPs.
