‘It’s unworkable’
‘Online harms’ bill draws international ire
OTTAWA • The world is watching as Canada’s federal government prepares legislation to fight online harms such as hateful speech and the non-consensual sharing of sexual images. The world does not like what it sees.
“Even if a system like the one that’s proposed could work in Canada, which I don’t think it could, it would right away get transposed to any number of countries that don’t have Canada’s checks and balances and due process [and] rule of law,” said Nathalie Maréchal, of the Washington-based think tank Ranking Digital Rights. “And the line would be, ‘But Canada does it, so why can’t we?’”
The Liberals have promised that a bill to crack down on online hate, terrorist plotting and sexual exploitation will come in their first 100 days, and they signalled their thinking in proposals published at the end of July, before Prime Minister Justin Trudeau called the September election.
Those include:
❚ Putting a legal obligation on platforms like Facebook and Twitter to “take all reasonable measures … to make [harmful content] inaccessible to persons in Canada,” including by applying their own AI tools to sniff it out and by suppressing it within 24 hours of receiving a notice from an outsider;
❚ Establishing “robust flagging, notice, and appeal systems for both authors of content and those who flag content”;
❚ Notifying police and CSIS of proscribed content, with details of what justifies calling law enforcement to be determined later;
❚ Creating a new Digital Safety Commission, with a commissioner empowered to enforce the rules (including through inspections and raids) and a Digital Recourse Council for appeals of platforms’ takedown decisions; and
❚ Imposing fines for violations topping out at three per cent of a platform’s global annual revenue or $10 million, whichever is more.
The regime the proposals describe has been condemned by the Citizen Lab at the University of Toronto (“Rewrite the Proposal from the Ground Up,” reads the headline on the conclusion of its formal submission to the consultation), and the Canadian Internet Policy and Public Interest Clinic at the University of Ottawa (“the current proposal threatens fundamental freedoms and the survival of a free and open internet in Canada and beyond,” says the introduction to its submission).
But outside the country, it’s drawn condemnation not just from Maréchal’s group, which issues report-card-style rankings of big digital players on how well they respect free expression and privacy. The venerable Electronic Frontier Foundation attacked the plan in August. Daphne Keller, director of Stanford University’s program in platform regulation, published a top-five list of its flaws.
“Human rights groups like Human Rights Watch, Access Now, and Article 19 have been fighting requirements like these one at a time in countries like India, Turkey and Russia. Canada’s proposal combines them all together in one package,” Keller wrote.
The Global Network Initiative, which aims to bring governments and corporations together to protect free expression in the digital world (its members include Facebook, Google, Microsoft and Yahoo!, finance-industry companies such as BMO, and civil-society groups like Human Rights Watch), filed a submission saying it’s “concerned that some aspects of the proposed approach appear to be inconsistent with international human rights principles, regulatory best practice, and Canada’s leadership on internet freedom.”
“We, as a community of rights-respecting … advocates, including governments, are substantially weakened in our ability to push back not just in Vietnam or Russia, but also in Brazil, Turkey, if the rights-respecting governments — the ones that stand up and posture themselves as the defenders of human rights — are themselves putting in place laws that look quite similar to the approaches that some of these less rights-respecting countries are taking,” said the Global Network Initiative’s director of policy and strategy Jason Pielemeier, in an interview with The Logic from Washington.
It’s particularly galling, he said, because in 2022, Canada is chairing the Freedom Online Coalition, a group of 34 countries that promotes liberty of expression and democratic rights on the internet. Through a spokesperson, Heritage Minister Steven Guilbeault — who, despite the recent election, retains his office unless a new minister is named — declined a request for an interview on the barrage of criticism.
Twitter’s manager of public policy in Canada, Michele Austin, said in a statement relayed to The Logic by a spokesperson that Twitter wants the proposals gutted to the studs: “Our sincere hope is that the Government of Canada takes an entirely new approach to these issues after reviewing and analyzing the submissions.”
Twitter responded to the consultation, but isn’t sharing its submission publicly, she said.
Facebook is more circumspect. “Facebook supports the creation of a common set of rules to combat harmful content that would apply to all social media companies,” said Facebook Canada spokesperson Lisa Laventure. But, she said, the company did not submit a response to the government consultation.
Mindgeek, the Montreal-based, Luxembourg-headquartered operator of numerous pornography sites, did not reply to an email from The Logic.
Each of the international critiques of the Canadian plan is different, but they have common elements:
❚ The proposal says the final law will use definitions of harmful content that “borrow from the Criminal Code but are adapted to the regulatory context,” and in some cases will deliberately go beyond what’s already illegal;
❚ Platforms will decide what meets those definitions;
❚ They will have to make those decisions very fast;
❚ They’ll be ordered to use algorithms to detect problematic content without a requirement that those algorithms be made public, or that they can distinguish between, say, news coverage or satire and actual harmful material; and
❚ The requirement to report some types of flagged content to law enforcement turns private companies into agents of the state and could snare innocent people.
Maréchal pointed to the demand for algorithmic enforcement as a particular problem. U.S. Sen. Amy Klobuchar proposed a bill seeking to crack down on pandemic misinformation on social networks, and it had the same flaw, she said.
“It’s unworkable as a proposal,” Maréchal said. “How do you identify misinformation at scale and at speed without a huge, huge percentage of error? I think a lot of these proposals hinge on wishful thinking that AI would be better than — much better than — it actually is. And unfortunately, you can’t wish that kind of algorithmic prowess into being.”
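Maréchal’s point about error at scale can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not figures from any platform or classifier: even a detector that catches 99 per cent of harmful posts and wrongly flags only one per cent of innocent ones, applied to a feed where just one post in a thousand is actually harmful, ends up flagging roughly ten innocent posts for every harmful one.

```python
# Illustrative base-rate arithmetic for automated content flagging.
# Every number here is an assumption for the sake of the example,
# not a real platform or classifier statistic.

posts_per_day = 100_000_000   # assumed daily post volume for a large platform
harmful_rate = 0.001          # assume 1 in 1,000 posts is actually harmful
sensitivity = 0.99            # assume the classifier catches 99% of harmful posts
false_positive_rate = 0.01    # assume it also flags 1% of innocent posts

harmful = posts_per_day * harmful_rate
innocent = posts_per_day - harmful

true_positives = harmful * sensitivity            # harmful posts correctly flagged
false_positives = innocent * false_positive_rate  # innocent posts wrongly flagged

# Precision: of everything flagged, what share is actually harmful?
precision = true_positives / (true_positives + false_positives)

print(f"share of flagged posts that are actually harmful: {precision:.1%}")
print(f"innocent posts wrongly flagged per day: {false_positives:,.0f}")
```

Under these assumed numbers, about nine in ten flagged posts are false alarms, and nearly a million innocent posts a day get caught, which is the scale problem the critics describe.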
In practice, algorithmic enforcement mechanisms in any field have often over-targeted minority groups, many of the consultation submissions pointed out. And the steep fines for non-compliance will make platforms err on the side of caution.
“Companies basically have to be in the position of determining when something is not illegal, but nevertheless is harmful enough that you have to [restrict and report it]. And if you get that decision wrong, there will be potentially quite significant consequences for you,” Pielemeier said.
Hardly any platform really wants to be in the business of promoting harmful content, he said, and we shouldn’t try to regulate the whole internet to get at the few bad actors that do.
“I think the bigger challenge is, how do we encourage and facilitate the companies that are trying to do better at addressing this content, without pushing them so far in the direction of responsibility and liability that they effectively take over state functions in terms of detecting and determining when content becomes illegal and needs to be actioned?” he said. “Because that raises some very deep questions and concerns in terms of democratic accountability and responsibility.”
Maréchal said she believes the approach of targeting specific instances of harmful content is wrong-headed.
“One thing is to focus first of all on the business of a tech company, and to focus more on process than on results, and to shift the incentive structures that lead them to make the product design choices and the business decisions that they do,” she said.
Hyper-targeted advertising, combined with algorithmic mechanisms that hold people’s attention by serving them content with no regard for what that content is, has produced bad results, Maréchal said.
“If you reformed that and changed the incentives under which companies make decisions, you can improve the outcome without opening this door for autocratic regimes, or even just authoritarian-curious regimes, to make bad decisions,” she said.
Some moves would require action by platforms’ home countries — the United States, in the cases of Facebook and Twitter. Facebook’s corporate structure, with Mark Zuckerberg as CEO, chair of the board and key shareholder, doesn’t lend itself to accountability, Maréchal said, and many platform companies are almost as centralized.
Requirements for platforms to carry out human rights assessments of their plans before introducing new products or entering new markets would make them think ahead rather than trying things and seeing what happens, she said.
Pielemeier said governments’ failure to act earlier on online harms — maybe due to an understandable belief that they were better off leaving alone systems they didn’t really grasp — has left them without the capacity to do this sort of regulation well.
“Now that governments are feeling politically motivated to regulate these spaces, they don’t have necessarily a strong relationship in civil society or in companies, in regulators and authorities,” he said. “There’s more of this sense of antagonism that maybe there wouldn’t have been if they had taken a different approach.”
For more news about the innovation economy, visit www.thelogic.co