DISINFORMATION
The issue: In recent years, the world has rudely awakened to the proliferation of hate speech and “fake news” on social media — especially during election cycles, when online content can be used to disrupt democratic processes.
Regulators and social media companies have come under pressure to stem the flow of online hate speech and malicious “disinformation.” But these are complicated problems with no easy fixes: the sheer volume of harmful content online is overwhelming, and any attempt to regulate speech inevitably raises free-speech concerns and draws fierce opposition.
And while some countries have legal definitions for hate speech, defining “fake news” is a tricky proposition; it can also be a dangerous one, especially in the hands of authoritarian regimes that want to censor information or crack down on the free press.
Regulators have struggled to find workable solutions. “No one’s figured it out,” says Heidi Tworek, an assistant professor of history at the University of British Columbia who studies media, democracy and the digital economy. “We continue to have to really evaluate many of these different schemes that are being proposed. These are such large and complex questions.”
What others are doing: Several European countries are now moving toward regulating social media platforms, deploying strategies that have drawn both criticism and praise. In France, the government passed a law in November that empowers judges to order the removal of “fake news” during elections; violators face a penalty of one year in prison or a fine of 75,000 euros.
In the United Kingdom, a new parliamentary report — released last week following an 18-month investigation — is also calling for platforms like Facebook to be brought under regulatory control. The report proposes several new regulations, including a mandatory code of ethics and an independent regulator who can bring legal proceedings against social media companies.
Germany has taken a particularly bold approach to regulating hate speech on social media. In January 2018, the country’s Network Enforcement Act — known as the NetzDG, and sometimes dubbed the “Facebook law” — came into full effect, forcing tech companies to remove hate speech within 24 hours of illegal content being reported. (When it’s unclear whether content is actually illegal under German law, tech companies have seven days to consult and decide.)
The penalty for breaking this law? Up to 50 million euros in fines. “This is probably the furthest anyone has gone in trying to get large social media companies to adapt their policies to local laws,” Tworek says.
What Canada is doing: “Canadians, and the Canadian government, are alive and alert to the issues,” says Dwayne Winseck, a journalism professor at Carleton University and director of the Canadian Media Concentration Research Project.
He points to Canada’s recently passed electoral reform bill, C-76, as a positive step. Online platforms like Facebook and Google must now create a registry of digital advertisements placed by political parties or third parties during elections and ensure the ads remain visible for two years. The law also bans the use of foreign money by “third party” advocacy groups during campaign periods — meaning social media companies can’t knowingly accept advertisements paid for with foreign funds.
Another provision prohibits making false statements about a candidate to influence an election. It applies only narrowly, however, to certain types of statements — for example, about whether a candidate has broken the law, or about their place of birth.
In January, the federal government unveiled plans for safeguarding the upcoming election, including a $7 million initiative to improve the public’s ability to detect “online deceptive practices” and a team of five bureaucrats who will alert the public whenever they find evidence of election interference.
But Canada could be doing more to innovate on social media regulation generally, says journalist Chris Tenove, a PhD candidate at the University of British Columbia who studies global governance and digital politics.
He would like to see Canada follow the lead of jurisdictions like the European Union, which worked with social media companies to develop a code of conduct. As a result, platforms have voluntarily committed to addressing hate speech “quickly and efficiently,” and early reports have shown good results, he says.
One promising development is Canada’s involvement in an international effort called the Grand Committee on Disinformation and Fake News. Comprising parliamentarians from nine countries, the committee is scheduled to meet in Ottawa this May and has called on social media executives — including Facebook’s Mark Zuckerberg and Google CEO Sundar Pichai — to appear and explain what they’re doing to stop the spread of disinformation.