UNFETTERED BIG TECH, LEFT TO ITS OWN DEVICES, PRESENTS REAL AND COMPLEX RISKS TO AUSTRALIAN SOCIETY
In Australia, these risks are becoming more visible too. No one watched the way Facebook so easily ‘turned off the news' during a global pandemic and in peak bushfire season and felt comfortable about our reliance on its products. No one observed the large-scale Twitter bot activity during the 2019 Federal Election (over double the rate of the US presidential election), or Facebook's failure to remove the bogus death tax claims, and was left with confidence that these companies were ensuring the integrity of our democratic processes. It has become increasingly obvious that unfettered Big Tech, left to its own devices, presents real and complex risks to Australian society.
While many of these risks may not be new, the scale and speed of the threat makes them starkly different. In the early years of the millennium we could not have conceived of the extractive potential of surveillance capitalism, or an age of big data. And yet, Australia seems committed to addressing these risks through the ‘lightest touch' regulatory framework possible, developed over 20 years ago.
Since the ‘90s Australia has pioneered a unique, flexible multi-path approach to industry regulation, covering four broad categories:
• Self-regulation -- an arrangement where industries develop and administer their own arrangements
• Quasi-regulation -- a broad set of rules and arrangements which government may actively influence but do not explicitly require compliance with
• Co-regulation -- an Australian-born system where industry develops and administers its own arrangements, but government provides legislative backing to enable the arrangements to be enforced
• Black letter law -- explicit regulation by primary and subordinate legislation
This multi-path approach leaves open the possibility for a range of regulatory responses to any industry, some ‘light' and some ‘hard touch'. In 2010, the Australian Government released a Best Practice Regulation Handbook1 that outlined criteria and considerations to help policymakers assess which level of regulation was most appropriate for each sector and issue.
Self-regulation was considered a feasible option only if:
• “There is no strong public interest concern, in particular no major public health and safety concerns
• The problem is a low-risk event, of low impact or significance, and
• The problem (could) be fixed by the market itself”
By contrast, the Handbook noted that explicit government regulation should be considered where:
• “The problem is high-risk, of high impact or significance; for example, a major public health and safety issue
• The community requires the certainty provided by legal sanctions
• Universal application is required ...or
• There is a systemic compliance problem with a history of intractable disputes and repeated or flagrant breaches of fair trading principles, and no possibility of effective sanctions being applied”
The threats that Big Tech poses to Australian society, and the lacklustre response from major players in the industry, now meet and exceed the long-held requirements for explicit regulation and Black Letter Law. It's time to stop affording them the benefit of ‘lighter touch' regulations that other sectors have worked hard to deserve. Let us examine some of these criteria in turn.
Big tech poses high risks to public health and safety
If the pandemic has taught us one thing, it's that Big Tech has the capacity to both create and amplify public health and safety risks. The central role that Big Tech has played in the proliferation of COVID-19 disinformation is no secret. Taking Facebook as an example, we have seen:
• The rapid increase in membership to, and engagement with, groups peddling ‘anti-vaxx' and vaccine-hesitant content in Australia.2
• Newsfeeds and algorithms being used to sow vaccine doubt and disinformation without Facebook taking adequate action. Both Facebook's human moderation and its much-lauded use of AI to block COVID disinformation have proved not up to the task. For example, in February 2021, #Scamdemic was banned, but only after Forbes raised concerns. Five months later, in July, the hashtag #Vaccineskill was blocked after CNN raised concerns.
• The rise of ‘anti-vaxx' influencers. In April 2021 the Bureau of Investigative Journalism found more than 100 Instagram accounts promoting anti-vaxx content to more than 6 million users.
• A persistent refusal to embrace policymakers' recommendations that Facebook provide data to independent public health researchers to understand the prevalence and impact of COVID and other mis/disinformation. Facebook's Vice-President responsible for transparency, Brian Boland, quit his post this year, saying “most senior leadership in the company does not want to invest in understanding the impact of its core products...and it doesn't want to make the data available for others to do the hard work and hold them accountable.”3
Facebook has repeatedly made assurances that vaccine disinformation is not permitted on its products, and has taken some action to remove content that violates its rules. And yet despite these attempts at self-regulation, reporters and researchers continue to find clear evidence of COVID disinformation, and a lack of transparency around disinformation, on its platforms.
The painful and deadly consequences of corporate decisions, made ‘behind closed doors', are playing out in our daily lives at this moment. Put simply, the spread of bad information on social media about COVID-19 and vaccinations leads to higher rates of infection and death, and in Australia, extended lockdowns.
The government's botched vaccine rollout and confusing public health messaging have been significant factors; however, as vaccine availability increases, our public health and safety rely on stemming the tide of false information too.
Big tech has systemic compliance problems
Big Tech is not known for its eager and robust compliance with regulatory initiatives. The industry has a long track record of failing to implement regulations and directives adequately or promptly where it is required to.
The catalogue of court cases against tech companies demonstrates this:
• Google has been fined for multiple breaches: a €500m fine in France for acting in bad faith around EU copyright directives, €7m in Sweden for failing to meet GDPR requirements, and €220m, again in France, for anti-competitive practices in its advertising systems (among many other fines).
• Facebook has faced many high-profile fines, including a $5m USD settlement of civil rights lawsuits claiming the company's advertising system breached America's Fair Housing Act by excluding people from seeing housing ads based on age, gender and race. And of course, who could forget the $5b USD penalty from the FTC for breaching consumer privacy regulations.
• TikTok has also had its fair share of fines, from a €750k Dutch fine over GDPR compliance to a $5.7m USD fine for illegally collecting children's data.
Beyond strict compliance with the law, at times the sector appears to actively resist ‘doing the right thing'. For example, back in 2016, the Wall Street Journal found an internal Facebook presentation documenting that the company knew its platform was hosting a large number of extremist groups and promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said.4
It was only in the wake of the insurrection in January of this year that Mark Zuckerberg announced that the company would no longer recommend civic and political groups to its users.
It's difficult to see how this is an industry where anything less than Black Letter legal arrangements would suffice.
The community wants and expects the certainty provided by legal sanctions
There are now legitimate community expectations of explicit regulation of Big Tech in Australia. Earlier this year, a Lowy Institute poll found that 90% of Australians think that the influence social media companies have is an important or critical threat to the vital interests of Australia. And indeed, a poll by the Australian Financial Review in late 2020 found that 77% of Australians felt that Big Tech should face stronger Government regulations. The scale and depth of the public's concerns warrants the strongest possible regulatory response.
Many of our cultural bedrocks, like the AFL, are also calling for stronger regulation after non-stop online abuse and harassment of AFL players. The most recent of the frequent incidents of racist trolling of Indigenous AFL players was directed at Port Adelaide player Aliir Aliir, only days after former Crows captain Taylor Walker was suspended for six games for making a racist comment about SANFL player Robbie Young.
Simultaneously, Eddie Betts, a Carlton veteran, has announced his retirement after being targeted with racist abuse on social media during recent seasons. In his parting words, Betts implored Australians to help tackle racism, in a powerful speech responding to Taylor Walker's remark.
Whilst the AFL could be doing more to address structural racism within the institution, they note that a significant barrier to progress is the lack of legal sanctions for social media platforms. Current reactive measures, such as content reporting and take-downs, including fines for failing to remove harmful, abusive or bullying content within 24 hours, are proving inadequate. So are the new features on Instagram aimed at protecting high-profile individuals like athletes, such as the filtering out of abusive direct message requests.
The AFL's general manager of inclusion and social policy, Torres Strait Islander woman Tanya Hosch, says attacks such as the one against Aliir are “pervasive” in society, and that in the current regulatory context, “we don't have any power to make (the platforms) do anything”.5
Such a widespread issue cannot be addressed with piecemeal interventions led by social media platforms, but rather requires Black Letter Law, with legal sanctions tied to duty of care for users.
The market won't fix it
The same Government handbook that outlined the criteria for determining that self- and co-regulation probably wouldn't be enough for Big Tech, plainly states that “self-regulation is not likely to be effective if industry has an incentive not to comply with the rules or codes of conduct.” A business model built on big data will never be incentivised towards compliance.
Big Tech's business model is sometimes described as based on ad revenue; however, it is more accurate to describe these companies as monopolies built upon the ownership of, and profit from, big data. Through the algorithmic collection, linking and analysis of the data points of billions of users, they are not just able to predict consumer behaviour and thought, but to actively shape it. This is hugely valuable. In fact, it is the source of Big Tech's economic and political power.
Data is never erased as a precaution. It simply accumulates, in globalised interconnected systems that never forget. If this data is a central resource, and Big Tech's voracious appetite for data can only be sated by disregard for privacy, then privacy scandals are inevitable. We need to see Big Tech's extractive data practices not as ‘accidental oversteps', but rather necessary consequences of its business model. The system is functioning precisely how it has been designed to.
Big data, combined with content curation algorithms, sorts people into different digital worlds. Worlds of COVID
and other disinformation, hate speech, and extremist ideology. The most fringe and sensational digital worlds are the most profitable for Big Tech. And this is why there is a clear tension between what is best for the company and what is best for the public.
The market cannot be relied upon to fix this. The industry push towards self- and co-regulation must be understood in this context. The softer mechanisms are favoured by corporations because they allow them to avoid regulation that structurally changes their business model (and impacts their bottom line).
Why light touch regulation won't work for data protection
Despite the demonstrable need for explicit, Black Letter Law reining in Big Tech, when it comes to some of the highest risk, and publicly important issues, Australia is still defaulting to lighter touch co-regulatory models.
For example, protections around data privacy in social media - including data protections for children and young people, some of the most vulnerable social media users - are still being developed under co-regulatory systems.
Back in 2019, the Attorney General and Minister for Communications and Arts announced amendments to the Privacy Act that would result in an Online Privacy code for social media and online platforms which trade in personal information.6 We are expecting a draft exposure Bill, paving the way for a code, any week now. Every indication suggests that it too will build on Australia's penchant for co-regulatory approaches.
Under the Privacy Act such a code could be made in one of three ways: it could be developed spontaneously by industry; the Information Commissioner could request that industry develop a code; or, if industry fails to develop a code after a request is made (or does an inadequate job of it), the Commissioner could draft a code directly. In any case, industry is heavily involved in drafting this code in the first instance.
There are reasons to be cautious about giving industry the pen. This is a sector that has been repeatedly fined for violating existing children's data protection and privacy laws. In 2019, the FTC settled cases with both TikTok and YouTube for using children's data without the necessary parental consent, for $5.7m USD and $170m USD respectively.
Seemingly unable to improve, both TikTok and YouTube are still facing ongoing legal challenges around the collection and use of children's data, this time in the UK. TikTok is currently facing a £1b ‘class action' suit for collecting and using children's location and biometric data without consent, while YouTube faces a £2b ‘class action' over the same complaint.
Beyond fines, social media platforms often don't seem to be acting in children's best interests when it comes to their data. Instagram, Twitch, Twitter, Reddit, TikTok and Spotify were all found to be deploying deceptive dark patterns – design tricks that nudge users into taking actions or granting permissions unintentionally – on young users in their data consent practices in Australia.7
Without robust, firm and clear requirements from the Government about non-negotiable minimums for a code, this could go horribly wrong.
Other jurisdictions, including the UK and Ireland, have recently drafted their own codes around the collection and use of children's data. In both jurisdictions, regulators drafted the codes directly. This has led to robust and effective regulations which, among other things, have seen platforms default children's accounts to private and stop microtargeting kids with adverts. These are common-sense moves that will protect British and Irish children; Australian children deserve no less.
Big Tech has proven it should not write its own rules
Australia needs to pivot to adopt a stronger approach to regulating Big Tech. The industry presents a high risk, is of extreme public interest, and demonstrates systemic compliance issues. This alone should be indicative of a need for a tougher stance on regulations.
Big Tech companies did not set out to harm public health or weaken democracy. But when they realised that their products did these things, they did not stop. Every other large, complex industry with major impact on public welfare in Australia – pharmaceuticals, automotive, banking – is subject to serious transparency requirements and consumer/safety regulations. Big Tech should not be exempt. Nor should it benefit from a ‘lighter touch' regulatory regime designed for less complex issues in industries with strong track records of diligence.
Australia would not be alone in adopting a tougher stance. Many other countries are adopting primary and secondary legislation to rein in these harms. Big Tech is a global industry, and it's going to be this sort of rigorous regulation from multiple countries that catalyses the much-needed change to its practices. We can't leave this to Europe or America to do alone.
Australia needs to weigh in and flex its muscle when it comes to Big Tech, through Black Letter Law wherever possible.