AQ: Australian Quarterly

UNFETTERED BIG TECH, LEFT TO ITS OWN DEVICES, PRESENTS REAL AND COMPLEX RISKS TO AUSTRALIAN SOCIETY


In Australia, these risks are becoming more visible too. No one watched the way Facebook so easily ‘turned off the news’ during a global pandemic and in peak bushfire season and felt comfortable about our reliance on its products. No one observed the large-scale Twitter bot activity during the 2019 Federal Election (over double the rate of the US presidential election), or Facebook's failure to remove the bogus death tax claims, and was left with confidence that these companies were ensuring the integrity of our democratic processes. It has become increasingly obvious that unfettered Big Tech, left to its own devices, presents real and complex risks to Australian society.

While many of these risks may not be new, the scale and speed of the threat makes them starkly different. In the early years of the millennium we could not have conceived of the extractive potential of surveillance capitalism, or of an age of big data. And yet, Australia seems committed to addressing these risks through the ‘lightest touch’ regulatory framework possible, developed over 20 years ago.

Since the '90s, Australia has pioneered a unique, flexible multi-path approach to industry regulation, covering four broad categories:

• Self-regulation – an arrangement where industries develop and administer their own arrangements

• Quasi-regulation – a broad set of rules and arrangements which government may actively influence but does not explicitly require compliance with

• Co-regulation – an Australian-born system where industry develops and administers its own arrangements, but government provides legislative backing to enable the arrangements to be enforced

• Black letter law – explicit regulation by primary and subordinate legislation

This multi-path approach leaves open the possibility of a range of regulatory responses to any industry, some ‘light’ and some ‘hard touch’. In 2010, the Australian Government released a Best Practice Regulation Handbook1 that outlined criteria and considerations to help policymakers assess which level of regulation was most appropriate for each sector and issue.

Self-regulation was considered a feasible option only if:

• “There is no strong public interest concern, in particular no major public health and safety concerns

• The problem is a low-risk event, of low impact or significance, and

• The problem (could) be fixed by the market itself”

By contrast, the Handbook noted that explicit government regulation should be considered where:

• “The problem is high-risk, of high impact or significance; for example, a major public health and safety issue

• The community requires the certainty provided by legal sanctions

• Universal application is required ...or

• There is a systemic compliance problem with a history of intractable disputes and repeated or flagrant breaches of fair trading principles, and no possibility of effective sanctions being applied”

The threats that Big Tech poses to Australian society, and the lacklustre response from major players in the industry, now meet and exceed the long-held requirements for explicit regulation and Black Letter Law. It's time to stop affording them the benefit of the ‘lighter touch’ regulations that other sectors have worked hard to deserve. Let us examine some of these criteria in turn.

Big Tech poses high risks to public health and safety

If the pandemic has taught us one thing, it's that Big Tech has the capacity to both create and amplify public health and safety risks. The central role that Big Tech has played in the proliferation of COVID-19 disinformation is no secret. Taking Facebook as an example, we have seen:

• The rapid increase in membership of, and engagement with, groups peddling ‘anti-vaxx’ and vaccine-hesitant content in Australia.2

• Newsfeeds and algorithms being used to sow vaccine doubt and disinformation without Facebook taking adequate action. Both Facebook's human moderation and its much-lauded use of AI to block COVID disinformation have proved not up to the task. For example, in February 2021 #Scamdemic was banned, but only after Forbes raised concerns. Five months later, in July, the hashtag #Vaccineskill was blocked after CNN raised concerns.

• The rise of ‘anti-vaxx’ influencers. In April 2021 the Bureau of Investigative Journalism found more than 100 Instagram accounts promoting anti-vaxx content to more than 6 million users.

• A persistent refusal to embrace policymakers' recommendations that Facebook provide data to independent public health researchers to understand the prevalence and impact of COVID and other mis/disinformation. Facebook's Vice-president responsible for transparency, Brian Boland, quit his post this year, saying "most senior leadership in the company does not want to invest in understanding the impact of its core products...and it doesn't want to make the data available for others to do the hard work and hold them accountable."3

Facebook has repeatedly made assurances that vaccine disinformation is not permitted on its products, and has taken some action to remove content that violates its rules. And yet, despite these attempts at self-regulation, reporters and researchers continue to find clear evidence of COVID disinformation, and a lack of transparency around disinformation, on its platforms.

The painful and deadly consequences of corporate decisions, made ‘behind closed doors’, are playing out in our daily lives at this moment. Put simply, the spread of bad information on social media about COVID-19 and vaccinations leads to higher rates of infection and death, and in Australia, extended lockdowns.

The government's botched vaccine rollout and confusing public health messaging have been significant factors; however, as vaccine availability increases, our public health and safety rely on stemming the tide of false information too.

Big Tech has systemic compliance problems

Big Tech is not known for its eager and robust compliance with regulatory initiatives. The sector has a long track record of failing to implement regulations and directives adequately or quickly, even where it is required to.

The catalogue of court cases against tech companies demonstrates this:

• Google has been fined for multiple breaches: a €500m fine in France for acting in bad faith around EU copyright directives, €7m in Sweden for failing to meet GDPR requirements, and €220m, again in France, for anti-competitive practices in its advertising systems (among many other fines).

• Facebook has faced many high-profile fines, including a $5m USD payment to settle civil rights lawsuits claiming the company's advertising system breached America's Fair Housing Act by excluding people from seeing housing ads based on age, gender and race. And of course, who could forget the $5b USD penalty from the FTC for breaching consumer privacy regulations.

• TikTok has also had its fair share of fines, from a €750k Dutch fine over GDPR compliance to a $5.7m USD fine for illegally collecting children's data.

Beyond strict compliance with the law, at times the sector appears to actively resist ‘doing the right thing’. For example, back in 2016, an internal Facebook presentation – later uncovered by the Wall Street Journal – documented that the company knew its platform was hosting a large number of extremist groups and promoting them to its users: "64% of all extremist group joins are due to our recommendation tools," the presentation said.4

It was only in the wake of the insurrection in January of this year that Mark Zuckerberg announced that the company would no longer recommend civic and political groups to its users.

It is difficult to see how anything less than Black Letter legal arrangements would suffice for this industry.

The community wants and expects the certainty provided by legal sanctions

There are now legitimate community expectations of explicit regulation of Big Tech in Australia. Earlier this year, a Lowy Institute poll found that 90% of Australians think that the influence social media companies have is an important or critical threat to the vital interests of Australia. And indeed, a poll by the Australian Financial Review in late 2020 found that 77% of Australians felt that Big Tech should face stronger Government regulations. The scale and depth of the public's concerns warrant the strongest possible regulatory response.

Many of our cultural bedrocks, like the AFL, are also calling for stronger regulation after non-stop online abuse and harassment of AFL players. The most recent among frequent incidents of racist trolling of Indigenous AFL players was directed at Port Adelaide player Aliir Aliir, only days after former Crows captain Taylor Walker was suspended for six games for making a racist comment about SANFL player Robbie Young.

Simultaneously, Eddie Betts, a Carlton veteran, has announced his retirement after being targeted with racist abuse on social media during recent seasons. In his parting words, Betts implored Australians to help tackle racism, in a powerful speech responding to Taylor Walker's remark.

Whilst the AFL could be doing more to address structural racism within the institution, it notes that a significant barrier to progress is the lack of legal sanctions for social media platforms. Current reactive measures, such as content reporting and take-downs, including fines for failing to remove harmful, abusive or bullying content within 24 hours, are proving inadequate. So are the new Instagram features aimed at protecting high-profile individuals like athletes, such as the filtering out of abusive direct message requests.

The AFL's general manager of inclusion and social policy, Torres Strait Islander woman Tanya Hosch, says attacks such as the one against Aliir are "pervasive" in society, and that in the current regulatory context, "we don't have any power to make (the platforms) do anything".5

Such a widespread issue cannot be addressed with piecemeal interventions led by social media platforms, but rather requires Black Letter Law, with legal sanctions tied to a duty of care for users.

The market won't fix it

The same Government handbook that outlined the criteria for determining that self- and co-regulation probably wouldn't be enough for Big Tech plainly states that "self-regulation is not likely to be effective if industry has an incentive not to comply with the rules or codes of conduct." A business model built on big data will never be incentivised towards compliance.

Big Tech's business model is sometimes described as being based on ad revenue; however, it is more accurate to describe these companies as monopolies built upon the ownership of, and profit from, big data. Through the algorithmic collection, linking and analysis of the data points of billions of users, they are able not just to predict consumer behaviour and thought, but to actively shape it. This is hugely valuable. In fact, it is the source of Big Tech's economic and political power.

Data is never erased as a precaution. It simply accumulates, in globalised, interconnected systems that never forget. If this data is a central resource, and Big Tech's voracious appetite for data can only be sated by a disregard for privacy, then privacy scandals are inevitable. We need to see Big Tech's extractive data practices not as ‘accidental oversteps’, but rather as necessary consequences of its business model. The system is functioning precisely as it was designed to.

Big data, combined with content curation algorithms, sorts people into different digital worlds. Worlds of COVID and other disinformation, hate speech, and extremist ideology. The most fringe and sensational digital worlds are the most profitable for Big Tech. And this is why there is a clear tension between what is best for the company and what is best for the public.

The market cannot be relied upon to fix this. The industry push towards self- and co-regulation must be understood in this context. These softer mechanisms are favoured by corporations because they allow them to avoid regulation that structurally changes their business model (and impacts their bottom line).

Why light touch regulation won't work for data protection

Despite the demonstrable need for explicit Black Letter Law to rein in Big Tech, when it comes to some of the highest-risk and most publicly important issues, Australia is still defaulting to lighter-touch co-regulatory models.

For example, protections around data privacy on social media – including data protections for children and young people, some of the most vulnerable social media users – are still being developed under co-regulatory systems.

Back in 2019, the Attorney-General and the Minister for Communications and Arts announced amendments to the Privacy Act that would result in an Online Privacy code for social media and online platforms which trade in personal information.6 We are expecting an exposure draft of the Bill, paving the way for a code, any week now. Every indication suggests that it too will build on Australia's penchant for co-regulatory approaches.

Under the Privacy Act, such a code could be made in one of three ways: it could be developed spontaneously by industry; the Information Commissioner could request that industry develop a code; or, if industry fails to develop a code after a request is made (or does an inadequate job of it), the Commissioner could draft a code directly. Whichever way, industry is heavily involved in drafting this code in the first instance.

There are reasons to be cautious about giving industry the pen. This is a sector that has been repeatedly fined for violating existing children's data protection and privacy laws. In 2019, the FTC settled cases with both TikTok and YouTube for using children's data without the necessary parental consent, for $5.7m USD and $170m USD respectively.

Seemingly unable to improve, both TikTok and YouTube are still facing ongoing legal challenges around the collection and use of children's data, this time in the UK. TikTok is currently facing a £1b ‘class action’ suit for collecting and using children's location and biometric data without consent, while YouTube faces a £2b ‘class action’ over the same complaint.

Beyond fines, social media platforms often don't seem to be acting in children's best interests when it comes to their data. Instagram, Twitch, Twitter, Reddit, TikTok and Spotify were all found deploying deceptive dark patterns – design tricks used to nudge users into taking action or giving over permissions unintentionally – on young users in their data consent practices in Australia.7

Without robust, firm and clear requirements from the Government about non-negotiable minimums for a code, this could go horribly wrong.

Other jurisdictions, including the UK and Ireland, have recently drafted their own codes around the collection and use of children's data. In both jurisdictions, regulators drafted the codes directly. This led to robust and effective regulations, which among other things have seen platforms default children's accounts to private and stop microtargeting kids with adverts. These are common-sense moves that will protect British and Irish children; Australian children deserve no less.

Big Tech has proven it should not write its own rules

Australia needs to pivot to a stronger approach to regulating Big Tech. The industry presents a high risk, is of extreme public interest, and demonstrates systemic compliance issues. This alone should be indicative of a need for a tougher stance on regulation.

Big Tech companies did not set out to harm public health or weaken democracy. But when they realised that their products did these things, they did not stop. Every other large, complex industry with major impact on public welfare in Australia – pharmaceuticals, automotive, banking – is subject to serious transparency requirements and consumer/safety regulations. Big Tech should not be exempt. Nor should it benefit from a ‘lighter touch’ regulatory regime designed for less complex issues in industries with strong track records of diligence.

Australia would not be alone in adopting a tougher stance. Many other countries are adopting primary and secondary legislation to rein in Big Tech's harms. Big Tech is a global industry, and it's going to be this sort of rigorous regulation from multiple countries that catalyses the much-needed change to its practices. We can't leave this to Europe or America to do alone.

Australia needs to weigh in and flex its muscle when it comes to Big Tech, through Black Letter Law wherever possible.

