The Guardian (USA)

Who will protect us from digital deception? Not tech companies

- Ann M Ravel and Samuel Woolley

It’s too late to save the 2018 US midterms from digital deception campaigns – but it’s not too late for democracy. This year’s elections saw an unprecedented rise in political manipulation over social media. In October, the Department of Justice charged a Russian national with running a US-focused political disinformation campaign that had a budget of $10m from January to June. Revelations about Iranian disinformation efforts and Saudi Arabian state-sponsored digital propaganda demonstrate a complex problem with ill-defined borders on- and offline.

At home, political actors continue to abuse campaign finance loopholes and digital technologies to sway and suppress voters, further polarize political debate, and decrease trust in democratic institutions. Indeed, our research shows that disinformation is often domestic in origin as well as state sponsored. Such digital deception has resulted in US-based social groups, including Jewish Americans, experiencing waves of digital harassment that have contributed to offline violence.

Our democracy is under attack, but there are immediate actions we as a society can take to combat deception online. First and foremost is increasing transparency. The public deserves to know who is behind the political advertisements they receive. People should also be privy to how companies are using their data. Such transparency cannot just come in the form of reports rife with inaccessible jargon from tech giants; transparency efforts must be accompanied by serious action. Technology companies must adopt transparency – as well as human rights, democracy and ethics – as governing principles.

Unregulated digital media allow misleading information to spread virally from anonymous sources, preventing accountability. Tech companies’ voluntary efforts are not sufficient to protect political integrity. In fact, recent reporting suggests that their business models disincentivize getting rid of automated profiles and polarizing political content. We are in desperate need of regulation to shine light on paid political ads, curtail microtargeting, and unmask bots and fake accounts.

However, despite political theatrics – including several rounds of congressional hearings with big tech executives – there are still no adequate solutions to the problem of digital deception. For their part, Facebook, Twitter and Google have largely reverted to “technological solutionism”. Their proposed fixes focus on novel software (bolstered by large-scale human content moderation) and tweaks to algorithms. They promise greater efficiency in stamping out digital deception via artificial intelligence. But focusing on algorithms that detect and delete disinformation, seek to prevent astroturfing, or “redirect” people to factual content fails to address the fact that this problem is more than technological – it is a social issue. Private companies created the problem of scale that fuels viral disinformation and are partially responsible for the intense polarization underlying our political situation, but they seem ill-equipped to deal with any of it.

Facebook, Twitter and Google have voluntarily undertaken efforts to bring transparency to political advertising and endorsed the Honest Ads Act, but such initiatives don’t go far enough. None of the social media firms’ political advertising databases show the specific audiences targeted by ad buyers – information that is crucial to defang nefarious forms of political microtargeting. Civil society and the public need consistent information on these and other forms of digital political communication from all social media firms.

The technology companies have set varied, unsystematic standards. Their efforts fail to create multiplatform solutions to a problem that clearly transcends any one platform. Facebook’s disclaimer requirements for buyers of political ads are easily gamed. And its well-publicized collaboration with independent researchers has one glaring omission: it hides data from before 2017, meaning that we can’t understand what actually transpired during the 2016 elections.

We cannot rely on companies to provide adequate transparency without government involvement. There is nothing to prevent them from deactivating transparency measures once public attention shifts.

Indeed, companies have financial incentives to allow digital deception to continue. Researchers estimate that up to 15% of accounts on Twitter are bots. While bots can serve many useful functions, Twitter doesn’t distinguish between benign accounts and those that spread conspiracy theories and sow political discord. More accounts on Twitter means more paid ad impressions and a higher financial valuation. Beyond this, political advertising has become a large source of income for social media firms: Borrell Associates estimates that $1.8bn went into digital advertising by political campaigns in the 2018 elections, little of it disclosed due to gaps in campaign finance law. Regulation could cause platforms to lose money. Their business models are at odds with the public interest.

Government must actually govern technology companies and work jointly with them and civil society to address the consequences of their technologies. Unfortunately, thus far, Congress has passed the “hot potato” back to tech firms, leaving them to fix the problems they created. Straightforward legislation based on existing legal principles, such as the Honest Ads Act or the Bot Disclosure and Accountability Act, has effectively stalled.

Government must apply the constitutional principle of transparency in four areas so that the public has the information it needs:

1. Laws and regulations must expose who is behind sponsored digital political communications. The Federal Election Commission must adapt the standards that currently apply to television and radio advertising for the internet.

2. To combat the effects of microtargeting, Congress should pass the Honest Ads Act and legislation that protects privacy and illuminates company usage of user data.

3. We must develop legal solutions to fake and automated accounts so that bots and trolls can no longer operate from the shadows. For example, platforms could be required to label all automated accounts, as required by the Bot Disclosure and Accountability Act.

4. Technology companies must be required to share data with researchers, submit their algorithms to evaluation, and be upfront about their efforts to police their platforms.

Tech companies have an important role in reining in digital deception, but government-mandated transparency and accountability are the bedrock of an operational democracy. If we don’t shore up this foundation fast, we put our democracy at risk.

Ann M Ravel is the digital deception project director at MapLight and previously served as chair of the Federal Election Commission.

Samuel Woolley is director of the Digital Intelligence Lab at the Institute for the Future.

Congressional hearings with Facebook, Twitter and Google executives have failed to solve digital deception. Photograph: JGI/Tom Grill/Getty Images/Blend Images
