Who will protect us from digital deception? Not tech companies
It’s too late to save the 2018 US midterms from digital deception campaigns – but it’s not too late for democracy. This year’s elections saw an unprecedented rise in political manipulation over social media. In October, the Department of Justice charged a Russian national with running a US-focused political disinformation campaign that had a budget of $10m from January to June. Revelations about Iranian disinformation efforts and Saudi Arabian state-sponsored digital propaganda demonstrate a complex problem with ill-defined borders on- and offline.
At home, political actors continue to abuse campaign finance loopholes and digital technologies to sway and suppress voters, further polarize political debate, and decrease trust in democratic institutions. Indeed, our research shows that disinformation is often domestic in origin as well as state sponsored. Such digital deception has resulted in US-based social groups, including Jewish Americans, experiencing waves of digital harassment that have contributed to offline violence.
Our democracy is under attack, but there are immediate actions we as a society can take to combat deception online. First and foremost is increasing transparency. The public deserves to know who is behind the political advertisements they receive. People should also be privy to how companies are using their data. Such transparency cannot just come in the form of reports rife with inaccessible jargon from tech giants; transparency efforts must be accompanied by serious action. Technology companies must adopt transparency – as well as human rights, democracy and ethics – as governing principles.
Unregulated digital media allow misleading information to spread virally from anonymous sources, preventing accountability. Tech companies’ voluntary efforts are not sufficient to protect political integrity. In fact, recent reporting suggests that their business models disincentivize getting rid of automated profiles and polarizing political content. We are in desperate need of regulation to shine light on paid political ads, curtail microtargeting, and unmask bots and fake accounts.
However, despite political theatrics – including several rounds of congressional hearings with big tech executives – there are still no adequate solutions to the problem of digital deception. For their part, Facebook, Twitter and Google have largely reverted to “technological solutionism”. Their proposed fixes focus on novel software (bolstered by large-scale human content moderation) and tweaks to algorithms. They promise greater efficiency in stamping out digital deception via artificial intelligence. But focusing on algorithms that detect and delete disinformation, seek to prevent astroturfing, or “redirect” people to factual content fails to address the fact that this problem is more than technological – it is a social issue. Private companies created the problem of scale that fuels viral disinformation and are partially responsible for the intense polarization underlying our political situation, but they seem ill-equipped to deal with any of it.
Facebook, Twitter and Google have voluntarily undertaken efforts to bring transparency to political advertising and endorsed the Honest Ads Act, but such initiatives don’t go far enough. None of the social media firms’ political advertising databases show the specific audiences targeted by ad buyers – information that is crucial to defang nefarious forms of political microtargeting. Civil society and the public need consistent information on these and other forms of digital political communication from all social media firms.
The technology companies have set varied, unsystematic standards. Their efforts fail to create multiplatform solutions to a problem that clearly transcends any one platform. Facebook’s disclaimer requirements for buyers of political ads are easily gamed. And its well-publicized collaboration with independent researchers has one glaring omission: it hides data from before 2017, meaning that we can’t understand what actually transpired during the 2016 elections.
We cannot rely on companies to provide adequate transparency without government involvement. There is nothing to prevent them from deactivating transparency measures once public attention shifts.
Indeed, companies have financial incentives to allow digital deception to continue. Researchers estimate that up to 15% of accounts on Twitter are bots. While bots can serve many useful functions, Twitter doesn’t distinguish between benign accounts and those that spread conspiracy theories and sow political discord. More accounts on Twitter means more paid ad impressions and a higher financial valuation. Beyond this, political advertising has become a large source of income for social media firms: Borrell Associates estimates that $1.8bn went into digital advertising by political campaigns in the 2018 elections, little of it disclosed due to gaps in campaign finance law. Regulation could cause platforms to lose money. Their business models are at odds with the public interest.
Government must actually govern technology companies and work jointly with them and civil society to address the consequences of their technologies. Unfortunately, thus far, Congress has passed the “hot potato” back to tech firms, leaving them to fix the problems they created. Straightforward legislation based on existing legal principles, such as the Honest Ads Act or the Bot Disclosure and Accountability Act, has effectively stalled.
Government must apply the constitutional principle of transparency in four areas so that the public has the information it needs:
1. Laws and regulations must expose who is behind sponsored digital political communications. The Federal Election Commission must adapt for the internet the standards that currently apply to television and radio advertising.
2. To combat the effects of microtargeting, Congress should pass the Honest Ads Act and legislation that protects privacy and illuminates company usage of user data.
3. We must develop legal solutions to fake and automated accounts so that bots and trolls can no longer operate from the shadows. For example, platforms could be required to label all automated accounts, as required by the Bot Disclosure and Accountability Act.
4. Technology companies must be required to share data with researchers, submit their algorithms to evaluation, and be upfront about their efforts to police their platforms.
Tech companies have an important role in reining in digital deception, but government-mandated transparency and accountability are the bedrock of an operational democracy. If we don’t shore up this foundation fast, we put our democracy at risk.
Ann M Ravel is the digital deception project director at MapLight and previously served as chair of the Federal Election Commission.
Samuel Woolley is director of the Digital Intelligence Lab at the Institute for the Future.