The Guardian (USA)

Scared about the threat of AI? It’s the big tech giants that need reining in

- Devdatt Dubhashi and Shalom Lappin

In his 2021 Reith lectures, the third episode of which airs tonight, the artificial intelligence researcher Stuart Russell takes up the idea of a near-future AI that is so ruthlessly intelligent that it might pose an existential threat to humanity. A machine we create that might destroy us all.

This has long been a popular topic with researchers and the press. But we believe an existential threat from AI is both unlikely and in any case far off, given the current state of the technology. However, the recent development of powerful, but far smaller-scale, AI systems has already had a significant effect on the world, and the use of existing AI poses serious economic and social challenges. These are not distant, but immediate, and must be addressed.

These include the prospect of large-scale unemployment due to automation, with attendant political and social dislocation, as well as the use of personal data for purposes of commercial and political manipulation. The incorporation of ethnic and gender bias in datasets used by AI programs that determine job candidate selection, creditworthiness, and other important decisions is a well-known problem.

But by far the most immediate danger is the role that AI data analysis and generation plays in spreading disinformation and extremism on social media. This technology powers bots and amplification algorithms. These have played a direct role in fomenting conflict in many countries. They are helping to intensify racism, conspiracy theories, political extremism and a plethora of violent, irrationalist movements.

Such movements are threatening the foundations of democracy throughout the world. AI-driven social media was instrumental in mobilising January’s insurrection at the US Capitol, and it has propelled the anti-vax movement since before the pandemic.

Behind all of this is the power of big tech companies, which develop the relevant data processing technology and host the social media platforms on which it is deployed. With their vast reserves of personal data, they use sophisticated targeting procedures to identify audiences for extremist posts and sites. They promote this content to increase advertising revenue, and in so doing, actively assist the rise of these destructive trends.

They exercise near-monopoly control over the social media market, and a range of other digital services. Meta, through its ownership of Facebook, WhatsApp and Instagram, and Google, which controls YouTube, dominate much of the social media industry. This concentration of power gives a handful of companies far-reaching influence on political decision-making.

Given the importance of digital services in public life, it is reasonable to expect that big tech would be subject to the same sort of regulation that applies to the corporations that control markets in other parts of the economy. In fact, this is not generally the case.

Social media companies have not been restricted by the antitrust regulations, truth-in-advertising legislation, or laws against racist incitement that apply to traditional print and broadcast networks. Such regulation does not guarantee responsible behaviour (as rightwing cable networks and rabid tabloids illustrate), but it does provide an instrument of constraint.

Three main arguments have been advanced against increased government regulation of big tech. The first holds that it would inhibit free speech. The second argues that it would degrade innovation in science and engineering. The third maintains that socially responsible companies can best regulate themselves. These arguments are entirely specious.

Some restrictions on free speech are well motivated by the need to defend the public good. Truth in advertising is a prime example. Legal prohibitions against racist incitement and group defamation are another. These constraints are generally accepted in most liberal democracies (with the exception of the US) as integral to the legal approach to protecting people from hate crime.

Social media platforms often deny responsibility for the content of the material that they host, on the grounds that it is created by individual users. In fact, this content is published in the public domain, and so it cannot be construed as purely private communication.

When it comes to safety, government-imposed regulations have not prevented dramatic bioengineering advances, like the recent mRNA-based Covid vaccines. Nor did they stop car companies from building efficient electric vehicles. Why would they have the unique effect of reducing innovation in AI and information technology?

Finally, the view that private companies can be trusted to regulate themselves out of a sense of social responsibility is entirely without merit. Businesses exist for the purpose of making money. Business lobbies often ascribe to themselves the image of a socially responsible industry acting out of a sense of concern for public welfare. In most cases this is a public relations manoeuvre intended to head off regulation.

Any company that prioritises social benefit over profit will quickly cease to exist. This was illustrated by Facebook whistleblower Frances Haugen’s recent congressional testimony, which indicated that the company’s executives chose to ignore the harm that some of their “algorithms” were causing, in order to sustain the profits they provided.

Consumer pressure can, on occasion, act as leverage for restrainin­g corporate excess. But such cases are rare.

In fact, legislation and regulatory agencies are the only effective means that democratic societies have at their disposal for protecting the public from the undesirable effects of corporate power.

Finding the best way to regulate a powerful and complex industry like big tech is a difficult problem. But progress has been made on constructive proposals. Lina Khan, the US federal trade commissioner, has advanced antitrust proposals for dealing with monopolistic practices in markets. The European commission has taken a leading role in instituting data protection and privacy laws.

Academics MacKenzie Common and Rasmus Kleis Nielsen offer a balanced discussion of ways in which government can restrict disinformation and hate speech in social media, while sustaining free expression. This is the most complex, and most pressing, of the problems involved in controlling technology companies.

The case for regulating big tech is clear. The damage it is doing across a variety of domains is throwing into question the benefits of its considerable achievements in science and engineering. The global nature of corporate power increasingly limits the ability of national governments in democratic countries to restrain big tech.

There is a pressing need for large trading blocs and international agencies to act in concert to impose effective regulation on digital technology companies. Without such constraints big tech will continue to host the instruments of extremism, bigotry, and unreason that are generating social chaos, undermining public health and threatening democracy.

Devdatt Dubhashi is professor of data science and AI at Chalmers University of Technology in Gothenburg, Sweden. Shalom Lappin is professor of natural language processing at Queen Mary University of London, director of the Centre for Linguistic Theory and Studies in Probability at the University of Gothenburg, and emeritus professor of computational linguistics at King’s College London.

Photograph: Tony Avelar/AP. ‘Meta, through its ownership of Facebook, WhatsApp and Instagram, together with Google, which controls YouTube, dominate much of the social media industry.’
