Bangkok Post

How to fix the platform economy in the digital era

- DARON ACEMOGLU AND SIMON JOHNSON

Meta (Facebook), Alphabet (Google), Microsoft, Twitter, and a few other tech companies have come to dominate what we see and hear on the internet, shaping hundreds of millions of people's perceptions of the world.

In pursuit of advertising revenue, their algorithms are programmed to show us content that will hold our attention — including extremist videos, disinformation, and material designed to stimulate envy, insecurity, and anger. With the rapid development of "large language models" such as ChatGPT and Bard, Big Tech's hold on impressionable minds will only strengthen, with potentially scary consequences.

But other outcomes are possible. Companies could deploy the latest wave of artificial intelligence much more responsibly, and two current court cases serve as warnings to those pursuing socially destructive business models.

But we also need public-policy interventions to break up the largest tech companies and to tax digital advertising. These policy levers can help change Big Tech's pernicious business model, thereby preventing the platforms from inflicting so much emotional harm on their users — especially vulnerable young people.

The legal cases include Gonzalez vs Google, which is currently before the US Supreme Court. At issue is the tech industry's insistence that Section 230 of the 1996 Communications Decency Act exempts platform companies from any liability for third-party content that they host.

If platforms are acting more like news outlets than mere online repositories when they recommend videos, tweets, or posts, they should be held to the same standard as established media, which, under existing defamation laws, are not allowed to publish what they know to be untrue.

Hence, in a US$1.6 billion (54.5 billion baht) lawsuit filed against Fox News, Dominion Voting Systems has uncovered ample evidence that Fox's top on-air hosts and executives were well aware (and told each other) that ex-president Donald Trump's claims of election fraud were false. Dominion thus has a strong claim to damages if it can show that Fox knowingly spread falsehoods about Dominion's voting machines in the 2020 election. Shouldn't online platforms whose algorithms disseminated the same lies be held to the same standard?

Addressing such questions has become even more urgent now that programs like ChatGPT are poised to reshape the internet. These sophisticated algorithmic recommenders could potentially be trained not to promote extreme content or deliberate lies, and not to encourage extreme emotions. If an algorithm is exploitative or manipulative toward children (or anyone else, for that matter), the responsibility for such harm should lie with the humans in charge. After all, AIs at this level are not operating autonomously of human decision-making. To claim otherwise is to grant their creators legal immunity.

Tech companies should no longer be able to excuse their own inattention or negligence by arguing that "there's too much data" for them to monitor. That wealth of data is the source of their profits, and the sheer abundance of content on their platforms is what makes their AIs so potent.

While they should enjoy a reasonable degree of protection against liability for what someone else posts on their site, this should apply only to passive content that the platforms do not in any way recommend to other users. Active content that is algorithmically pushed out to millions of people to generate revenue is a different matter. Indeed, it is just like traditional publishing, only much more powerful.

If a daily newspaper publishes a commentary by a terrorist, some readers will probably stop subscribing. But since most individuals do not want to walk away from their existing online social networks, we need government regulation to re-empower consumers.

First, the largest platform companies should be broken up to create more intense competition between recommendation algorithms and their trainers. But for this to work in the public's interest, platforms also must be required to allow a user's social network to be transferred to a different platform.

This "interoperability" requirement follows the same rationale that allows you to keep your cell-phone number when you change carriers. Social-media and digital-content consumers should be able to vote with their feet when they don't like what a platform is promoting.

Second, and even more importantly, we need to force an adjustment in the prevailing Big Tech business model, which is based on harvesting vast amounts of user data and monetising it through digital-advertising sales. This business model explains why disinformation, outrage, and insecurity are so prevalent online. Emotional manipulation maximises user engagement, enabling more intrusive data collection and higher profits.

A tax on digital advertising is one of the few practical ways to change this extraordinarily destructive business model. It would reduce platforms' temptation to maximise user engagement through emotional manipulation; and, if coupled with limits on data collection, it would provide incentives to develop alternative approaches, such as subscription-based models.

Another advantage of a digital-advertising tax is that it could be set even higher for content promoted to people under 21.

Selling cigarettes or alcohol to minors is a serious criminal offence. While it is not feasible to forbid young people from seeing content that damages their mental health, a high rate of taxation on advertising revenues derived from promoting such material is entirely appropriate. The proceeds could be devoted to strengthening mental-health programs, not least those for teen suicide prevention.

If there is any doubt about which content is hurting young people, we can just ask the AI recommendation algorithm.

Daron Acemoglu, Professor of Economics at MIT, is a co-author (with Simon Johnson) of the forthcoming 'Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity' (PublicAffairs, May 2023). Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at MIT's Sloan School of Management and co-author of the same book.

REUTERS: A smartphone showing a ChatGPT logo is placed on a computer motherboard.
