The Guardian (USA)

AI hysteria is a distraction: algorithms already sow disinformation in Africa

- Odanga Madung

More than 70 countries are due to hold regional or national elections by the end of 2024. It will be a period of huge political significance across the globe, with more than 2 billion people (mostly from the global south) directly affected by the outcome of these elections. The stakes for the integrity of democracy have never been higher.

As concerns mount about the influential role of information pollution, disseminated through the vast platforms of US and Chinese corporations, in shaping these elections, a new shadow looms: how artificial intelligence – more specifically, generative AI such as OpenAI’s ChatGPT – has increasingly moved into the mainstream of technology.

The recent wave of hype around AI has seen a fair share of doom-mongering. Ironically, this hysteria has been fed by the tech industry itself. OpenAI’s founder, Sam Altman, has been touring Europe and the US, making impassioned pleas for regulation of AI while also discreetly lobbying for favourable terms under the EU’s proposed AI Act.

Altman calls generative AI a major threat to democracy, warning of an avalanche of disinformation that blurs the lines between fact and fiction. But this discussion needs nuance, because it misses the point: we reached that juncture a long time ago.

Tech multinationals such as TikTok, Facebook and Twitter built highly vulnerable AI systems and left them unguarded. As a result, disinformation spread via social media has become a defining feature of elections globally.

In Kenya, for example, I spent months documenting how Twitter’s trending algorithm was easily manipulated by a thriving disinformation-for-hire industry to spread propaganda and quash dissent through the platform. Similar discoveries were made by other journalists in Nigeria prior to its recent elections.

My research in Kenya also found that TikTok’s “For You” algorithm was feeding hundreds of hateful and inflammatory propaganda videos to millions of Kenyans ahead of the country’s 2022 elections. TikTok and Twitter have also recently come under scrutiny for their role in amplifying the hate-filled backlash towards LGBTQ+ minorities in Kenya and Uganda.

Authoritarianism uses emotions to polarise people, finding fertile ground in specific events and febrile political climates. Social media platforms such as Facebook and TikTok have accelerated the spread of propaganda through microtargeting and by evading election silence windows, or blackout periods, making distribution remarkably simple.

What this means is that there is no need to rely exclusively on content generated by AI to carry out effective disinformation campaigns. The crux of the issue lies not in the content made by AI tools such as ChatGPT but in how people receive, process and comprehend the information facilitated by the AI systems of tech platforms.

For this reason, I take this sudden realisation by the tech industry with a pinch of salt. By letting Altman define what we should care about when it comes to AI, we are allowing a corporation to define the safety and risk mitigation of this technology, instead of tried and tested institutions, such as consumer and data protection agencies.

For example, the tech industry has followed the colonial path of its western corporate predecessors, with their extractive, destructive practices in developing countries. In its “please regulate us” campaign, the tech industry has conveniently ignored the fact that, in its efforts to build these AI systems, it has nearly destroyed people’s lives along the way.

I’ve spoken at length to the “data workers” who train the content-moderation algorithms of Meta’s and TikTok’s platforms. Many of them developed post-traumatic stress disorder on the job and were paid peanuts for it. Similarly, those who carried out the data-cleaning for Sam Altman’s darling, ChatGPT, suffered the same fate – but guess who’s making all the money from this suffering?

“AI doomerism”, as addressed and defined by predominantly white monopolistic capitalists, conveniently selects what to focus on and what to ignore. Kenyans and many other Africans helped make ChatGPT the phenomenon it is today. Its path towards becoming one of the fastest-growing platforms the world has ever seen was fuelled by them.

In essence, they are the ones making Sam Altman and Mark Zuckerberg rich, because without them their platforms would be unusable. But I bet Africa’s people don’t even cross the minds of Altman and his colleagues.

So I implore advocates and observers of democracy, especially in developing countries, not to lose sight of the existing harms perpetuated by AI. We don’t need to imagine a distant future – the problems are already here. Born into a capitalist world, this technology will only further the injustices that exist within its very fabric.

If we learn from the rise of the current tech corporations, there will be no slowdown in the speed of AI’s development. Thus we need to understand the political and economic conditions from which it emerges. Its power is centralised, its economics are extractive, and its growth is reckless.

Odanga Madung is a Mozilla fellow, journalist and data scientist based in Nairobi, Kenya

Photograph: B Muthoni/Sopa/Shutterstock. Kenyans follow the election news in Nairobi in 2022, when TikTok’s “For You” algorithm fed inflammatory propaganda videos to millions of voters.