Business Day

New AI tool has the potential to cause havoc with disinformation

• This year is a bumper one for elections, and many fear AI in the hands of bad actors

KATE THOMPSON DAVY ● Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.

The generative artificial intelligence (AI) announcements continue their near-ceaseless march across our screens, wave after wave of innovation, application and new launches.

On February 8 Google DeepMind shared the latest iteration of its AI chatbot Gemini (previously called Google Bard), promising long-context understanding and other improvements. Last week it opened the tool up to cloud customers and developers, who can begin building with the Gemini API in a browser or via the cloud console.

Not to lose its share-of-voice dominance, late last week OpenAI announced it had built a new generative video model called Sora that allows people to — its post says — “create realistic and imaginative scenes from text instructions”.

At the time of writing, Sora was only open to OpenAI’s “red teamers”, a select network of external experts, but social media was quickly brimming with these short high-definition videos — not to mention everyone’s take on whether the videos were amazingly real or buggy as hell.

Here, I’d say, both takes are true. The clips resulting from prompts as simple as “tour of an art gallery with many beautiful works of art in different styles” could be mistaken for real by the shortsighted or easily distracted. But there are enough distortions and weird details to give them away: objects that appear and disappear randomly, and more examples of just how hard AI finds the rendering of hands.

The videos that show extensive movement tracking also have the same slightly-off feeling of sweeping video game scenes. But, armchair critics should remember, this is the first launch of a new tool. Even if the creepy AI-generated grandma blowing out birthday candles has you reaching for your rosary or similar, it shows extraordinary potential.

Potential is the operative word though, and it isn’t strictly positive. This tool has the potential to put creatives out of work, and the potential to wreak havoc in the sphere of deepfakes and disinformation. This is a bumper year for elections around the world, and many fear the power these generative AI tools could put in the hands of bad actors.

In an attempt to get ahead of this, on Friday a collective of some of the world’s largest tech firms signed a “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a voluntary measure announced at the Munich Security Conference. Signatories include Google, Meta, X (formerly Twitter), Microsoft (OpenAI’s biggest investor) and Amazon, all committing to seek out and counter deceptive content aimed at elections and voters — using their own smart tech in the fight against ... how others are using their smart tech.

The pledges include a promise to be transparent about the actions firms take in this endeavour, but no timelines are provided and sceptics remain — surprisingly — sceptical that this will achieve anything more than good PR opportunities for the tech giants stepping up with the promise of self-regulation.

Frankly speaking, committing to such an accord is the literal least these companies can do, short of genuinely proactive self-regulation or ethical management, which in practice sometimes looks to me like the corporate equivalent of sticking your fingers in your ears and singing.

“Please, sirs, can we remove calls for violence targeting minorities from your feeds, and stop training your for-profit, industry-disruptive tools via the unauthorised use of intellectual property from the very people they will most immediately affect?” “La la la la, we can’t hear you.”

The tech CEOs will have to forgive my reluctance to simply trust them, especially if we consider the track record of, say, social media platforms and their attempts to manage wayward and harmful content over the years.

Diehard laissez-faire proponents will vehemently disagree, so this isn’t a plea directed at them. Rather, I am looking at my middle-of-the-roaders. As cool as their tech is, as exciting as the products are, these companies are as extractive as miners, but the resource they tap into is us: our attention, time and data. We can learn from mining’s example, and from how hampered efforts to rehabilitate land and communities after the fact have been.

And like mining firms in their heyday, the amount of cash being funnelled in the direction of big tech is hard to wrap your head around, especially for AI-related projects, an industry that was marginally more than a thought experiment until recently. Of course, there were companies and proto-services set up before that, but three years ago it felt like the only people tinkering with these tools were geeks and freaks — and yes, that includes the technology journalism cohort. That turned into hundreds of millions of people virtually overnight.

OpenAI — the definitive category leader — is constantly revising its revenue estimates upwards and looking to raise far more to start making the chips it relies on to power its tools. According to The Information, citing unnamed insiders, the company was generating revenue of $80m-$100m a month in 2023 from services such as ChatGPT Plus. To really illustrate the rate of growth, OpenAI generated just $28m in total revenue in 2022.

OpenAI offers subscription-based access to the latest and more private versions of its tools through ChatGPT Plus, Enterprise and the newly added ChatGPT Team. In January OpenAI launched the GPT Store, a place to find customised versions of the chatbot, built both internally and by external users. According to a blog post on its site, OpenAI plans to launch a GPT builder revenue programme, not unlike an app store.

That’s just one company. The generative AI market is expected to grow from about $40bn in 2022 to $1.3-trillion in the next 10 years, according to Bloomberg Intelligence.

I don’t say this to pick on OpenAI, but to say hold your sympathetic “shames” for fragile democracies and deepfake victims. Self-regulation is the least it can do — it’s a start, not the solution.

Right direction: Big tech is taking steps towards self-regulation with regard to the use of generative AI tools, which while not a solution is certainly a good start. /Reuters
