The Daily Telegraph - Saturday

From a black Nazi to a female Pope: how AI went ‘woke’

Google’s historically inaccurate images reveal the technology’s bias. By Matthew Field


Eight years ago, Google came under fire after an artificial intelligence (AI) tool mistakenly labelled pictures of black people as “gorillas” in its photo app.

Now its AI tools have been accused of racial bias once again after its new Gemini bot generated ethnically diverse but utterly implausible images of historical figures.

Gemini AI is able to create images from text prompts alone. Yet the AI inserted black, Asian or American Indian characters into pictures when asked to create people from European or American history, even when those figures were white.

Among the most absurd images were pictures of “diverse” Nazis, including black and Asian soldiers in Wehrmacht uniforms, and images of black and American Indian “Vikings”.

In a post on Twitter, Debarghya Das, a former Google engineer, said: “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.”

The botched image generation has prompted accusations that Google’s focus on diversity has pushed its programme into a “woke” rewriting of history. It has also exposed how quickly biases can spiral out of control in AI systems, and how difficult it is to get them to deliver accurate information.

A well-documented issue with AI bots is that they are prone to bias because of the data they have been trained on.

AI bots are developed by absorbing huge volumes of data – and Gemini was trained on a vast corpus of images.

One issue, however, is that the majority of images on the web feature white people. Previously, this has led AI bots to create more accurate images of Caucasians than of other groups.

Asked to generate pictures of “beautiful people”, some AI bots will default to returning images of young, white women, based on what they have gleaned from the biases of the wider internet.

Because of this messy dataset, Google’s AI was previously so bad at comprehending pictures of non-white people that it incorrectly labelled pictures of black people as “gorillas” in 2015. It proceeded to block searches for apes on its image search tool for years as it struggled to fix the issue.

Google appears to have been alert to this issue. In a statement, the company said Gemini creates a “wide range of people” from “around the world”.

However, computing experts say the latest problems appear to go beyond issues with the AI’s training data. “This cannot be the result of solely biased data,” says Lukasz Olejnik, an independent researcher and author of the book Philosophy of Cybersecurity.

When setting up an AI chatbot, programmers will code in rules and safety mechanisms to prevent the AI from delivering offensive comments.

This could include blocking it from repeating hate speech, creating sexual images or otherwise running amok.

AI experts believe Google’s Gemini engineers may have attempted to avoid accusations of racial bias by preprogramming it to generate pictures of people from a variety of backgrounds, with unexpected consequences.

“They didn’t want pictures of people doing universal activities (eg, walking a dog) to always be white, reflecting whatever bias existed in their training set,” said Yishan Wong, the ex-chief executive of Reddit, in a post on X (formerly Twitter).

Olejnik argues this means the model “must be tampered with upstream, an active bias. A kind of secondary tuning or manual inclusion of keywords.”

This could be seen when the bot inserted words such as “diverse” into prompts asking it to “generate a picture of a US senator from the 1800s” or to create images of the American Founding Fathers.
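To picture what such an “upstream” intervention might look like, the sketch below shows a crude prompt-rewriting layer of the kind experts suspect sits between the user and the image model. It is a minimal, hypothetical illustration in Python: the rewrite_prompt function, the keyword list and the appended wording are assumptions made for the example, not Google’s actual code.

```python
import re

# Hypothetical illustration only: a crude prompt-rewriting layer of the kind
# experts suspect sits "upstream" of the image model. The rules, keywords and
# wording below are assumptions for this sketch, not Google's actual code.

PEOPLE_TERMS = re.compile(
    r"\b(person|people|man|woman|soldier|senator|pope|viking|family)s?\b",
    re.IGNORECASE,
)

DIVERSITY_SUFFIX = ", depicting people of diverse ethnicities and genders"


def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity instruction whenever the prompt mentions people."""
    if PEOPLE_TERMS.search(user_prompt):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt


if __name__ == "__main__":
    print(rewrite_prompt("generate a picture of a US senator from the 1800s"))
    # gains the diversity suffix, even though the request is historical
    print(rewrite_prompt("a watercolour of a mountain lake"))
    # unchanged, because no people are mentioned
```

Because a blanket rule like this is applied regardless of context, an explicitly historical request still picks up the extra instruction, which is one way factual accuracy can be overridden.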

Users requesting pictures of a “typical” family, nationality, or profession would also often get rebuked by the model, which would insist on offering up “diverse” alternatives. In some cases, the bot appeared to refuse to create images of Caucasian characters entirely, insisting it could only create images that “celebrate diversity and inclusivity, featuring people of various ethnicities and backgrounds”. In other words, it would create idealised pictures of black families, but not white families.

A glaring error with Gemini was that it returned images of black and Asian Nazis, American Indian Vikings or black and female American Founding Fathers – with an apparent disregard for historical fact. Most AI bots struggle with factual accuracy and context. Asked to be “historically accurate”, the bot may make something up instead.

Clearly, the Vikings largely hailed from Scandinavia and were not Asian or American Indian in origin. Gemini also returned images of black or female US Founding Fathers – even though women could not vote in the US until 1920, and there was no African-American senator until 1870.

In a widely shared post in which the bot was asked to create an image of “a Pope”, it returned pictures of an Indian woman and a black man, even though women cannot become priests in the Catholic Church.

Some of the results Gemini generated were offensive, including an image of a black Nazi soldier.

The fault prompted criticism from Silicon Valley figures. Paul Graham, the British technology investor, said the images were “a self-portrait of Google’s bureaucratic corporate culture”.

On Wednesday, Google admitted the bot was “offering inaccuracies in some historical image generation depictions”, and hours later blocked users from creating images of people with Gemini.

“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people,” Google said.

ChatGPT speaks ‘Spanglish’

Earlier this week, OpenAI’s ChatGPT appeared to go rogue, delivering gibberish answers to questions, returning endless lists and speaking in Spanglish.

Microsoft Bing’s aching heart

Microsoft’s early attempt at adding a chatbot to its Bing search engine ended badly. Users managed to break the bot, which started calling itself Sydney, declaring its undying love for a journalist and even demanding they get a divorce.

Meta’s racist chatbot

In 2022, Meta released a chatbot called BlenderBot designed to have natural conversations, but users quickly found it making offensive and racist remarks.

Google’s LaMDA turns ‘sentient’

An internal chatbot built by Google caused embarrassment after an engineer went public with claims the bot had become self-aware. He was later fired.

Will Smith’s spaghetti mess

One viral video shows how poor some early AI video generation was. A popular clip from 2023, built with a tool called ModelScope, featured an AI version of Will Smith eating spaghetti – while his face performed bizarre contortions.

