Boston Herald

Image generator a diversity challenge for Google


Google apologized for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn’t make sense.

The partial explanation Friday for why its images put people of color in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.

“It’s clear that this feature missed the mark,” said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google’s search engine and other businesses. “Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.”

Raghavan didn’t mention specific examples, but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation “and raise many concerns regarding social and cultural exclusion and bias.” Those considerations informed Google’s decision not to release “a public demo” of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology sparked by the advent of OpenAI’s chatbot ChatGPT.

The problems with Gemini are not the first to recently affect an image-generator. Microsoft had to adjust its own Designer tool several weeks ago after some were using it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

“When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people,” Raghavan said Friday. “And because our users come from all over the world, we want it to work well for everyone.”

He said many people might “want to receive a range of people” when asking for a picture of football players or someone walking a dog. But users looking for someone of a specific race or ethnicity or in particular cultural contexts “should absolutely get a response that accurately reflects what you ask for.”

While the tool overcompensated in response to some prompts, Raghavan said, in others it was “more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”

Photo: RICHARD DREW, FILE — THE ASSOCIATED PRESS. Google said Thursday it’s temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for “inaccuracies” in historical depictions that it was creating.
