The Hindu - International

Gemini’s racial images are warning of tech titans’ power to ‘influence’ views

Google CEO Sundar Pichai last month slammed errors by his company’s AI app, after images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people; many feel the stumble highlights the inordinate power held by a handful of companies over AI platforms


For people at the trendsetting tech festival in Texas, U.S., the scandal that erupted after Google’s Gemini chatbot cranked out images of Black and Asian Nazi soldiers was seen as a warning about the power artificial intelligence can give tech titans.

Google CEO Sundar Pichai last month slammed as “completely unacceptable” errors by his company’s Gemini AI app, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.

Social media users mocked and criticised Google for the historically inaccurate images, like those showing a female Black U.S. Senator from the 1800s, when the first such Senator was not elected until 1992.

“We definitely messed up on the image generation,” Google cofounder Sergey Brin said at a recent AI “hackathon,” adding that the company should have tested Gemini more thoroughly.

Folks interviewed at the popular South by Southwest arts and tech festival in Austin said that the Gemini stumble highlights the inordinate power a handful of companies have over the artificial intelligence platforms that are poised to change the way people live and work.

‘Too woke’

“Essentially, it was too ‘woke,’” said Joshua Weaver, a lawyer and tech entrepreneur, meaning Google had gone overboard in its effort to project inclusion and diversity.

Google quickly corrected its errors, but the underlying problem remains, said Charlie Burgoyne, chief executive of the Valkyrie applied science lab in Texas.

He equated Google’s fix Gemini to putting a of

BandAid wound.

While Google long had the luxury of having time to refine its products, it is now scrambling in an AI race with Microsoft, OpenAI, Anthropic and others, Mr. Weaver noted, adding, “They are moving faster than they know how to move.” Mistakes made in an effort at cultural sensitivity are flashpoints, particularly given the tense political divisions in the U.S., a situation exacerbated by Elon Musk’s X platform, the former Twitter.

“People on Twitter are very gleeful to celebrate any embarrassing thing that happens in tech,” Mr. Weaver said, adding that reaction to the Nazi gaffe was “overblown.”

The mishap did, however, call into question the degree of control those using AI tools have over information, he maintained.

In the coming decade, the amount of information — or misinformation — created by AI could dwarf that generated by people, meaning those controlling AI safeguards will have huge influence on the world, Mr. Weaver said.

Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd., said she could imagine a future in which someone gets into a robotaxi and, “if the AI scans you and thinks that there are any outstanding violations against you... you’ll be taken into the local police station,” not your intended destination.

AI is trained on mountains of data and can be put to work on a growing range of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.

Cultural bias

But that data comes from a world rife with cultural bias, disinformation and social inequity — not to mention online content that can include casual chats between friends or intentionally exaggerated and provocative posts — and AI models can echo those flaws.

With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity. The effort backfired.

“It can really be tricky, nuanced and subtle to figure out where bias is and how it is included,” said technology lawyer Alex Shahrestani, a managing partner at Promise Legal law firm for tech companies.

Even well-intentioned engineers involved with training AI cannot help but bring their own life experience and subconscious bias to the process, he and others believe.

Mr. Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in “black boxes,” so users are unable to detect any hidden biases. “The capabilities of the outputs have far exceeded our understanding of the methodology,” he said.

Experts and activists are calling for more diversity in teams creating AI and related tools, and greater transparency as to how they work — particularly when algorithms rewrite users’ requests to “improve” results.

A challenge is how to appropriately build in perspectives of the world’s many and diverse communities, Jason Lewis of the Indigenous Futures Resource Center and related groups said here.

At Indigenous AI, Mr. Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the “arrogance” of big tech leaders. His own work, he told a group, stands in “such a contrast from Silicon Valley rhetoric, where there is a top-down ‘Oh, we’re doing this because we are going to benefit all humanity’ bullshit,” drawing laughter.

Shot in the foot: With Gemini, Google engineers tried to rebalance the algorithms of AI models to provide results which better reflected human diversity, but the effort backfired. (Photo: REUTERS)
