
Google’s Gemini shows we are guinea pigs for generative AI

Google’s bumbling new Gen-AI tool isn’t too woke but too rushed

- PARMY OLSON is a Bloomberg Opinion columnist covering technology.

Did you hear? Google has been accused of a secret vendetta against Caucasians. Elon Musk posted about the supposed conspiracy on X more than 150 times this week, seizing on portraits generated by Google’s new AI chatbot Gemini. Like Ben Shapiro, Musk railed at the images’ diversity: female popes! African-looking Nazis! Indigenous founding fathers! Google apologized and paused the feature. Clearly, the company did a shoddy job of over-correcting tech that once had a racist skew. But no, CEO Sundar Pichai hasn’t been infected by a woke mind virus. Rather, he is growth-obsessed.

Google once got in trouble when its photo-tagging tool started labelling some African-American people as apes; it shut down the feature. Three years ago, it fired two of its leading AI ethics researchers. These were the folks whose job was to make sure Google’s tech was fair in how it depicted women and minorities. Not overly diverse like Gemini, but equitable and balanced.

When Gemini started spitting out images of non-Caucasian German World War II soldiers, it was a sign that the ethics team hadn’t grown more powerful but was being ignored amid Google’s race against Microsoft and OpenAI to dominate generative web search. Proper investment in that team would have led to a smarter approach to diversity.

People who test artificial intelligence (AI) systems for safety are outnumbered 30-to-1 by those whose job is to make the systems bigger and more capable, according to an estimate by the Center for Humane Technology. Often they are shouting into a void, told not to get in the way. Google’s earlier chatbot Bard was so faulty that it made factual errors in its own marketing demo. Employees had warned about that, but managers didn’t listen. One posted that Bard was “worse than useless: please do not launch,” and many of the 7,000 staffers who viewed the message agreed, according to a Bloomberg News investigation. Not long after, engineers who had carried out a risk assessment told their Google superiors that Bard could cause harm and wasn’t ready. You can guess what Google did next: it released Bard to the public.

Google’s rushed, faulty AI isn’t alone. Microsoft’s Bing chatbot wasn’t just inaccurate, it was unhinged, telling a New York Times columnist soon after its release that it was in love with him and wanted to destroy things. Google has said that responsible AI is a top priority, and that it was “continuing to invest in the teams” that apply its AI principles to products. A spokeswoman for Google said the company is “continuing to quickly address instances in which [Gemini] isn’t responding appropriately.”

OpenAI, which kickstarted Big Tech’s race for a foothold in generative AI, normalized the rationale for treating us all like guinea pigs for new AI tools. Its website describes an “iterative deployment” philosophy: release products like ChatGPT quickly to study their safety and impact, and to prepare us for more powerful AI in the future. Google’s Pichai says much the same. By releasing half-baked AI tools, he said in an interview last year, he is giving us “time to adapt” before AI becomes super powerful.

Asked what keeps him up at night, Pichai said, with no trace of irony, that it was knowing that AI could be “very harmful if deployed wrongly.” So what was his solution? Pichai didn’t mention investing more in researchers who make AI safe, accurate and ethical, but pointed to greater regulation, a solution beyond his control. “There have to be consequences for creating deepfake videos which cause harm to society,” he said, referring to AI videos that could spread misinformation. “Anybody who has worked with AI for a while, you know, you realize this is something so different and so deep that we would need societal regulations to think about how to adapt.”

This is a bit like the chef of a restaurant saying, “Making people sick with salmonella is bad, and we need more food inspectors to check our raw ingredients,” while knowing there are no food inspectors to speak of and won’t be for years. It gives the chef licence to dish out tainted food. The same is true in AI. With regulations far off, Pichai knows the onus is on his company to build AI systems that are fair and safe. But now that he is caught up in the race to put generative AI into everything quickly, there’s little incentive to ensure that it is.

We know about Gemini’s diversity bug because of all the tweets on X, but the AI model may have other problems we don’t know about—issues that may not trigger Elon Musk but are no less insidious. The female popes and non-Caucasian founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety. Expect our role as Big Tech guinea pigs to continue until that changes.

Photo (Bloomberg): Google’s Sundar Pichai should invest more in AI safety and quality
