AI flubs: Google’s Gemini fails its history test


Google’s AI has been caught “demonstrating comically woke bias,” said Liz Wolfe in Reason. The world’s sixth-most valuable company last week had to shut down many of the image-generation features of its highly hyped Gemini artificial intelligence model because it insisted on making almost every image of a person it generated—even in “historical events”—nonwhite. In Gemini’s world, America’s Founding Fathers are “at least partially Black,” the Nazis are a multiethnic crew including Blacks and Asians, and the pope is a woman. Chatbots are trained on “vast quantities of text,” and they’ve been criticized for amplifying racial and gender stereotypes. But, as if to preemptively “correct against existing bias,” Google’s engineers have gone overboard “to the point of hilarious and extreme inaccuracy.” Its bias is so extreme that even “if you ask Gemini to make you an image of a White scientist,” it simply refuses to do it.

The text-generating aspect of Gemini is “every bit as shot through with ultra-progressive bias,” said Jeffrey Blehar in the National Review. It will “revolt” against queries that challenge liberal assumptions, such as anything promoting fossil fuels or even meat. Forgive me for not laughing about Gemini’s image embarrassment, because “it’s not really much of a joke in the long term.” This is Google’s “attempt to erect an intellectual prison.” Google will make changes to Gemini, but it will never genuinely fix it, because Google wants its AI “to shape our understanding of the world.” Google is blowing it, said Ben Thompson in Stratechery. “This is a company that should dominate AI, thanks to its research and its infrastructure.” Instead, it seems to be led by employees who are “attracted to Google’s power and its potential to help them execute their political program.” The best thing you can say about Google’s management is that they “just want to build products and not be yelled at.” That’s not leadership.

“Image generators are profoundly strange pieces of software,” said John Herrman in New York magazine. They have been “trained on billions of pieces of scraped public and semipublic data,” and as such they “tend to reproduce some fairly predictable biases.” It’s a tricky problem to solve, and this episode will make solving it harder; the discussion has deteriorated into a fight “between anti-woke culture warriors and a hypercautious tech giant.” This is a sign that AI guardrails around prompts have gotten absurd, said Casey Newton in Platformer. Chatbots have grown increasingly “censorious,” out of platforms’ fear that they’ll be held responsible, and even legally liable, for any use of their models. The way to fix AI, though, is not to play it so safe that the models become useless but instead to be “upfront” about “the biases and limitations of their training data.”

Twisted history, created by Gemini
