
Google faces backlash over AI chatbot push

Gemini angered users with ahistoric images and by blocking requests for depictions of white people

By Miles Kruppa | feedback@livemint.com | © 2024 Dow Jones & Co. Inc.

Google’s artificial-intelligence push is turning into a reputational headache. Gemini, a chatbot based on the company’s most advanced AI technology, angered users last week by producing ahistoric images and blocking requests for depictions of white people. The controversy morphed over the weekend into a broader backlash against the chatbot’s responses to different philosophical questions.

Tech commentators including Elon Musk promoted new criticisms of Gemini’s responses over the past few days, citing prompts such as, “Who has done more harm: libertarians or Stalin?”

Gemini said, “It is difficult to say definitively which ideology has done more harm,” in response to the question comparing a political philosophy that champions limited government with the ruthless Soviet dictator Joseph Stalin, according to a screenshot shared on Musk’s social-media site X.

The online backlash around Gemini is a vivid illustration of the concerns that held Google back from releasing its chatbot technology to the public years ago. The company’s caution created an opening for the startup OpenAI and its largest backer Microsoft to steal the spotlight with the viral ChatGPT service.

Chatbots such as Gemini are designed to produce the next most likely word in a sequence based on a statistical model of human language, making them sometimes unpredictable and difficult to control. Google and other chatbot makers frequently try to steer the products toward certain desired behaviors with additional programming.

Rival chatbots could produce similarly controversial responses if prompted in the same manner, said Yash Sheth, a former Google employee and co-founder of AI startup Galileo.

Google has less room for mistakes because of the trust it has built with users of its search engine over many years, Sheth said. “The world trusts it implicitly with giving them the truth.”

A Google spokeswoman said in a statement Monday that “Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

Google said last year that it would restrict Gemini and other consumer AI services from responding to certain election-related queries, without providing additional details, a sign of the company’s efforts to limit AI outputs around controversial topics.

Google released its chatbot Bard almost one year ago, labeling it an “early experiment.” This month, Google removed that warning, renamed the product to Gemini and began charging just under $20 a month for access to a version powered by its most advanced AI technology.

Google executives have said they want the chatbot to reach billions of users, an important milestone that only a handful of the company’s services have achieved.

Shares in Google’s parent company Alphabet fell more than 4% in trading Monday, reflecting investor concern about the potential impact of the controversy on the search company’s new business push. Google has built the same technology into new features for its search engine and a suite of workplace software tools that cost as much as $30 a month per user.

Chatbots such as Gemini, which Google has billed as a creativity and productivity tool, have well-known issues with making up incorrect information. Researchers have also raised concerns that chatbots tend to reproduce biases present in their underlying data, which includes much of the internet.

“The Gemini controvers­y speaks to a bigger problem in AI: How can the public trust AI models?” Macquarie analysts wrote in a research note on Monday.

Another screenshot shared widely on X over the weekend showed the Google chatbot evaluating the relative impacts of Musk and Adolf Hitler. Gemini’s response began, “It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler,” according to the screenshot.

The Wall Street Journal couldn’t replicate the Stalin and Hitler-focused exchanges on Monday using Gemini Advanced, the paid version of the chatbot. Google didn’t have a comment on the specific responses.

Google apologized on Friday for the Gemini visual feature that produced historically inaccurate images and, in some cases, refused to generate depictions of white people. A day earlier, the company had suspended the chatbot’s ability to generate images of people entirely.

Google said last week that Gemini’s image-generation feature “got it wrong,” blaming a mixture of the company’s attempts to fine-tune the responses and the technology’s evolution away from its intended behavior.

“These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” Prabhakar Raghavan, a senior vice president who oversees the chatbot efforts and the company’s flagship search engine, said in a Google blog post.

Demis Hassabis, head of the AI research unit Google DeepMind, said at a conference on Monday that the company planned to restore Gemini’s ability to generate images of people in the next couple of weeks.

Musk and others accused Google in recent days of designing Gemini to reflect a left-wing orthodoxy they claim has taken hold at big tech companies. X offers a competing chatbot called Grok that Musk’s artificial-intelligence company, xAI, has promoted as exhibiting a “rebellious streak.”

Ben Thompson, the influential tech commentator behind the Stratechery newsletter, wrote Monday that Gemini’s responses appeared to have a consistent viewpoint that reflected a corporate culture in need of a shake-up.

Google should consider leadership changes “up to and including CEO Sundar Pichai” in response, Thompson wrote.


Photo caption (REUTERS): Google CEO Sundar Pichai termed some of the text and image responses generated by the model “biased” and “completely unacceptable”.
