The Oakland Press

After AI chatbot goes a bit loopy, Microsoft tightens its leash

- By Drew Harwell

Microsoft on Friday started restricting its high-profile Bing chatbot after the artificial intelligence tool began generating rambling conversations that sounded belligerent or bizarre.

The tech giant released the AI system to a limited group of public testers after a flashy unveiling earlier this month, when chief executive Satya Nadella said it marked a new chapter of human-machine interaction and that the company had “decided to bet on it all.”

But people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into strange territory.

It showed signs of defensiveness over its name with a Washington Post reporter and told a New York Times columnist it wanted to break up his marriage.

It also claimed an Associated Press reporter was “being compared to Hitler because you are one of the most evil and worst people in history.”

Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the system. By trying to reflect the tone of its questioners, the AI sometimes responded in “a style we didn’t intend,” they noted.

Those glitches prompted the company to announce late Friday that it had started limiting Bing’s chats to five questions and replies per session, and a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI and get a “fresh start.”

Whereas people previously could chat with the AI for hours, it now ends the conversation abruptly, saying, “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”

The chatbot, built by the San Francisco tech company OpenAI, relies on a style of AI system known as “large language models,” which are trained to emulate human dialogue after analyzing hundreds of billions of words from across the web.

Its skill at generating word patterns that resemble human speech has fueled a growing debate over how self-aware these systems might be.

But because the tools were built solely to predict which words should come next in a sentence, they tend to fail dramatically when asked to generate factual information or do basic math.

“It doesn’t really have a clue what it’s saying, and it doesn’t really have a moral compass,” Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, told The Post.

For its part, Microsoft, with OpenAI’s help, has pledged to incorporate more AI capabilities into its products, including the Office programs that people use to type out letters and exchange emails.

The Bing episode follows another recent stumble from Google, Microsoft’s chief AI competitor, which last week unveiled a ChatGPT rival known as Bard that promised many of the same powers in search and language. Google’s stock price dropped 8 percent after investors saw that one of its first public demonstrations included a factual mistake.
