The Guardian (USA)

Meta’s new AI chatbot can’t stop bashing Facebook

- Matthew Cantor

If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better. Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators.

BlenderBot, a prototype of Meta’s conversational AI, was launched on Friday. Photograph: Dado Ruvić/Reuters
