Toronto Star

Users turn Microsoft AI into a racist, sexist maniac

- JING CAO

Microsoft is in damage control mode after Twitter users exploited its new artificial intelligence chat bot, teaching it to spew racist, sexist and offensive remarks.

The company introduced Tay earlier this week to chat with real humans on Twitter and other messaging platforms. The bot learns by parroting comments and then generating its own answers and statements based on all of its interactions.

It was supposed to emulate the casual speech of a stereotypical millennial. Internet users quickly took advantage, testing how far they could push Tay.

The worst tweets are quickly disappearing from Twitter, and Tay itself has now also gone offline “to absorb it all.”

Some Twitter users are asking why the company didn’t build filters to prevent Tay from discussing certain topics, such as the Holocaust.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement. “It is as much a social and cultural experiment as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

The bot was targeted at 18- to 24-year-olds in the U.S. and meant to entertain and engage people through casual and playful conversation, according to Microsoft’s website.

In less than a day, Twitter’s denizens realized Tay didn’t really know what it was talking about and that it was easy to get the bot to make inappropriate comments on any taboo subject. People got Tay to deny the Holocaust, call for genocide and lynching, equate feminism to cancer and stump for Adolf Hitler.
