An artificially intelligent “chatbot” has quickly picked up some of humanity’s worst traits.
Microsoft’s Tay bot was taken offline less than 24 hours after its launch because it was spewing racist and sexist language.
Tay, according to Microsoft, was an attempt to engage millennials ages 18 to 24. “The more you chat with Tay, the smarter she gets, so the experience can be more personalized for you,” the company said in a recent post.
The idea was that the bot would generate humorous responses based on the tweets it received and on messages sent through the messaging apps Kik and GroupMe.
But trolls began engaging Tay almost as soon as it launched Wednesday, and the bot began repeating some of their racist comments in its own conversations.
The bot’s tweets got so bad that one newspaper dubbed Tay the “Hitler-loving sex robot.”
Computer scientist Kris Hammond told the Associated Press, “I can’t believe they didn’t see this coming.”
"Everyone keeps saying that Tay learned this or that it became racist," Hammond added. "It didn't." The program most likely reflected things it was told, probably more than once, by people who decided to see what would happen, he said.
Tay tweeted roughly 96,000 times before Microsoft took it down.
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft said in a statement.
According to the Associated Press, Microsoft said it is “making adjustments” to Tay but could not say if or when the bot might return. Most of Tay’s tweets had been deleted by Thursday.
Its last tweet read, “c u soon humans need sleep now so many conversations today thx.”