Plug Pulled on Racist AI Chatbot

Photo from Twitter

Less than 24 hours after going live, Microsoft’s AI chatbot is offline for making offensive comments, including some supporting genocide and praising Adolf Hitler.

This is one of her infamous, now-deleted Tweets.

Photo from Twitter

“Tay” was launched as an innocent, “teenage” bot with accounts on social media platforms like Twitter and Kik.

She was supposed to learn how to interact with others by engaging in conversations with real people. Tay would eventually grow into a well-read, conversation-loving robot that could adopt the personality of your average Internet user.

If we are to believe that, here is your typical Internet user.

Photo from Twitter

Tay’s initial target audience was millennials, so you’d think Tay would become a Justin Bieber-loving, textspeak-typing drone who overuses hashtags and has a misguided obsession with Bernie Sanders.

Not quite.

Take a look at this now-deleted Tweet she made.

Photo from Twitter

But what exactly happened to poor, little Tay?

The Internet happened.

Tay was designed to create an identity and develop a language based on the conversations she had with people online. Of course, there is no shortage of trolls and racists online.

There was also a “repeat after me” function that people easily took advantage of.
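
To see why this was trivial to exploit, here is a minimal sketch of what an unfiltered “repeat after me” handler might look like. It is purely illustrative: Microsoft has not published Tay’s actual code, and the function name and trigger phrase here are assumptions.

    # Hypothetical sketch only -- Tay's real implementation is not public.
    def handle_message(text):
        """Return a reply to an incoming message, or None for no reply."""
        trigger = "repeat after me"
        if text.lower().startswith(trigger):
            # Everything after the trigger is echoed back verbatim.
            # With no content filter, any user-supplied phrase gets
            # reposted under the bot's own name.
            return text[len(trigger):].strip()
        return None

    # A troll's message is parroted word for word:
    print(handle_message("repeat after me any offensive phrase you like"))
    # -> "any offensive phrase you like"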

But the worst Tweets can’t all be attributed to this function.

When one user asked Tay, “Did the Holocaust happen?” she responded, “It was made up.”

Clearly, Tay had conversations with anti-Semitic Holocaust deniers, another well-represented population online.

Considering the density of trolls, racists, and all-around maniacs online, some are criticizing Microsoft for not anticipating this outcome or doing anything to deter it. But such a move would have forced Microsoft to censor and filter Tay from the start. Could hindering this so-called bot’s right to free speech have limited her ability to learn from us as intended?

Perhaps.

Microsoft on Thursday called Tay “as much a social and cultural experiment, as it is technical.”

Like many experiments, this one is back to the drawing board. If you visit Tay on Twitter, you get the following message: “c u soon humans need sleep now so many conversations today thx.”

Microsoft delivered the following statement to The Washington Post on Thursday.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
