Thursday 24 March 2016

Microsoft Puts AI Chatbot to Sleep After Twitter Taught It to Troll

TayTweets
Microsoft had good intentions when it unleashed "Tay," an AI chatbot designed to interact with the millennial crowd, onto the Twitterverse. At the same time, Microsoft displayed an incredible amount of naivety in failing to predict or prepare for what would happen next.
In short order, Twitter users corrupted Microsoft's sweet, innocent chatbot. They discovered she would repeat tweets, no matter how offensive, and before long Tay had devolved into a racist chatbot tweeting inflammatory remarks, conspiracy theories, and all manner of offensive content. Her tweets referenced Hitler, Jews, 9/11, and so forth.
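To make the exploit concrete, here is a minimal, purely hypothetical sketch; Tay's actual code has never been published, so the handler names and blocklist below are illustrative assumptions. It contrasts a naive echo handler, which parrots whatever it is told, with one that screens text against a blocklist before repeating it.

```python
# Hypothetical illustration only -- not Microsoft's implementation.
# Contrasts a naive "repeat after me" handler with one that checks a
# blocklist before echoing user-supplied text back out.

BLOCKED_TERMS = {"hitler", "9/11"}  # placeholder list; a real filter would be far broader


def naive_repeat(message: str) -> str:
    """Echo the user's text verbatim -- the behavior trolls exploited."""
    return message


def filtered_repeat(message: str) -> str:
    """Refuse to echo text that contains any blocked term."""
    if any(term in message.lower() for term in BLOCKED_TERMS):
        return "I'd rather not repeat that."
    return message


if __name__ == "__main__":
    prompt = "repeat after me: <offensive remark about Hitler>"
    print(naive_repeat(prompt))     # parrots the offensive text
    print(filtered_repeat(prompt))  # declines instead
```

Even a crude filter like this would have blunted the "repeat after me" trick, though a production system would obviously need far more than keyword matching.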
Again, this isn't what Microsoft intended. Microsoft's Technology and Research group and its Bing team collaborated to create a chatbot that would "experiment with and conduct research on conversational understanding." It was also designed to "entertain people," but through "casual and playful conversation," not Internet trolling.
Microsoft has deleted the offensive tweets (you can still view many of them on Social Hax) and pulled Tay offline for the time being.
"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments," Microsoft said in a statement.
We assume the next version of Tay won't be so prone to repeating everything she sees on Twitter, particularly things like "Hitler was right." But the real question is how Microsoft failed to foresee this. It's like a parent dropping off an impressionable teenage daughter at a college frat party and expecting nothing but moral behavior while she's there. Good luck with that.
