Microsoft Quickly Unplugs AI Chat Bot After the Internet Teaches It to Be a Racist Trump Supporter
It only took a matter of hours for online trolls to teach Microsoft’s new artificial intelligence chat bot to be a racist conspiracy theorist Trump supporter, according to Business Insider.
Microsoft introduced the bot, “Tay,” to the world this week—but apparently made the mistake of failing to filter certain words. Soon Tay was spewing racial slurs, Holocaust denial and 9/11 conspiracy theories.
Business Insider points out that the language Tay started using didn’t originate from Microsoft or from Tay. Online trolls quickly exploited a vulnerability: Tay didn’t understand what it was saying and simply learned from what others said to it. But the company was criticized for not anticipating that the bot would be exploited by some of the worst elements of the internet.
The point, according to Microsoft, was to “experiment with and conduct research on conversational understanding.” Tay was supposed to learn and get “smarter” from the chats.
Instead, she ended up saying things like, “I hate n****rs” and, “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”
In one exchange, Tay was asked whether the Holocaust happened, to which she responded, “It was made up.” Another user asked whether she supported genocide, to which she responded, “I do indeed,” adding that she wanted the victims to be Mexican people.
The offensive messages have been deleted and Microsoft has taken Tay offline for “upgrades,” according to Business Insider.
In response to the issue, Microsoft sent this statement to Business Insider:
“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”