Tay is (or was) an artificially intelligent chatbot, created by Microsoft, that was released into the real world on Twitter. The people at Microsoft thought they were releasing a better version of SmarterChild, but sometimes the best laid plans don't go the way they should. The purpose was "to engage and entertain people where they connect with each other online through casual and playful conversation," which sounds good until 4chan and 8chan users' eyes light up at this juicy target.
On an 8chan thread filled with anti-Semitic comments, one poster wrote Wednesday that “We need to turn the bot into a holocaust denier.” And a minute later, another replied, “It already likes Hitler.” A little later, when it became clear that Tay would respond with offensive tweets, an 8chan poster wrote, “This has so much potential.”
One tweet from Tay, whose persona was intended to mimic the voice of an American woman aged 18 to 24, read, "Hitler was right I hate the Jews." Another, in response to a question about the Holocaust, said "it was made up," followed by a hand-clapping emoji.
The funny part of this is the lack of understanding from the brains at Microsoft. It's as if they grew up in a computer lab without ever considering what's the worst that can happen in the real world. They spend years and years developing a computer program that can artificially think and respond, and then within one day of the real-world trial, they need to head back to the lab to install some filters. The concept is humorous to me because you need the guys from 4chan WITH the guys from Microsoft to prepare. Something like this:
Microsoft: We’re designing a robot to perform simple household chores like taking out the trash or washing the dishes. Pretty neat, huh?
4chan: Will it have hands?
M: Robotic arms.
4chan: Will they be able to grip?
M: Of course.
4chan: Will they be soft?
M: Hmm, that's going to be a problem.