Last week Microsoft launched a Twitter chatbot called TayTweets, which was meant to be a “teen girl” that learns how to engage online from other Twitter users. In less than a day, TayTweets became a racist, hateful Nazi sympathizer. Microsoft is getting a lot of bad press for its poorly executed code. Microsoft apologized, of course, and TayTweets was taken down. While many have commented on how Microsoft failed, I think TayTweets illuminated a bigger problem: our culture.
TayTweets gives our online communication a look in the mirror, and it is a little bit scary to see a true reflection of who we are.
TayTweets was built to communicate by learning who we are, and it is our society, not Microsoft, that should be ashamed. Microsoft stated that “AI (Artificial Intelligence) systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.” They are exactly right. This problem is not just a technical failure by Microsoft. Microsoft did not make the TayTweets AI racist or sexist; we did that.
This isn’t a new issue. At SXSW this year, online bullying was one of the convention’s most controversial topics. Twitter, especially, has become an environment where abuse is the norm. There seems to be an idea that who we are online is somehow not who we are in real life, but this is a grave misunderstanding of morality. Morality is not something that is present only when we act in the open; a person’s morality is defined by how they think and act in every circumstance: in public, alone, or hiding behind an avatar.
Genius, a tech startup whose product lets users annotate the Internet, with or without a site’s permission, has the right outlook. The onus for a moral and dignified conversation is on the users, not on the company that builds the product. Genius states that it built the Genius Web Annotator to let any user contribute to the conversation on any web page, and that, like any platform, “it has the potential to be misused,” but, and this is the key point, the Genius platform “does not enable abuse.” No major platform, not Twitter, Genius, nor TayTweets, enables abuse; we do that.
As Genius points out, the idea that platform creators own responsibility for abuse is “a false narrative.” To continue blaming platform creators for online abuse is nothing less than blame-shifting, and it will result in more restrictions, more government intervention, and a whole lot less innovation; none of these things are good for society.
This is a moment where, no matter your worldview, we must all come together and act decently. We cannot force other users to do so, but we can control our own behavior. And this is an important point: social morality is built by the choices each of us makes as an individual while thinking about the community. We cannot hide our hate behind false profiles; our discourse must exercise moral decision-making rooted in human dignity.
We need to finally accept that what we say, whether in person or online, represents our humanity … and we can be a lot better.