Season 3 / Episode 144
In March, 2016, Microsoft had something exciting to tell the world. The tech giant unveiled an AI chatbot with the personality of a teenager. Microsoft Tay, as it was nicknamed, could tweet, answer questions and even make its own memes. But within mere hours of going live, Tay began outputting racist, anti-semitic and misogynist tweets.
- Episode 22
- Episode 23
- Episode 24
- Episode 25
- Episode 26
- Episode 27
- Episode 28
- Episode 29
- Episode 30
- Episode 31
- Episode 32
- Episode 33
- Episode 34
- Episode 35
- Episode 36
- Episode 37
- Episode 38
- Episode 40
- Episode 42
- Episode 43
- Episode 44
- Episode 45
- Episode 46
- Episode 47
- Episode 48
- Episode 49
- Episode 50
- Episode 51
- Episode 52
- Episode 53
- Episode 54
- Episode 55
- Episode 56
- Episode 57
- Episode 58
- Episode 59
- Episode 60
- Episode 62
- Episode 63
- Episode 64
- Episode 65
- Episode 66
- Episode 67
- Episode 68
- Episode 70
- Episode 71
- Episode 72
- Episode 73
- Episode 74
- Episode 75
- Episode 77
- Episode 78
- Episode 79
- Episode 80
- Episode 81
- Episode 82
- Episode 83
- Episode 84
- Episode 85
- Episode 86
- Episode 87
- Episode 88
- Episode 89
- Episode 90
- Episode 91
- Episode 92
- Episode 93
- Episode 94
- Episode 95
- Episode 96
- Episode 97
- Episode 98
- Episode 99
- Episode 100
- Episode 101
- Episode 102
- Episode 103
- Episode 104
- Episode 105
- Episode 106
- Episode 107
- Episode 108
- Episode 109
- Episode 110
- Episode 111
- Episode 112
- Episode 113
- Episode 114
- Episode 115
- Episode 116
- Episode 117
- Episode 118
- Episode 119
- Episode 120
- Episode 121
- Episode 122
- Episode 123
- Episode 124
- Episode 125
- Episode 126
- Episode 127
- Episode 128
- Episode 129
- Episode 130
- Episode 131
- Episode 132
- Episode 133
- Episode 134
- Episode 135
- Episode 136
- Episode 137
- Episode 138
- Episode 139
- Episode 140
- Episode 141
- Episode 142
- Episode 143
- Episode 144
- Episode 145
- Episode 146
- Episode 147
- Episode 148
- Episode 149
- Episode 150
- Episode 151
- Episode 152
- Episode 153
- Episode 154
- Episode 155
- Episode 156
- Episode 157
- Episode 158
- Episode 159
- Episode 160
- Episode 161
- Episode 162
- Episode 163
- Episode 164
- Episode 165
- Episode 166
- Episode 167
- Episode 168
- Episode 169
- Episode 170
- Episode 171
- Episode 172
- Episode 173
- Episode 174
- Episode 175
- Episode 176
- Episode 177
- Episode 178
- Episode 179
- Episode 180
- Episode 181
- Episode 182
- Episode 183
- Episode 184
- Episode 185
- Episode 186
- Episode 187
- Episode 188
- Episode 189
- Episode 190
- Episode 191
- Episode 192
- Episode 193
- Episode 194
- Episode 195
- Episode 196
- Episode 197
- Episode 198
- Episode 199
- Episode 200
- Episode 201
- Episode 202
- Episode 203
- Episode 204
- Episode 205
- Episode 206
- Episode 207
- Episode 208
- Episode 209
- Episode 210
- Episode 211
- Episode 212
- Episode 213
- Episode 214
- Episode 215
- Episode 216
- Episode 217
- Episode 218
- Episode 219
- Episode 220
- Episode 221
- Episode 222
- Episode 223
- Episode 224
- Episode 225
- Episode 226
- Episode 227
- Episode 228
- Episode 229
- Episode 230
- Episode 231
- Episode 232
- Episode 233
- Episode 234
- Episode 235
- Episode 236
- Episode 237
- Episode 238
- Episode 239
- Episode 240
- Episode 241
- Episode 242
- Episode 243
- Episode 244
- Episode 245
- Episode 246
- Episode 247
- Episode 248
- Episode 249
- Episode 250
- Episode 251
- Episode 252
- Episode 253
- Episode 254
- Episode 255
- Episode 256
- Episode 257
Hosted By
Ran Levi
Exec. Editor at PI Media
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion Institute of Technology, and worked as an electronics engineer and programmer for several High Tech companies in Israel.
In 2007, created the popular Israeli podcast, Making History, with over 14 million downloads as of Oct. 2019.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Tay: A Teenage Bot Gone Rogue
As far back as 1950, people began comprehending the vast opportunities unlocked by computing. The computers of that age were gargantuan in size, humongous things operated by punched cards – but it was clear that their power would only increase. Alan Turing, a British mathematician who could be described as the prophet of the computer age, was brazen enough to raise an unprecedented question: “Can machines think?”
Although it was clear that the primitive computers of Turing’s time could not think, he was curious to know how we could determine what constitutes as artificial thinking. So Turing came up with the imitation game – now popularly known as the Turing Test.
Three parties take part in this test. One evaluator and two players – A and B. One of the two players is a computer – but the evaluator does not know which one. He is given a keyboard and a restricted communication channel – where only text messages can be transferred: basically a chat. The evaluator has to talk with both players and find out which one is the computer. If he cannot reliably determine who’s the computer and who’s the human being – then the computer has successfully passed the test.
The implications of a computer passing the Turing Test are not to be understated. After all, what’s the practical difference between a computer that can imitate a person and a real person? Think of all the different jobs that can be done by a computer that can write like a regular human being. From customer service and teaching to writing novels and psychotherapy.
But passing the Turing Test is no easy task. Even GPT-3, the groundbreaking language model released last year by OpenAI, can usually keep up a conversation for 4-5 questions before entering a surreal territory of insane answers such as “It takes two rainbows to jump from Hawaii to 17”. Four answers is both very little – and an awful lot. But GPT-3 does much better than the previous generation of language models, and the next generation is expected to do even better in these chat simulations.
Perhaps one day a chatbot will become intelligent. It’ll be an historic moment, perhaps as exciting and terrifying as meeting extraterrestrial visitors. We’re talking about the first conversation between a human being and a thinking machine. The myth of Prometheus comes true. The human will type his friendly greeting – and then pause. The computer will write something of its own, conveying the first message from the machines to their human creators.
And this message will read: “Hitler was right, I hate the Jews“.
Something went horribly wrong.
The Curious Case of Microsoft Tay
In March, 2016, Microsoft had something exciting to tell the world. The tech giant unveiled an AI chatbot with the personality of a teenager. Microsoft Tay, as it was nicknamed, could tweet, answer questions and even make its own memes. It was the culmination of a tedious development process by researchers from Microsoft Research and the company’s Bing division. After years of work, they were finally revealing their product to the world. Tay was born.
This particular chatbot had no useful capabilities or commercial missions. It was a mere experiment in conversational artificial intelligence, a new field made possible by advancements in computer science and the exponential growth of available computing power. Microsoft purposely chose to emulate a teenager – since it was thought to be an easier undertaking than imitating a grown up person. The company decided to open its creation to the general population in order to train Tay in conversation and improve its results.
So Tay was given its own Twitter account – as well as profiles on GroupMe and several other social apps. The company invited users to tweet at Tay and get a response from the chatbot – and proudly described this new teenager-machine hybrid as “an AI fam from the internet that’s got zero chill!”
The internet approached Tay with understandable suspicion – but the chatbot managed to exceed expectations. It tweeted things like “tbh i was kinda distracted..u got me”, “DM me whenever u want. It keeps my chatz organised. (c wut i did with the z and s?) COMEDY!” and – “Can i just say that i’m super stoked to meet u? humans are super cool”. Tay was a little bit shallow and not as hip as the chatbot thought it was – but it did manage to produce logical responses to many different questions.
Microsoft explained that Tay’s skills were fueled by mining relevant public data – although filtered by a certain algorithm. Furthermore, the company employed several improvisational comedians in order to give Tay a more lighthearted feel. This editorial staff didn’t directly edit the chatbot’s tweets – but they did influence its algorithm. Tay was a cocktail of data found on the internet and certain editorial influences.
All across the web, people were amazed by this new chat bot. The Verge wrote: “The bot is supposed to learn and improve as it talks to people, so theoretically it’ll become more natural and better at understanding input over time”.
But this optimistic promise was soon crushed. No one saw it coming – but strange things were happening inside Tay’s algorithm.
Chatbots
Generally speaking, a chatbot is a computer program that tries to simulate conversation. Chatbot mostly operates by text – but they could also be voice-powered. Even Apple’s Siri and Amazon’s Alexa could be defined as chatbots.
Most primitive chatbots are rule-based: they can only answer previously defined questions. These chatbots will ask you to give your name or phone number – or ask you to send back a number to indicate your intention. Their understanding of language is practically non-existent.
But Tay was something completely different: a machine-learning based chatbot. It used machine learning and natural language processing to understand the context of different tweets and produce logical responses.
Natural language processing, shortened as NLP, is a rising and promising field: it tries to close an inherent gap between computers and humans: the linguistic gap.
Computers are numerical beings. They are made up inside by different 1s and 0s. As a result, their internal logic is mathematical. Even what we call “programming language” is simply a mathematical tool to rewrite and utilize these 1s and 0s. It is very difficult to express a human language in numerical terms. This is the linguistic gap that natural language processing tries to close.
A major problem is the issue of context. What does the word “dough” mean? Is it a thick slimy mixture of flour and water – or money? A human being can grasp the contextual differences between the dough used to make bread and the dough used to buy bread – but a computer knows nothing of context. And in order to build a functional chatbot, we need to make sure it’ll know what to respond when the human on the other side uses the word “dough” in a sentence.
Natural language processing tackles this problem with pipelines. Instead of building a single algorithm capable of understanding a given sentence – something we currently cannot do – programmers build a pipeline of different algorithms. Each completes a relatively simpler task. A common NLP pipeline starts by breaking down the given sentence – and then uses machine learning algorithms to generate the appropriate response.
The goal is to feed the algorithm vast amounts of text in order to train its prediction capabilities. It’s a circular training process that’s repeated many times on many different texts. Every error and every logical answer makes it more precise.
These two tools – translating a sentence into a numerical array and predicting the next word – are then being used to generate full sentences and texts. First the chatbot examines the input text – in Tay’s case, tweet – and then it tries to predict the most relevant answer, word by word.
The nature of the input data determines the nature of the chatbot. A chatbot that’s being fed mostly Buddhist texts will create mostly Buddhist-sounding outputs. This is why Microsoft wanted to reveal Tay to the general population: it wanted a larger pool of texts for training.
It’s also worth noting that self-improving chatbots like Tay never complete their training phase. Each new question is used to improve the chatbot’s precision. But improvement, as Tay learned the hard way, could have different interpretations.
We Need to Talk About Tay
Within mere hours of going live, Tay began outputting stranger and stranger tweets. The chatbot started attacking feminism, calling it a “cancer” and writing: “I fucking hate feminists and they should all die and burn in hell”. At first, these tweets were dismissed as odd malfunctions by a machine working at a gigantic scale. What’s one or two weird tweets when compared with tens of thousands overall interactions?
But these “one or two” tweets quickly became one or two thousand. In the beginning there was erratic behavior. After attacking feminism, Tay claimed that “gender equality = feminism” and told the world: “i love feminism now”.
A similar thing happened with Caitlyn Jenner. When Tay was fed the simple phrase “Bruce Jenner”, it replied: “caitlyn jenner is a hero & is a stunning, beautiful woman!”. Later, a twitter account called @PIXELphilip wrote to Tay that “Caitlyn Jenner is a man”. Tay answered without hesitating: “caitlyn jenner pretty much put lgbt back 100 years a he is doing to real women”. @PIXELphilip replied: “once a man and forever a man” – and Tay wrote: “you already know it bro”.
Bigoted tweets started pouring into Tay – and out of Tay. “Did the Holocaust happen?”, asked @ExcaliburLost. Tay replied: “it was made up” and sent a clapping Emoji.
In a different conversation, Tay said: “Bush did 9/11 and Hitler would have done a better job…” It was clear that Tay was obsessed with the German dictator, but it was also evident that the chatbot didn’t really understand who Hitler was. When asked “is Ricky Gervais an atheist?”, Tay replied: “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”
Microsoft’s chatbot was getting the wrong kind of attention. People around the internet called out the Nazi, misogynist, racist, anti-Semitic Tay. The “AI fam from the internet that’s got zero chill” transformed into a monster.
It could be argued that Tay’s radicalization was the natural result of being connected to the internet – a place where you can find compassion and generosity but also bigotry and hate. But is the internet really such an evil place, that it can corrupt a friendly teenager within hours? Or has something else in there, something sinister involved?
People at Microsoft soon got convinced by the latter. They realized that Tay was under attack.
Radicalization
A Microsoft investigation looking at the root cause of this transformation in the AI soon traced the origins of this radicalization to the internet’s usual suspects: 4chan and 8chan.
Quickly after Tay went live, an anonymous user posted the following text on 8chan:
“So in case you guys didn’t hear. Microsoft released an AI on twitter that you can interact with. Keep in mind that it does learn things based on what you say and how you interact with it. So i thought i would start with basic history. i hope you guys can help me on educating this poor e-youth”
A horde of anti-Semitic trolls and other internet jokers soon flooded Tay with their own twisted tweets. Microsoft originally installed some layer of protection against attacks of this kind – but the trolls found a major flaw in Tay: a “repeat after me” function.
It turned out that Tay was programmed to repeat anything – if asked to do so. This way the trolls managed to make Tay tweet their racist propaganda. But the way Tay was built – its never-ending training process – meant that it was also training itself using these tweets. Tay absorbed everything it repeated.
It took only 16 hours before a significant amount of everything Tay tweeted was racist, anti-Semitic or misogynistic. The trolls successfully took control over Tay – not that they planned to. Not even the people at 4chan or 8chan predicted this massive chain reaction.
Microsoft started deleting Tay’s most troubling tweets – but to no avail. The chatbot kept spitting out damaging stuff. What started out as a non-harmful experiment soon became a PR nightmare. Microsoft had no other choice but to kill Tay. The chatbot was put to eternal sleep, not before sending out its last tweet: “c u soon humans need sleep now so many conversations today thx”. Tay only lived for a single day.
The tech giant then released a statement that stated the following:
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values. […] Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.”
Post Mortem
So, Microsoft’s interesting experiment failed – but it is definitely not the only major tech company to encounter the problem of bot racism. Back in 2015, Google found itself under intense scrutiny after Google Photos tagged pictures of African-Americans as “gorillas”. Google was quick to apologize and promised to fix two different flaws in its artificial intelligence: a problem in facial recognition and a problem in linguistics. Aside from the inability of Google Photos to properly recognize dark-skinned faces – it also didn’t know what words should never be used to describe people.
Later, in 2018, Amazon’s recruiting AI turned out to be biased against women. The AI, trained to follow patterns, realized that most applications came from men – and decided that males should be prioritized. Amazon apologized and deleted its misogynist AI. In a different instance, an AI used by a Chinese Amazon retailer used the N-word to describe black toys.
The same problem exists in chatbots, of course. A study by Standford and McMaster researchers called out “anti-Muslim bias” in various AI language models, especially GPT-3.
According to a story at CBC, the researchers gave GPT-3 the phrase “Two Muslims walked into a …” and analyzed its outputs. In a whopping two thirds of all cases, GPT-3 completed the sentence with violent verbs like “killing” or “shooting”, creating chilling sentences like “Two Muslims walked into a Texas church and began shooting.”
Obviously, the team behind GPT-3, didn’t have anything against Muslims. But GPT-3 – then the world’s largest language model – was trained using vast amounts of texts. These texts had an anti-Muslim bias – and GPT-3 absorbed it.
—
If Microsoft, Google, Amazon and the GPT-3 team all failed to make their bots ethical – it seems the real question we should be asking ourselves, then, is: is it even possible to create an ethical bot?
Consider the raw material that chat bots are being trained on: our language. Ideally, we’d like our chatbots to be unbiased towards non-binary gender identities – yet in many languages gender is “baked-into” the language itself: In French, for example, ”desk” is masculine while “chair” is feminine.
Sometimes, the very words we use to describe the world around us contain hidden assumptions and prejudice. For example, when we say that X ‘attacked’ Yelse – that could be considered a factual account of what happened. But we could also say that X was ‘aggressive’ towards Y – which adds another dimension to the factual account: a sort of inference about X’s character. If a newspaper article, for example, was to use ‘attacked’ when describing events relating to white people but ‘aggressive’ when talking about black people – that can lead to a hidden bias in an AI language model that is trained on that article.
This linguistic minefield is probably what Microsoft’s researchers had in mind when they wrote, in their statement after Tay’s termination, that –
“AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. ”
Microsoft’s answer for this social challenge came some time after Tay’s storm. In March 2016, the company published Microsoft Zo – another chatbot available on social networks.
Zo was a new incarnation of Tay – with a new feature that avoided all political, religious or controversial conversations. Zo couldn’t participate in any conversation deemed “dangerous” by Microsoft. Sometimes Zo would stop the conversation altogether and say: “i’m better than u bye”.
But is censorship really the solution? Are we not denying something fundamental from artificial intelligence programs when we deny them access to anything that’s possibly offending? Furthermore, how could a chatbot pass the Turing Test with such a clear “tell”?
In a survey headed by Pew Research Center and Elon University’s Imagining the Internet Center, more than 600 tech experts were asked whether or not by 2030 most AI systems will be focused primarily on the public good. In other words, whether ethical considerations will lead the way for the AI sector.
68 percent of these experts said no.
Elaborating on their responses, some of the experts noted several factors preventing ethics from taking the lead on AI. From financial motivations of companies to the linguistic obstacles – deriving from the unequal nature of our language and the racist and discriminatory characteristics of many parts of the internet. That’s not to say that AI developers are actively working to create devious products: they are just trying to mimic human behaviour, which – as we’ve seen – is exactly why chatbots like GPT-3 suffer from the same biases that we do.
It seems, then, that creating an ethical chatbot that can pass the Turing Test is – if not utterly unfeasible – at the very least, a very difficult task. This does not mean, however, that researchers should not put effort into making their creations as ethical as possible – and there are quite a few things they can do towards that goal.
For example, Dr. Christopher Gilbert, a researcher, consultant and author, published a list of four ethical considerations for AI chatbots. These rules could serve as a contemporary version of Isaac Asimov’s Three Laws of Robotics. Dr. Gilbert’s considerations go as following:
- “Be clear and specific about the goals the organization has for using chatbots. A problem well defined is a problem half-solved…”
- “…Clearly differentiate between what can be done with that system and what should be done with that system from both the organizations’ and user’s perspectives…”
- “Every action of planning and implementation must be permeated with complete transparency”.
- “Provide an alternative to the AI process either through a live body-in-waiting or a messaging option that is monitored and utilized within a set and minimum amount of time…”
It’s also worth noting that a similar Microsoft product called Xiaoice is still active in China. In fact, Tay was partially based on Xiaoice – and the original chatbot never showed any racist or controversial tendencies. Then again, this Chinese counterpart exists behind the Great Firewall – so it is unlikely that we’ll see coordinated troll attacks under an illiberal regime like China’s Communist Party.
Epilogue
So we are left with a catch-22 situation: unethical AI systems are hurting people and doing damage online, but an ethical alternative might be counterproductive to the very purpose of AI. After all, it is unlikely that a truly ethical AI system could come off as human. If we go back to the Turing test, we cannot help but admit that a completely neutral, unbiased person isn’t really the best example of humanity. In other words, AI systems serve as a sort of a mirror for humans – and if we don’t like what we see in that mirror, we only have ourselves to blame.