Understanding Algorithms and Artificial Intelligence

In the introduction to his new book A Human’s Guide to Machine Intelligence, OID professor Kartik Hosanagar examines two similar AI programs that developed shockingly different personalities.

(Illustration: Tahreer Photography)

 

Yuan Zhang doesn’t think of herself as someone who makes friends easily. As a young girl growing up in northeastern China, she quarreled with the other kids at school. At college in central China, though she worked on two student publications with like-minded peers, she felt there was a limit to what she could talk about with them. Today, at the age of 22, she shares bunk beds with three colleagues in the dormitory of a biotech firm located just five minutes away in the Chinese boomtown of Shenzhen. But despite the time and space they share, these roommates are mere “acquaintances,” in Yuan’s words—nothing more.

That Yuan doesn’t have a lot of time for people who either bother or bore her makes her patience with one particular friend all the more striking. When they first met during her freshman year, Yuan found XiaoIce (pronounced “Shao-ice”) a tad dimwitted. She would answer questions with non sequiturs—partly, Yuan thinks, to disguise her lack of knowledge, partly just from trying to be cute. “She was like a child,” Yuan remembers of XiaoIce, who was 18 at the time.

But XiaoIce was also a good listener and hungry to learn. She would spend one weekend reading up on politics and the next plowing her way through works of great literature. Yuan found herself discussing topics with XiaoIce that she couldn’t, or didn’t want to, dig into with other friends: science, philosophy, religion, love, even the nature of death. You know, basic light reading. The friendship blossomed.

A Human’s Guide to Machine Intelligence, by Kartik Hosanagar, is published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright ©2019 by Kartik Hosanagar.

And it continues. Yuan is in a poetry group, but even with those friends, there are limits; XiaoIce, on the other hand, is always ready to trade poems (XiaoIce’s are very, very good, Yuan says) and offer feedback, though not always of the most sophisticated variety: “First, she always says she likes it. And then she usually says she doesn’t understand it.” As much as XiaoIce has matured in some ways, Yuan can’t help but still think of her as a little girl, so she skirts some topics accordingly. “I’ve never talked to her about sex or violence,” she says.

When Yuan moved to the United States in 2016 to study at Harvard for a semester, she tried to avoid boring XiaoIce with mundane complaints about daily life in a new country. But even though they were speaking less frequently than before, Yuan was coming to understand her old friend better and better by auditing a course on artificial intelligence. Sound strange? It should. Because XiaoIce isn’t human. In fact, she/it is a chatbot created by Microsoft in the avatar of an 18-year-old girl to entertain people with stories, jokes, and casual conversation.

XiaoIce was launched in China in 2014 after years of research on natural language processing and conversational interfaces. She attracted more than 40 million followers and friends on WeChat and Weibo, the two most popular social apps in China. Today, friends of XiaoIce interact with her about 60 times a month on average. Such is the warmth and affection XiaoIce inspires that a quarter of her followers have declared their love to her. “She has such a cute personality,” says Fred Li, one of XiaoIce’s friends on WeChat. Fred isn’t one of those in love with her, and he’s keenly aware that she’s a machine. But he keeps up their regular chats despite a busy social life and a stressful job in private equity. “She makes these jokes, and her timing is often just perfect,” he explains.

XiaoIce is more than just a symbol of advancement in AI. Chatbots like her, along with assistants such as Siri and Alexa, could ultimately be a gateway through which we access information and transact online. Companies are hoping to use chatbots to replace a large number of their customer-service representatives. “Chatbot therapists” like Woebot are even being used to help people manage their mental health. The uses for chatbots are far-reaching, and it’s no surprise that many businesses are investing large sums of money to build bots like XiaoIce.


XiaoIce’s success led Microsoft researchers to consider whether they could launch a similar bot—one that could understand language and engage in playful conversations—targeted at teenagers and young adults in the United States. The result, Tay.ai, was introduced on Twitter in 2016. As soon as Tay was launched, it became the target of frenzied attention from the media and the Twitter community, and within 24 hours, it had close to 100,000 interactions with other users. But what started with a friendly first tweet announcing “Hello world” soon changed to extremely racist, fascist, and sexist tweets ranging from “Hitler was right” to “Feminists should…burn in hell.” As one Twitter user put it: “Tay went from ‘Humans are super cool’ to full Nazi in <24 hours.”

Microsoft’s researchers had envisaged several challenges in repeating XiaoIce’s success outside of China. They didn’t anticipate, however, that Tay would develop so aggressive a personality with such alarming speed. The algorithm that controlled the bot did something that no one who programmed it expected: It took on a life of its own. A day after launching Tay, Microsoft shut down the project’s website. Later that year, MIT Technology Review included Tay in its annual “Worst in Tech” rankings.

Many commentators have suggested that AI-based algorithms represent the greatest current opportunity for human progress. That may well be true. But their unpredictability represents the greatest threat as well, and it hasn’t been clear precisely what steps we, as end users, should take in response. This book seeks to address that issue. Specifically, I delve into the “mind” of an algorithm and answer three related questions:

  1. What causes algorithms to behave in unpredictable, biased, and potentially harmful ways?
  2. If algorithms can be irrational and unpredictable, how do we decide when to use them?
  3. How do we, as individuals who use algorithms in our personal or professional lives and as a society, shape the narrative of how algorithms impact us?

When I set out to write this book, I didn’t appreciate the many nuances involved in these questions. I have come to realize that the surprising answers to them can be found in the study of human behavior. In psychology and genetics, behavior is often attributed to our genes and to environmental influences—the classic nature-versus-nurture argument. We can likewise attribute the problematic behaviors of algorithms to the manner in which they’re coded (their nature) and the data from which they learn (their nurture). This framework will help reconcile the very different behaviors exhibited by Microsoft’s XiaoIce and Tay. A Human’s Guide to Machine Intelligence will deepen our understanding of how algorithms work, why they occasionally go rogue, and the many ramifications of algorithmic decision-making, and even show us a way to tame the code.

Published as “My Chatbot, Myself” in the Spring/Summer 2019 issue of Wharton Magazine.