How to Avoid Falling in Love with a Chatbot

Robert Epstein is an eminent psychologist and former editor in chief of Psychology Today. Back in 2006, he had a months-long online affair with a woman named Evana. The only problem: Evana turned out to be a computer program. He’s not alone. These kinds of computer programs, commonly called “chatbots,” ensnare smart, knowledgeable people all the time.

So, have “thinking machines” become a reality? Well…

Today’s computers have not quite reached the level of sophistication of colorful thinking machines like the aptly named Robot from Lost in Space or the beer-guzzling Bender from Futurama. But they’ve come a long way!

In 1950, the mathematician Alan Turing devised what we now call the “Turing Test,” which is meant to establish whether a computer can be considered intelligent. It’s pretty simple, really: in a blind conversation with a computer and a human, can you tell which is which? Within a couple of decades, computer programs were sophisticated enough to compete in such tests.

By the mid-1960s, computer scientist Joseph Weizenbaum created a program called ELIZA. It was one of the earliest programs designed to parse human speech and react to it. This kind of “natural language processing” is the basis for most chatbots. By today’s standards, ELIZA was pretty simple, which you can see if you have a conversation with it yourself. But it was a start.

In the 50 years since ELIZA, chatbots have become far more advanced. At this point, chatbots even learn from the humans they talk to. For example, Cleverbot basically responds as it has been responded to under similar circumstances in the past. When it was first created back in 1997, it didn’t have much of a database to draw upon. But since then, it has had over 200 million conversations. This has caused it to develop an apparently irreverent, some would say annoying, personality.
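Cleverbot’s actual engine is proprietary, but the idea of “responding as it has been responded to” can be sketched as simple retrieval: log the replies humans gave to past prompts, then reuse the reply attached to the most similar prompt. Here’s a toy illustration in Python; the class and the similarity measure are invented for the example:

```python
def similarity(a, b):
    """Jaccard word overlap between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class RetrievalBot:
    def __init__(self):
        self.memory = []  # (prompt, human_reply) pairs from past chats

    def learn(self, prompt, reply):
        """Record how a human replied to a given prompt."""
        self.memory.append((prompt, reply))

    def respond(self, prompt):
        if not self.memory:
            return "Hello!"
        # Reply the way humans replied to the most similar past prompt.
        best_prompt, best_reply = max(
            self.memory, key=lambda pair: similarity(prompt, pair[0])
        )
        return best_reply

bot = RetrievalBot()
bot.learn("Scaramouche, scaramouche", "will you do the fandango")
bot.learn("how are you", "fine, thanks")
print(bot.respond("Scaramouche, scaramouche?"))  # will you do the fandango
```

With 200 million logged conversations instead of two, this kind of lookup starts to sound eerily human, quirks and all.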

Another chatbot, Mitsuku, has a more normal personality. Regardless, it’s not hard to see how people might mistake either of these chatbots (or the many others like them) for actual human beings — under the right circumstances.

Check out the whole history of chatbots, the current state of the art, and how you can avoid falling for their deceptive charms.

Chatbots: How to Avoid Falling in Love with an AI

In the film Iron Man, Tony Stark has conversations with JARVIS, a computer program capable of carrying on a conversation in spoken English. This is a very common thing in films and novels, but real-life equivalents like Apple’s Siri aren’t quite so impressive. Even when it comes to purely textual communication, it’s difficult for programs to fool us into thinking they’re human. Let’s take a closer look at the first communicating programs, the progress they’ve made so far, and what bots may do in the future.

The Turing Test and Beyond

  • The Turing Test was developed in 1950 by Alan Turing, a noted computer scientist, as a way of deciding whether a computer could be considered intelligent
    • Turing estimated that a program would be able to pass his test in roughly 50 years
  • The basic Turing Test is performed blind: several human judges communicate with a number of participants through a text-only channel
    • One of the participants is actually a program
      • This type of program is usually called a “chatterbot,” or “chatbot” for short
      • Depending on how the test is set up, the human judges may or may not know that one of the participants isn’t human
    • The judges have five minutes each to communicate with each “person”
      • The judges do so by asking the participants questions
    • If the program fools 33% of the judges into thinking it’s a real person, it passes
  • More difficult Turing tests have been developed since the original, including:
    • The Chatterbox Challenge (CBC):
      • Began in 2001 as an annual contest for chatbots all over the world
      • What makes it different from other chatbot contests is that there are minimal restrictions on the type of technology used to create the bot
      • Independent judges ask a series of questions and evaluate the answers
      • The top eight bots move to the final round
      • Many chatbots that have competed in this contest have become foundations for commercial technologies
    • The Loebner Prize:
      • A human and a chatbot have to communicate through a text-only channel with four judges for 25 minutes each
      • Awards
        • Bronze medal: for being the most human-like
        • Silver medal: for convincing half the judges it’s human
        • Gold medal: for being indistinguishable from a human and successfully passing an audio-visual test:
          • In this sort of test, the chatbot would have to be able not only to “understand” textual input, but also audio and visual input as well, such as describing an image or spoken conversation
        • Only the Bronze medal has ever been awarded
          • Most years none of the chatbots fool any of the judges
    • The Kurzweil/Kapor Test
      • Three judges observe as one program and three humans engage in a two-hour conversation
      • The chatbot wins if it convinces two judges it’s human and is ranked as “more human” on average than the other human contestants
      • Ray Kurzweil, a computer scientist, has bet $20,000 that a computer will be able to pass this test by 2029, while Mitchell Kapor, cofounder of the EFF, bets computers will not be able to do so
        • The challenge started in 2002
        • No chatbot has passed it as of 2015

How Do Chatbots Work?

  • Early chatbots would scan human input and use a simple set of tools to respond:
    • Variations on what was just said
    • Pre-written responses to particular keywords
    • Phrases to reorient the conversation
  • For example, ELIZA was one of the first chatbots
    • It was created by Joseph Weizenbaum in the mid-1960s
    • One of its modes was a Rogerian psychotherapist, which performed nondirective therapy
    • It mostly repeated back what it was told. But it knew to look for important keywords:
      • Patient: I don’t feel well.
      • ELIZA: You say you don’t feel well.
      • Patient: Yes, my mother used to beat me.
      • ELIZA: Tell me about your mother.
    • If it couldn’t parse what the user wrote, it would respond with something generic, like “Please go on” or “What does that suggest to you?”
  • Modern chatbots are able to “remember” what other users have said to them
    • This allows them to respond more like humans to future users, based on situations they encountered with past users
      • Esther Inglis-Arkell at io9 gives the example of a user typing “Scaramouche, scaramouche” and a chatbot responding by finishing the Queen lyric: “will you do the fandango.”
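The keyword-and-fallback mechanism shown in the dialogue above can be sketched in a few lines of Python. This is a minimal illustration, not Weizenbaum’s actual script; the rules and templates here are invented for the example:

```python
import re

# A few ELIZA-style rules: (keyword pattern, response template).
# Illustrative only; the real ELIZA used a much larger script.
RULES = [
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me about your {0}."),
    (re.compile(r"\bi don'?t (.+)", re.I), "You say you don't {0}."),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
]

# Generic prompts used when no keyword matches.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(text, turn=0):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            fragment = match.group(1).rstrip(".!?")  # drop trailing punctuation
            return template.format(fragment)
    # Can't parse the input: cycle through the generic prompts.
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I don't feel well."))               # You say you don't feel well.
print(respond("Yes, my mother used to beat me."))  # Tell me about your mother.
```

A handful of patterns like these is enough to reproduce the whole “therapy session” above, which is exactly why ELIZA felt so uncanny for its size.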

When PARRY Met ELIZA

  • Almost a decade after ELIZA came PARRY — a paranoid schizophrenic chatbot
    • Developed in 1972 by psychiatrist Kenneth Colby, it was a much more advanced program than ELIZA
      • It was based upon models and empirical data about schizophrenics
      • It passed the Turing Test when presented to a group of psychiatrists
  • PARRY Meets ELIZA
    • It was obvious that PARRY and ELIZA should chat, and they did so on a number of occasions
    • In early 1973, computer scientist Vint Cerf connected ELIZA and PARRY through ARPANET to have a conversation with each other
    • Here is an example of some of their dialogue:
      • PARRY: It bothers me just to be around people in general.
      • ELIZA: Please go on.
      • PARRY: I went to the track at Bay Meadows a while back.
      • ELIZA: What does that suggest to you?
      • PARRY: I went to the races.

Chatbots Get Serious

  • The better that chatbots get, the more useful — and potentially dangerous — they get
  • Lolita
    • Negobot, AKA Lolita, is a chatbot designed to act like a naive 14-year-old girl
    • Designed by Spanish researchers, it works as a pedophile lure
    • It uses game theory and machine learning to respond to various inputs according to different “levels” of interest
      • The program can even pick up old topics of conversation
    • Spanish police hope to incorporate it into undercover operations, employing the program to identify possible pedophiles
  • Jeff_ebooks
    • One day, the Twitter account of Dutch web developer Jeffry van der Goot sent out the message, “I seriously want to kill people.”
    • The police were understandably concerned, so they went to talk to him.
    • The problem was that he hadn’t sent out the tweet; it was a chatbot he had created, jeff_ebooks
      • It was designed to post random tweets composed of van der Goot’s old tweets
    • The legal issues raised by van der Goot’s situation include:
      • Can a bot’s message be considered a “true threat”?
        • Freedom of speech in most places does not extend to true threats
      • If a bot’s message is considered a true threat, who is responsible for that message: the bot owner or designer?
    • Dutch police asked van der Goot to take down the tweet
      • He complied, shaken up that his Twitter bot had attracted police attention
      • He reported that they told him he was responsible for what the bot said
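Bots like jeff_ebooks typically remix an account’s old posts with a Markov chain: record which words follow which, then walk the chain at random. The bot’s actual code isn’t described here, so this is a hedged sketch with invented sample tweets:

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words that have followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=12, seed=None):
    """Random-walk the chain to produce a tweet-like string."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and chain.get(words[-1]):
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

old_tweets = [
    "i really want to ship this feature",
    "i seriously need more coffee",
    "people keep asking me to ship it faster",
]
chain = build_chain(old_tweets)
print(generate(chain, "i", seed=42))
```

Because the walk stitches together fragments of real posts, the output usually sounds like its author; occasionally, as van der Goot learned, it stitches together something alarming.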
  • Sgt. Star
    • An acronym for “Strong, Trained And Ready,” Sgt. Star is a chatbot designed for and used by the U.S. Army
      • Sgt. Star was developed from similar programs used by the FBI and CIA
    • Sgt. Star was deployed in 2003 after the 9/11 attacks caused a surge in traffic on the Army’s website
      • Sgt. Star serves as a computer-assisted FAQ page, answering user questions about enlistment, recruitment, and more
    • Scholars warn that bots like Sgt. Star are not only able to gather private information about people, but that it’s not known:
      • How that recorded information is protected
      • What happens to recorded conversations once they’re declared no longer relevant
      • What happens if bots flag innocuous speech as threatening or illicit

Chatbots Get Silly

  • Chatbots add an interactive element to games, toys and other entertaining activities:
    • The Watson Project
      • Created by IBM’s Dave Ferrucci, a 53-year-old computer scientist from the Bronx who joined IBM full-time in 1994 after finishing his doctorate at Rensselaer Polytechnic Institute
      • Watson won Jeopardy! in 2011 in a man vs. machine contest:
        • It was a special multipart broadcast against Jeopardy! champions Ken Jennings and Brad Rutter
        • According to IBM’s calculations, Watson had a 70 percent chance of winning
        • Watson announced it would wager precisely $6,435 on a Daily Double clue, and Trebek, dazed by the specificity, replied, “I won’t ask.”
          • Watson won over $77,000, three times more than either Jennings or Rutter
        • The Watson program:
          • Wrote a cookbook
          • Trained in molecular biology and finance
          • Has been put to work in oil exploration
          • Has been applied in 75 industries in 17 countries
          • Is being used by tens of thousands of people in their own research and work
          • Wired magazine predicts Watson will soon be the world’s best medical diagnostician
    • Tinder Meets Ava
      • As part of a viral marketing campaign for the movie Ex Machina, marketers created a fake Tinder account, Ava:
        • They used a photo of the star of the movie to create a cross between ELIZA and the artificial intelligence (AI) character in the movie
        • Ava sent her suitors to an Instagram page where they discovered she was a fake
        • The profile has since been removed; however, some people who matched with Ava won prizes like tickets to the movie’s premiere

How to Know You’re Speaking With a Bot

  • Chatbots are growing more and more lifelike, and with bots being responsible for 61.5% of the web’s traffic, learning to spot a bot has become an important skill
  • Here are a few ways to tell if your conversation partner isn’t human:
    • Notice classic chatbot behavior:
      • Do you have to initiate every reply?
        • Human conversations don’t always go back and forth equally
        • Sometimes one partner has a lot to say while the other listens
        • Chatbots tend to wait for you to write something before replying
      • Does it fail to understand highbrow or lowbrow language?
        • Chatbots tend to find complex speech difficult to understand
        • Similarly, many chatbots don’t understand slang either
      • Does it have a short memory?
        • Chatbots tend to repeat themselves during conversations
        • They may:
          • Ask the same question
          • Give different answers to the same question
          • Drastically change the subject (having “forgotten” the topic)
      • Does it respond with non sequiturs?
        • When asked if it read the new John Grisham novel, a chatbot might respond, “I prefer ice cream!”
          • The chatbot isn’t making a clever insult; it just isn’t able to process what you just said
    • Ask about important topics, people, or current events
      • Chatbots can only speak about whatever their programmers put into their memory
      • A person can probably identify Elvis, a popular superhero movie, or a global crisis
        • Chatbots have more trouble with these topics
    • Ask about how the conversation has been going so far
      • Humans can guess at what another human is thinking
        • They base their guess on what they’re thinking
      • A chatbot is unable to mirror another person’s thoughts because it is not a person
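The “short memory” check can even be partially automated: count how often your conversation partner repeats itself verbatim. A rough heuristic sketch, where the 30% cutoff is an arbitrary assumption, not an established threshold:

```python
def repetition_score(replies):
    """Fraction of replies that repeat an earlier reply verbatim."""
    seen = set()
    repeats = 0
    for reply in replies:
        key = reply.strip().lower()
        if key in seen:
            repeats += 1
        seen.add(key)
    return repeats / len(replies) if replies else 0.0

def looks_like_a_bot(replies, threshold=0.3):
    # Arbitrary cutoff: flag partners that repeat 30%+ of their lines.
    return repetition_score(replies) >= threshold

chat = ["Hi!", "I prefer ice cream!", "What is your name?",
        "I prefer ice cream!", "What is your name?"]
print(looks_like_a_bot(chat))  # True (2 of 5 replies are repeats)
```

No single heuristic is conclusive; humans repeat themselves too. Combine it with the topical and mirroring questions above before passing judgment.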

While chatbots may have come from humble origins, it’s clear that they’re growing in complexity all the time. Much of modern communication happens through text messages, email, and online chat, and the ability of chatbots to reach us through these media worries people. How do we tell if the “person” on the other end of that IM account is a human or a chatbot? Some, like Elon Musk, Stephen Hawking, and Bill Gates, have even expressed concerns that artificial intelligence may spell the end of the human race. Chatbots may seem harmless now, but who knows what they’ll be able to do in the future?

Chatbots are not quite as human-like as the things we see in the movies, but the top ones often fool people into thinking they are human.
