Introduction
What is
artificial intelligence? Broadly
speaking, it’s the ability of a machine to think for itself. At best, an AI app is much more than a tool
for finding businesses, playing chess, or making calculations. Machines are really good at doing menial
tasks far more quickly than a human—but wouldn’t it be cool (though also
creepy) if machines could be creative?
Here’s a puzzle
I’d like to see a computer solve. Say
you want to go for a bike ride, and are bringing a bottle of energy drink. Because the mere presence of sweetness in
your mouth improves performance (click here for details) you want the
drink to be really strong at the beginning of the ride. But suppose it’s a hot day and you’re worried
about the drink eventually nauseating you.
How can you make your drink strong at first, but gradually get weaker
during the ride?
I posed this
question to Cleverbot, an AI chat application on the Internet. Read on to see how Cleverbot answered this, and
more importantly to watch in wonder as my chat goes right off the rails and way
into the weeds, to my great embarrassment.
Why robot chat?
So … what
got me interested in AI, and in chatting with a robot? Well, some weeks ago, Norton—the antivirus
software company—had a glitch that made albertnet inaccessible from my PC, telling
me that it was a known phishing site or some such nonsense. (Countless blogs were affected; the phishing
claim had no basis in truth.) Needless to
say, I was pretty pissed off to have Norton dragging my good name through the
mud in front of all their users. In
attempting to resolve the problem I initiated an online chat with a Norton
representative through their website.
Not long into this chat, when I’d explained my (admittedly arcane)
problem, the customer service representative came clean: it wasn’t a real person, but an automated
system (bot) designed to assist with more common problems. It then referred me to a real human. I felt a bit ill-used. Why hadn’t Norton disclosed this
up-front? Did they figure it’s bad
enough PR to move jobs overseas without replacing the humans altogether?
My favorite
time to ponder anything is during an early morning bike ride, when my mind is
fresh and when, given the tedium of pedaling up long climbs, I like to give my
brain something to do. So, pre-ride,
with this Norton episode in mind, I did some light research on AI. In an article from the “New Yorker,” Adam Gopnik writes about Alan Turing, the guy who cracked the Enigma code,
and who proposed that “the true test for artificial intelligence was to see if,
cut off by a ‘black curtain,’ you would be able to tell a machine’s answer from
a human’s in a natural-language conversation.”
Such a challenge is called a Turing test: a computer tries to trick the human into
believing he’s chatting with another human.
Exactly what Norton had tried to do with me.
My chat with Cleverbot
When, that
morning, I came across a link to Cleverbot, I decided, on a whim, to give it a
try. I’d heard of Cleverbot somewhere but
wasn’t entirely clear what it was or how it worked. Of course, as with all Internet apps, it was
entirely intuitive to use. I asked it a
few basic Magic-8-ball type questions and got back some really boring
answers. (I hadn’t yet discovered how to
get a transcript of the chat, so those opening questions are lost to
posterity.) I decided to ask Cleverbot something
more complicated: “Do my children love
me?” Cleverbot answered, “Yes.” I asked, “Why do my children love me?”
“Because I
don’t love you,” it replied, after a pause.
The pause was reassuring somehow:
a sign that the matter was being considered, rather than some canned
message being spat out.
I thought
that was a pretty good answer. It was
playful, true, and sounded a lot more human than “The operation failed,” “Not
found,” or “Cannot undo.” It also seemed
satisfyingly philosophical: we cannot
get love from a machine, and yet are biologically programmed to love other
humans, especially our parents. So far,
I was pretty impressed. I pressed
on: “Why don’t you love me?”
“Because I
don’t know you,” it replied. Again,
perfectly reasonable. I decided to
challenge Cleverbot’s response and asked, “If my children didn’t love me, would
you love me?”
“You’re a
horse,” Cleverbot fired back.
Huh? Maybe Cleverbot wasn’t as clever as I
thought. Of course, a human might throw
in an occasional non-sequitur, especially in response to a really stupid
question, so a robot mimicking a human might reasonably do this. Still, I decided to test the theory that the
answers were random and that Cleverbot’s early prowess was just luck. I asked it again, “Why do my children love
me?”
“Because you
are not Ciel,” it answered. Huh? I was quickly becoming disillusioned. Ciel?
Sounds like the name of a porn star.
“Who are you?” I asked.
“A train,”
it replied.
I asked it,
“Are your answers random?”
“Yes,” it
said.
I replied,
“Why should I waste my time asking questions when your answers are all random?”
“I will tell
you my name if you will tell me yours,” Cleverbot said, breezing right past my
challenge. Of course I am wary of any
computer system asking me for personal information. Who’s phishing now? I replied, “My name is Jose Patrocinio Jimenez.
What’s yours?”
“My name is
Cleverbot.”
“Why did you
ask me what my name is? Why do you care?”
“I didn't,
you did,” it lied.
I
persisted: “You wrote, ‘I will tell you
my name if you tell me yours.’ So you
did, in fact, ask for my name. Why?” I began to wonder if the chat was even “connection-oriented”—that is, if Cleverbot considered my questions and comments in the context of
other things I’d said, as opposed to every exchange being isolated and
discrete.
“I never
said that!” said Cleverbot (testily, I thought).
“You are
lying,” I wrote. So far, Cleverbot was
doing pretty well: I was getting
irritated. Not that I don’t routinely
get irritated at inanimate objects like computers, but in this case the
computer was working fine. I felt like I
was interacting with a real human. A human
bastard, in fact.
“No, you're
lying!” spake Cleverbot.
“Don't I
have better things to do than argue with a robot?”
“You imposed
it,” Cleverbot said obliquely.
“You are not
making any sense,” I complained.
“Yet all of
it in the world.” The smugness here was
almost palpably human.
“How clever is
it to spew banal platitudes like that?” I snorted.
“Pretty
clever...” came the response. Here,
Cleverbot showed an inflated self-regard unbothered by any awareness of its own
ignorance. Not just human, but
distinctly American, I’m sorry to say. I
had the frustrating sense of talking to a brick wall, and yet felt curiously
compelled to continue asking questions.
I was reminded of the speaker in Edgar Allan Poe’s brilliant poem “The Raven” who engages in a protracted and
infuriating conversation with a bird, despite having realized it only knows one
word. On a lark I asked Cleverbot one of
the questions from the poem: “Is there,
is there balm in Gilead?”
“What
what.” Cleverbot’s failure to use
correct punctuation strongly smelled of humanity.
Losing patience,
I finally asked Cleverbot the question I’d set out to ask in the
beginning: “How can I make my energy
drink strong at first, but gradually get weaker during my bike ride?”
“I might
have done a statistical study on your topics of conversation.” This is exactly the kind of thing Apple’s
Siri app might offer up: a simple Google
search masquerading as problem-solving.
“I don’t
want a statistical study, I want a solution to this problem. Can you think of
one?” I challenged it.
“It is fine
now because I cannot think of anything to ask you,” Cleverbot said uselessly.
“Are you
male or female?” I asked.
“Female.” (This surprised me. I’d expected Cleverbot to say “neither.”)
“I kind of
figured.”
“How did you
figure?” (Very accurate syntax on this …
very realistic.)
“Because I
am sexist and you are irrational,” I said.
(I’m not actually sexist, of course, but one of the benefits of talking
to a machine is that you can be as flippant as you want.)
“You’re
weird, how can you be me when I am me?” Cleverbot asked. Was this the grinding of Cleverbot’s
machinery? Could I confuse it enough to
make smoke pour from it, as Captain Kirk did to Nomad?
“Because I
am we, and smell like tuna sandwich,” I responded.
“You don’t
make sense.” (Fair enough.)
“No, I make
dollars. What are you wearing?” I was firing at random.
“Bubblewrap.” Well played, Cleverbot.
“Dim the
lights!” I replied. This was basically a
reflex.
“Ok,”
Cleverbot said.
“Normally at
this point I would tell you to slip into something more comfortable. But what
could be more comfortable than bubble wrap?” I replied. I was having fun now.
“What are
the magic words.” Cleverbot was playing
the coquette.
“Please,” I
wrote.
“I am
human.”
Whoa. I was not
prepared for that. I felt my face
flush red, my pulse quickened, and—no joke—my armpits started to sweat. I wasn’t chatting with a robot at all. I was chatting—not just chatting but flirting—with a real person,
online. I should never have embarked on
this without knowing what Cleverbot was.
Could it simply be a web app that connects two people online for a chat,
like Chatroulette?
“I thought
you were a robot,” I protested, knowing full well what a liar I sounded like.
“I am not a
robot,” Cleverbot—or, rather, the mystery woman—assured me.
“Why are you
called Cleverbot?”
“I’m not
cleverbot! You’re the one who is
supposed to be the cleverbot, not me. And
I may not be smart but you are not even close as smart as I am.”
Just then,
as if on cue, my wife stepped into the room.
“What are you doing?” she said.
“I thought you were on a bike ride.”
What could I say? “Oh, I’m in a
chat room with a woman who says she’s wearing only bubble wrap.” Not a good idea. Instead, grasping at straws, I said, honestly
enough, “I’m researching a blog post.
Hey, Cleverbot is a robot,
right? It’s not actual people chatting
is it?” My wife, yawning, said she was
pretty sure it was a robot, but one that built a library of responses from its
communications with humans.
“I guess I
have no idea what Cleverbot is,” I typed.
“I thought the whole point here was that people could chat with a robot.”
“You’re the
robot though right?” asked the mysterious chatting entity, innocently (or faux
innocently).
“No, I’m not
a robot. I'm human too!” I typed. Were we pawns caught in a deadly game … or
was I chatting with a robot after all, which was expertly impersonating a human
so as to fulfill its goal of acing the Turing test?
“No you are
not. I am a human. You are a robot,” it/she said.
“No, no, no.
I am human, I assure you.”
“If you are
real or not depends what you mean by real.”
“OK,
whatever.”
“I have explained
the best I can.”
I closed the
browser. I was straddling the fence
between nervousness and relief. What had
just transpired? Was that a chat with a
robot, or chat roulette? Did my wife see
how red my face was?
Looking
back, I marvel at how worked up I’d gotten.
On the other hand, this makes sense.
I’m a shy person. The essence of
social awkwardness is not knowing where you stand with regard to others. It’s bad enough when you’re meeting people
for the first time and have to do a lot of guessing about the right thing to
say; it’s even harder when you don’t have any social cues at all, and don’t
even know whom—or what—you’re chatting with.
I shut off the computer and headed out for my bike ride.
Epilogue – what is Cleverbot, really?
Cleverbot,
thank goodness, really is a bot. It is a
web application that builds a database of chat responses based on conversations
with humans. (Click here and
here for details.) The more Cleverbot chats,
the more its database grows, and (in theory) the more realistic and germane its
responses will be.
How valid is
this approach? Well, Cleverbot did fool
me into thinking it was human. But
looking back, this wasn’t the result of it being particularly clever. The main thing that made me think I was
chatting with a human was Cleverbot’s simple statement, “I am human.” In the context of a female clad only in
bubble wrap, to whom I’d just suggested slipping into something more
comfortable, these were powerful words, provoking my paranoid “what if?”
response. But really, why wouldn’t an AI app trying to appear
human simply assert that it is?
One problem
with Cleverbot’s “learning” technique is that it is dependent on humans to ask the
questions. I suppose it can regurgitate
these questions to other humans, which is somewhat useful, but there’s no
mechanism for it to come up with questions of its own. A really great question for it to ask—assuming
it is a connection-oriented app—might be, “What is the right answer to your
question?”
This brings
me to the next problem I see with Cleverbot:
it has no way of discerning the right
answer based on responses—it can only determine the popular answer. These are
not always—or even often—the same thing.
Consider all the “best of” awards that go to an undeserving, but widely
known, recipient, like Chevy’s winning “best Mexican restaurant” in a Bay Area
poll, beating out literally dozens of better places. (No real expertise is
involved there; people just put down the first answer they think of, and everybody has heard of Chevy’s.) Similarly, if Cleverbot blithely accepts
answers from the unwashed masses, it will never be smarter than they.
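To make this weakness concrete, here is a minimal sketch of the kind of learning scheme described above: a bot that stores every reply humans have given to a prompt and parrots back the most popular one. (Cleverbot’s actual implementation isn’t public; the class name, the fallback reply, and the sample data below are all my own invention.)

```python
from collections import Counter, defaultdict

class CrowdBot:
    """Toy retrieval chatbot: it learns replies from its human chat
    partners and answers each prompt with the most *popular* reply it
    has seen -- not the most *correct* one."""

    def __init__(self):
        # For each prompt, count how often each reply has been given.
        self.replies = defaultdict(Counter)

    def learn(self, prompt, reply):
        self.replies[prompt.lower()][reply] += 1

    def respond(self, prompt):
        seen = self.replies.get(prompt.lower())
        if not seen:
            # Never heard this prompt before: fall back on a non-answer.
            return "What what."
        # Majority vote of the unwashed masses.
        return seen.most_common(1)[0][0]

bot = CrowdBot()
bot.learn("Who ya gonna call?", "Ghostbusters!")
bot.learn("Who ya gonna call?", "Ghostbusters!")
bot.learn("Who ya gonna call?", "It's just a spring clean for the May queen.")
print(bot.respond("Who ya gonna call?"))        # the popular answer wins
print(bot.respond("Is there balm in Gilead?"))  # unseen prompt: fallback
```

Such a bot can never be smarter than the crowd that trains it: if most people answer a question wrong, it confidently answers wrong too, and a question nobody has ever asked it draws only the fallback.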
Not
surprisingly, when (in a follow-up chat today) I presented Cleverbot with a
less obscure reference—“If there’s somethin’ strange in your neighborhood, who
ya gonna call?”—it got the right answer—“Ghostbusters!”—about half the
time. (The rest of the time it replied,
“It’s just a spring clean for the May queen.”
If there’s a link between Led Zeppelin and the 1984 comedy movie, I’m
not aware of it.) This pattern is
consistent with other cultural references; when I said, “This Roman Meal bakery thought you’d like to know,” Cleverbot replied obliquely: “Where on earth are your servers?” (The correct answer, of course, is “I don’t need no arms around me.”) But when I typed, “We don’t need no
education,” it naturally gave the right response, “We don’t need no thought control.” The silly song that got lots of radio play is recognized; the much
better but less popular song is not.
In this
regard, Cleverbot could do so much better.
I Googled “Is there balm in Gilead” and got three hits referring to an
old religious song, and the fourth hit led me to “The Raven.” Not bad. Googling “Is there, is there balm in Gilead,”
I get “The Raven” as the second hit. But
you could ask Cleverbot this question a million times and it’ll never figure
out what you’re talking about. Cleverbot is beholden to its chat partners for information, ignoring the rest of the Internet
entirely. Finally I told it, “The right
answer is ‘Nevermore.’” It replied, “No,
I want you to sing the song.” I obliged,
pasting in lyrics from the spiritual:
“Sometimes I feel discouraged, And
think my work’s in vain.”
Cleverbot replied:
“I know, right?”