Friday, September 7, 2012

Almost Intelligent - Part II


Introduction

In my previous post I explored modern efforts at artificial intelligence, evaluating them in terms of two common criteria:  how well AI devices can simulate human dialog, and how well they translate languages.  In this post, I will look at another classic measure of the progress of AI:  how well a computer can play a game.

It’s not hard to see why this criterion is a valid one.  So often, a computer (or other machine) simply does what we tell it to do (or at least it tries).  With a game, the computer, far from accommodating you, is carrying out its own agenda, which is in direct opposition to yours.  Also, whereas Siri or a chatbot may not be “connection-oriented” (that is, may not actually consider sequential inputs in the context of an ongoing conversation), a computer playing a game most certainly is.  Thus, if it does a really good job of beating us, all on its own, it’s both the most successful and (at least to me) the creepiest manifestation of AI there is.

My early, early experience

I got a very early start with computer gaming.  Before personal computers were a common fixture in homes, my brother Bryan wrote a game for the Hewlett-Packard Model 85, a computer which my dad bought and let us kids use (which stands out as one of parenting’s finest moments, if you ask me).

The game Bryan coded was Hexapawn, a simple variant of chess involving three pawns per player on a 3x3 board.  Wikipedia tells us that the game’s inventor, the famous mathematics and science writer Martin Gardner, “specifically constructed it as a game with a small game tree, in order to demonstrate how it could be played by a heuristic AI implemented by a mechanical computer.”  I’m sure Gardner would be thrilled to learn that Bryan was inspired by his magazine article on the topic.  (I asked Bryan today if he can recall his precise motivation for that programming project, and he replied, “Well, I loved science, computers and futuristic stuff, not really sure why, heck, we all did, but there was just one problem, and that’s that the [HP-85] computer didn’t really do anything. There were a few primitive games and whatnot, but as you know, those got old pretty fast.”)

At first, the HP-85 and I were pretty well matched.  But once I got the hang of Hexapawn, I found I could beat the computer—but only for a while.  The computer learned while playing:  it never made the same mistake twice.  Thus, after this learning period our games always ended in a draw.  But the HP-85 had one weakness:  when you exited the program, its memory was erased.  The next time you played, it had to learn all over again.
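A learner like that is presumably some variant of Gardner’s matchbox scheme: keep a list of candidate moves for every position you’ve encountered, and cross off a move whenever it turns out to end (or lead into) a losing game.  Here’s a minimal sketch in Python—the board encoding, the fixed opponent, and all the names are my own invention, not a reconstruction of Bryan’s program:

```python
import random

# Hexapawn on a 3x3 board, stored row-major with row 0 at the top.
# 'W' pawns move up (toward row 0), 'B' pawns move down (toward row 2).
# You win by reaching the far row or leaving your opponent with no move.

START = ('B', 'B', 'B', '.', '.', '.', 'W', 'W', 'W')

def legal_moves(board, player):
    """All (src, dst) index pairs available to `player`."""
    step = -3 if player == 'W' else 3
    enemy = 'B' if player == 'W' else 'W'
    moves = []
    for i, piece in enumerate(board):
        if piece != player:
            continue
        f = i + step
        if 0 <= f < 9 and board[f] == '.':          # forward push
            moves.append((i, f))
        for d in (-1, 1):                           # diagonal captures
            j = f + d
            if 0 <= j < 9 and j // 3 == f // 3 and board[j] == enemy:
                moves.append((i, j))
    return moves

def apply_move(board, move):
    b = list(board)
    b[move[1]], b[move[0]] = b[move[0]], '.'
    return tuple(b)

def winner(board, to_move):
    """'W' or 'B' if the game is over before `to_move` plays, else None."""
    if 'W' in board[0:3]:
        return 'W'
    if 'B' in board[6:9]:
        return 'B'
    if not legal_moves(board, to_move):             # stuck = you lose
        return 'B' if to_move == 'W' else 'W'
    return None

class MatchboxLearner:
    """Never makes the same mistake twice: after each loss it deletes
    the last move it played from that position's 'matchbox'."""
    def __init__(self, side):
        self.side, self.box, self.history = side, {}, []

    def choose(self, board):
        moves = self.box.setdefault(board, legal_moves(board, self.side))
        if not moves:
            return None                             # every reply here lost: resign
        move = random.choice(moves)
        self.history.append((board, move))
        return move

    def finish(self, won):
        if not won and self.history:                # punish the last move played
            board, move = self.history[-1]
            self.box[board].remove(move)
        self.history = []

def play_game(learner):
    """Learner vs. a fixed opponent that always plays its first legal move."""
    board, to_move = START, 'W'
    while True:
        w = winner(board, to_move)
        if w:
            learner.finish(w == learner.side)
            return w
        if to_move == learner.side:
            move = learner.choose(board)
            if move is None:                        # resignation counts as a loss
                learner.finish(False)
                return 'W' if learner.side == 'B' else 'B'
        else:
            move = legal_moves(board, to_move)[0]
        board = apply_move(board, move)
        to_move = 'B' if to_move == 'W' else 'W'
```

Run a few hundred games with `learner = MatchboxLearner('B')` and its losses dry up: every defeat permanently prunes one bad move, so against a fixed opponent the pruning converges.  Exit the program, though, and (just like the HP-85) the `box` dictionary is gone.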

Deep Blue vs. Kasparov

I don’t need to say much about this because you surely know the story:  in 1997, an IBM computer called Deep Blue beat Garry Kasparov, the reigning world chess champion, at his own game.  I haven’t watched the matches (I’m not much into chess; in fact, I once lost to my four-year-old nephew) but I gather Kasparov got pretty heated.  He even accused the IBM team of cheating by helping Deep Blue out behind the scenes.  A documentary about the match, commenting on how visibly flustered Kasparov got, said he would be the worst poker player in the world.

In a sense, it wasn’t a fair matchup:  Deep Blue got Kasparov’s goat, but the computer had no goat.  An awareness of the significance of your activity is part of what it means to be intelligent, so to the extent that Deep Blue played mechanically, it wasn’t quite intelligent.  I cannot brood about Kasparov losing his temper against a soulless, ruthless computer without fantasizing about Kasparov grabbing a cheap knockoff peripheral device, unsupported by Deep Blue’s operating system, and jamming it into a USB port.  The machine’s calculations grind to a halt and eventually it blue-screens, thus losing by default to the human.  And the crowd goes wild!

My other early experience

In 1984, a friend and I took on his Apple IIe computer in a game far more exciting than Hexapawn:  strip poker.  Needless to say, only our opponent would actually strip.  As I recall, we had three babes to choose from as our rival.  Now, before you get too excited (or offended), remember the quality of computer graphics in that era.  This was extremely low-resolution—the CRT equivalent of Pointillism.  Still, it was fun to play poker against, and strip, the babes.

We eventually discovered a huge weakness in the computer’s play that has strong ramifications for AI in general:  you could easily win just by bluffing constantly.  So long as we bet big on every hand, no matter how lame our cards were, we’d have our opponent bare naked within minutes.  One babe was as gullible as the next—they never learned!  But then, how could they?  A smarter program could have noted the frequency of our bluffing, but this one didn’t.  Its creators could have implemented some sort of ratio-based “this guy bluffs” detector, but ultimately how smart can a computer get about human treachery?  Could it ever pick up on the hundreds of nonverbal cues that a human can?  Can it really learn the traits of its opponent?

Consider this anecdote.  I attended Poker Night (a fundraising event for my kids’ school) a few months back, and (not wishing to spend too much money) was very conservative with my betting.  When I finally got an obviously good hand (this was Texas Hold ‘em, a game unfamiliar to me, and I was hopeless at spotting opportunities), I finally bet big.  None of us had played one another before, so there was much conjecture about whether or not I was bluffing.  “He’s been betting low all night. He’s got something!” someone said.  “No, he might just have balls,” another guy said.  A third guy replied, “No, he’s in my wife’s book club, so I know he doesn’t have any balls!”  See?  Though he’d never played against me, that third guy had biographical information that came into play.  I’d like to see Deep Blue go up against a professional poker player.  It would get its CPU kicked!

What’s the point?

There are two main reasons I can think of for a computer to play a game.  One is so that a lone person can have somebody to play against.  The other is to prove that the computer can actually do it.  But what is the point of people playing games?  Why do we do it?  This question, I think, gets at the core difference between humans and AI.

Of course there are all kinds of reasons people play games, but a computer only plays a game because a human told it to.  And all a computer knows how to do is to try to win.  I play games to have fun (which a computer can’t do) and to teach my kids things. 

For example, my family loves to play Apples to Apples.  In this game, players take turns being the judge.  The judge turns over a green card that has an adjective on it (e.g., brave, difficult, scary).  Each of the other players has seven red cards, each with a noun printed on it (e.g., doorknob, t-shirt, egg).  Each player selects from his hand the card whose noun best exemplifies the adjective on the green card.  The judge chooses which player’s card matches the green card the best, and awards the green card to the player who provided it.  Although the Wikipedia article about it lists many variations for this game, none matches the way my family plays, which is that each player makes an argument for his choice, to persuade the judge.  (We assumed this was the whole point of the game; otherwise, the game seems pointless.)  These arguments are often elaborate, sometimes ingenious, and always funny.  I’m hoping this game will help my kids learn the art of rhetoric.  I cannot imagine that a computer will even be able to construct a rhetorical argument, much less teach rhetoric to a human or learn it from a game, anytime soon.

My favorite game, Sorry!, exists in a computer version, and though I haven’t tried this version (why would I? I have kids!), I can imagine that a computer could do okay against humans if all parties took a similarly cutthroat approach to the game.  But for me, a cutthroat approach is out of the question.

Why?  Well, for one thing, I’ve been playing this game with my kids since they were very young and given to bursting into tears when they got bumped or Sorry’d.  (It’s natural to feel singled out when an opponent, faced with multiple options of how to play a card, chooses the option that hurts you, as opposed to another player.)  I don’t like to make my kids cry.  Also, I like to give a little help to my younger daughter to better her chances against her big sister.  And of course I want the game to be fun.  But most of all, I want to teach my kids about quid pro quo.  I want to teach them how to make deals.

“Okay, I’m going to show you mercy here,” I’ll declare.  “I could split this seven and knock your pawn back to home, but I won’t—I’ll just move seven spaces.  But I want you to remember this the next time you draw a ‘Sorry’ card.”  There’s no codified way of keeping track of these favors … they’re informal and involve approximations of justice.  Such deal-making is a crucial capability—not just in a game but in life.  I cannot play Sorry, in fact, without thinking about the epic failure of Flavr Savr genetically engineered tomatoes.

I read about these tomatoes in a 1993 “New Yorker” article, written a few months before the product hit the market.  The obviously creepy idea of genetically engineered food isn’t the only thing that stuck with me from the article.  I was very impressed by the account of one Ed Agrisani, a Rolex-sporting, big-time tomato salesman interviewed for the article, who predicted (accurately, as it turned out) that Calgene’s $25 million experiment would be a complete failure.  To Agrisani, the quality of the new tomatoes was almost beside the point, because Calgene had no experience actually selling tomatoes: 
“What separates the men from the boys in this business is whether you can sell your tomatoes when nobody wants them, when you’ve got a whole field that’s just going to rot out there unless you can move ‘em out.  I’ve got customers who know that when the supply is tight they can call me and I’ll sell ‘em a load.  So when I get oversupplied I can call them and say, ‘Hey, I know you don’t need it, but how about buying a load?’  And they’ll say, ‘We’ll send the truck.’  It took me sixteen years to get to where I had the relationships to do that.  Now, maybe the folks at Calgene think they can come in and do it overnight—and, like I say, I wish ‘em the best—but it’s not a simple deal.”
I’ll let somebody else teach my kids chess.  For me, the speech-making involved in Apples to Apples and the deal-making in Sorry! are the better skills to learn, as they completely transcend the game itself.

Conclusion

A computer can play a mean game of chess.  But perhaps chess is unique among games in relying mainly on intellect, strategy, and computational ability.  When we consider games that use the full spectrum of human intelligence—interpreting facial expressions, ad hoc profiling of opponents, making arguments that appeal to quasi-rational humans, making deals, having fun—it starts to look like AI is still pretty far from the end zone.  And even if a computer gets good at a game, it will remain utterly powerless to take what it’s learned and apply it to real life.  (Of which, of course, it has none.)

This is all fine with me.  I’m all for improvements in AI to the extent this makes the machines into better slaves.  I’m much less excited about a computer defeating me at anything.
