What a field day for the heat
A thousand people in the street
Singing songs and carrying signs
Mostly saying, "hooray for our side"

Saturday, April 9, 2011

Why Watson's Appearance on Jeopardy Isn't AI's Moon Landing Moment

There's a lot being bandied about with Watson truly trumping two Jeopardy champions on TV earlier this year. And while it's a pretty damn good triumph, it isn't all that it's being trumped up to be.

Is Watson cool? Oh my, yes! Is Watson a step forward in natural language processing? More like a leapfrog, with rocket boosters. Is Watson the clarion call that AI is about to sweep us all up, until we're all under the thumb of a real-life Skynet? (insert laughing with snorts here)

Watson is very cool, make no mistake. Watson is also very good at parsing natural language (which is a very different problem from handling simple voice commands). And on that front, Watson is an amazing leap forward. However, Watson is very far from passing a Turing Test.

The first myth we need to dispel is the media's line that because computers have bested our best human chess player and have now bested our top Jeopardy champions, this proves how advanced they've become. 1) They were two very different computers: Deep Blue couldn't do what Watson did (even with reprogramming), and Watson can't play chess. 2) Watson did trash two of the best Jeopardy champions (well, he Mop & Glo'd the floor with them on the strength of the second day), but its full record is much spottier.

Answering questions isn't the same as intelligence (although in our test-oriented modern culture it can seem that way). Watson can parse a question posed in natural English (an amazing feat considering how crazy our language can be with its subtle meanings and wordplay). Watson, however, can't initiate conversation. Watson can't even reverse the process by writing the "answers" that are really Jeopardy questions (the game format being that you have to respond in the form of a question, based on the revealed answer on the game board).

If you ask Watson a question, chances are it can find you an answer (although it may not be the answer you're looking for, given how Watson parses your question). It can even offer alternatives (which is also pretty good). However, it can't formulate a cohesive thesis based on the answers it finds. It can only find the answers, assign them probabilities of being correct, and spit them out in the format it's programmed to handle (in this case, in the form of a question: "Who/What is…"). It can sort through mounds of data and find correlations and connections. It can identify the critical pieces of data it should be considering.
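To make that "find, score, and format" distinction concrete, here's a toy sketch in Python. This is emphatically not IBM's actual DeepQA pipeline; the clue, the evidence snippets, and the keyword-overlap scoring are all made up for illustration. Note that every step is mechanical: nothing in it builds a thesis about the topic.

```python
# Toy sketch of the "find candidates, assign confidences, format output"
# loop described above. Not IBM's DeepQA -- purely illustrative.

def score_candidates(clue_words, candidates):
    """Score each candidate answer by crude keyword overlap with its evidence text."""
    scores = {}
    for answer, evidence in candidates.items():
        evidence_words = set(evidence.lower().split())
        overlap = len(clue_words & evidence_words)
        scores[answer] = overlap / (len(clue_words) or 1)  # pseudo-confidence in [0, 1]
    return scores

def respond(clue, candidates, threshold=0.3):
    """Return the top candidate phrased as a question, or pass if too unsure."""
    clue_words = set(clue.lower().split())
    scores = score_candidates(clue_words, candidates)
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return None  # not confident enough to buzz in
    return f"What is {best}?"

# Hypothetical clue and evidence snippets, invented for this example.
clue = "This Italian city is famous for its leaning bell tower"
candidates = {
    "Pisa": "the leaning bell tower of Pisa is a famous Italian landmark city",
    "Rome": "Rome is the capital city of Italy famous for the Colosseum",
}
print(respond(clue, candidates))  # → "What is Pisa?"
```

The point of the sketch: the system ranks what it retrieved and pours the winner into a fixed template. At no step does it understand Pisa, towers, or why the question was asked.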

Right now, Watson is a functioning idiot savant.

I've long been known for holding heretical SF/F beliefs. I, for instance, don't believe the Singularity will happen. Or at least it won't happen anytime in the near future.

There are several reasons for this. First of all is a misunderstanding of the human brain and the comparison to a computer. The brain is not a computer. It's just that the computer is the most complicated machine we have at the moment, so we make the comparison. There are several fundamental differences.

A computer works through the polarization of circuits; a brain functions on the depolarization of cells. We can translate a neuron to a transistor, but only insofar as both are the smallest operational pieces of their systems. A transistor can only return either "sufficient charge" (i.e. on/1) or "insufficient charge" (i.e. off/0), and through the manipulation of those electrical charges we get logic functions. A neuron doesn't hold a charge; instead it has potentials. A neuron can fire (equate that to a 1) because of a single large stimulation, because of several small stimulations summed from several other neurons, or because one other neuron continues to fire at it. A transistor can give only two possible outcomes, and in only one direction. A neuron can give multiple outcomes (through different neurotransmitters) and can have multiple directions of outflow. A transistor only changes state when it is driven by another current. A neuron can do that, but it can also fire because it lacks stimulation, or because of a physical stimulus (in addition to stimulation by neurotransmitters, which is the closest analog to an electrical current driving a transistor).
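The contrast above can be sketched in a few lines of Python. This is a crude integrate-and-fire caricature, not a real neuron model, and every number in it is arbitrary; it only exists to show the structural difference: a gate maps one input to one binary output, while the neuron accumulates many small inputs over time and fires when a threshold is crossed.

```python
# Minimal sketch of the transistor-vs-neuron contrast. The "neuron" here
# is a deliberately crude integrate-and-fire toy; all thresholds and
# amounts are arbitrary illustration values.

def gate(input_charge, threshold=1.0):
    """Transistor-like: one input, one binary outcome."""
    return 1 if input_charge >= threshold else 0

class Neuron:
    """Crude integrate-and-fire model: many inputs accumulate as potential."""
    def __init__(self, threshold=1.0, leak=0.1):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def stimulate(self, amount):
        self.potential += amount
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1               # fires
        return 0

    def decay(self):
        """Potential leaks away between stimulations."""
        self.potential = max(0.0, self.potential - self.leak)

print(gate(0.4))          # below threshold: 0

# One large stimulation fires the neuron...
n = Neuron()
print(n.stimulate(1.2))   # fires: 1

# ...and so do several small stimulations that individually would not.
n2 = Neuron()
print([n2.stimulate(0.4) for _ in range(3)])  # [0, 0, 1]
```

Even this toy shows the asymmetry: the gate's behavior is fully described by one comparison, while the neuron's next output depends on its whole recent history of inputs.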

The more I learn about the human body, the more I realize that a computer just doesn't compare.

However, this doesn't mean a computer can't become intelligent; it just won't be intelligent in the same way a human is. When robotics pioneers gave up on making human-like robots, they gained tremendous ground in functionality by having their robots mimic the functions of ants and other animals. How? Well, there was nothing to compare them to. It's not like the ants were standing up and saying, "That's not how we do it." Instead, the robots performed to our impression of how ants perform.

I think if computer scientists gave up on mirroring human intelligence in their computers, they would also make great strides in developing actual computer intelligences. Once we let go of the idea that we're the greatest thing because we made sliced bread, the world gets easier. It's an anthropomorphic trap we constantly put ourselves in.

Once we stop trying to make a computer intelligence in our image and instead explore new possibilities of consciousness, I think we'll be successful. What happens then will be completely unexpected and unpredictable. At this moment, all futurism is based on our machines developing motivations similar to our own. And that is an act of hubris that might lead to our undoing.

I hate to rain on anybody's parade, but when Skynet goes live, we probably won't even notice it. Instead of a Terminator moment, it will be more like alien first contact, or biological field research.


Dr. Phil (Physics) said...

I think your last paragraph is exactly right -- Why would Skynet suffer from Evil Mastermind Megalomania, common in any James Bond movie? (grin)

Dr. Phil

Steve Buchheit said...

Dr. Phil, I think the general mythos exists because it makes for good fiction and because, in the end, we're very human-centric and assume the rest of the universe would be as well. So to us, of course the computer would try to dominate/rule/kill us. Personally, I think it's suppressed wish fulfillment on the part of the writers.