Let's say that, right now, I want a chocolate bar. "Want" is a pretty complex term. Does it refer to the lack of calories, calcium, sugar, magnesium, or even serotonin that my body's sensors detect in my bloodstream? Does it refer to the physical symptoms in my stomach and mouth that my brain recognizes, or the memory of how a chocolate bar negated those symptoms a week ago? Does it refer to my memories of having a chocolate bar while at the computer, memories that have taught me chocolate is the "right" choice in this situation, just one of many learned behaviors? Does it refer to an emotional component, a combination of learned behavior and brain chemical levels that tells me chocolate makes me "happy"?
"Want" encompasses all of these things. It's complex, and it's more complex than anything AI can do right now as a whole. But when you break it down this way, what up there can we do that computers can't? We can create programs that take information from sensors. We can create programs that can access memories and find patterns, and determine a course of action based on that pattern. That's all intelligence is. The human mind, our meanings and desires, are only complex derivatives of very basic mechanical things, in the same way that the leaves on a vine create a beautiful spiraling pattern simply as a way of maximizing the sun coverage each leaf gets. Biological machines aren't inherently different from artificial machines. They've just had a head start.
Another argument the author makes is about understanding: computers can't understand; they can only perform the actions they're told to. One example is the Translator's Room. A human is locked in a room with nothing but a pen and dictionaries that translate one foreign language into another. The human knows neither of these languages. Every day, however, they receive papers with writing in one of the languages. Using the books, they can translate the writing perfectly into the other language before passing the paper back out of the room. They complete this task despite not "understanding" either language.
This argument doesn't negate the possibility of artificial intelligence. It shows that a system can only do so much with limited information. If the books in the Translator's Room had a picture for each word they translated, the human would be able to grasp another component of the sentences they wrote. And what if the pictures were familiar? Just like Helen Keller with her hand under the spigot, the human could recognize water in any language if they just had another reference point, another piece of information. Could that be considered understanding? A computer can store associations and memories just as a human mind can, and the more data a computer has access to, the more associations it can make. Isn't that all understanding is? A summary of our experiences and the patterns we've derived from them?
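Here's a rough sketch of what I mean by a reference point. The words and associations are made up for the example, and this isn't a real translation system: each word just gets a set of associations attached to it, and two words in different languages start to "mean" the same thing once they share one.

```python
# Toy illustration: give a word more reference points by attaching
# associations to it, the way a picture in the dictionary would.

associations = {}

def learn(word, reference):
    """Attach one more reference point (image, sensation, related word) to a word."""
    associations.setdefault(word, set()).add(reference)

learn("water", "picture_of_river")
learn("water", "feeling_of_wet_hand")   # Helen Keller's hand under the spigot
learn("agua", "picture_of_river")

def related(a, b):
    """Two words point at the same thing if they share any reference point."""
    return bool(associations.get(a, set()) & associations.get(b, set()))

print(related("water", "agua"))  # True: both are tied to the same picture
```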
Watson, the Jeopardy-playing computer the article cites as an example of a lack of understanding, thrives on those summaries. It doesn't have any visual or physical references, which make up most of our human understanding. But it knows a river is a flowing body of water. It knows water is a compound in a liquid state that is common on Earth and necessary for human life. It knows flowing is a type of movement that only fluids, like liquids, can achieve. Even without visual references, how is this not understanding? Watson can reason inductively. It can reason deductively. And it can use those abilities to answer questions.
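Chaining stored facts like that is a kind of deduction a program can already do. Here's a minimal sketch with my own toy facts; it has nothing to do with how Watson is actually built.

```python
# Toy knowledge base: a chain of "is-a" facts and one deduction over them.
# The facts and the rule are a simplification invented for this example.

is_a = {
    "river": "body_of_water",
    "body_of_water": "water",
    "water": "liquid",
    "liquid": "fluid",
}

def entails(thing, category):
    """Deduce whether `thing` falls under `category` by chaining is-a facts."""
    while thing in is_a:
        thing = is_a[thing]
        if thing == category:
            return True
    return False

# "Only fluids can flow" plus "a river is, ultimately, a fluid" => a river can flow.
print(entails("river", "fluid"))  # True
```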
I'd argue that's what intelligence is: our ability to derive patterns from information and to act on those patterns. That ability is just as real in computers as in any biological creature. It's just our job to prepare computers to use it.