
Thursday, July 2, 2015

Can AI 'experience' emotion?

Emotions are often portrayed in sci-fi as the last realm of humans, the only aspect of thought unavailable to machines. Think of Data's long quest for emotional experience on Star Trek, or the hard questions faced by Deckard in Do Androids Dream of Electric Sheep? (or Blade Runner, if you're more of a film person). For all the unmarred rationality they display in stories, robots in real life can't seem to escape the emotional attachments and influences of humans. From military troops mourning their lost mechanical comrades to apps like Siri that depend on conversational interaction, humans tend to anthropomorphize AI, attributing to it some measure of the feelings they have themselves. But when asked directly whether a computer can have feelings, many people would argue no, because of the inherent rationality assumed of mechanical systems. Is it possible? Could AI ever feel emotion?




As humans, we define our emotions categorically - we feel happy, or sad, or angry, and so on. How is "happy," as a category containing who knows how many subcategories of semantics - pleased, content, euphoric - defined? There are a few different ways you could approach a definition -

Emotion as a pre-determined response - The way I feel after I get something I want is “happy.” 
Emotion as a physical state - Elevated serotonin and endocannabinoids are “happy.”
Emotion as derived from desire - Not needing or wanting to change anything about my current state is “happy.” 

Woman? Salad? Definitely happy.

To most people these definitions probably seem roundabout and strange. Humans have a unique gift of language that allows happiness to encompass all of these things, its typical definition boiling down to the abstract "feeling good." But when exploring the concept of emotion in an artificial system, one without the convenient crutch of human consciousness and understanding, we have to look for more concrete rules.

By using any of the definitions I've listed above, I'd argue that yes, an artificially intelligent system could feel emotion. It'd be easy for an event to flip a switch in an AI's programming, setting "mood = 'happy.'" Bit-and-byte facsimiles of the chemical changes in the human brain that correlate with emotional response would be simple to measure and categorize. And for a system that monitors its goals constantly, the third definition of emotion would be quite useful - I'm reminded of a study in which people with damage to the emotional centers of their brains were rendered incapable of simple decisions, like picking a pen to write with, because there was no "rational" distinction between the choices. Emotion could be defined this way as a wash of small influences on each of our desires and decisions, based on current circumstances. AI can have emotions defined in these ways. But, as I'm sure many of you are shouting at your computers right now, that's not the real question.
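To make that concrete, here's a minimal Python sketch of how each of the three definitions above might be wired into a program. None of it comes from a real system - the Agent class, the serotonin_level stand-in, and the goal list are all invented for illustration.

```python
# A minimal sketch, not a real system: Agent, serotonin_level, and the
# goal list are all invented names for illustration.

class Agent:
    def __init__(self):
        self.mood = "neutral"
        self.serotonin_level = 0.5   # numeric stand-in for a brain chemical
        self.goals = {"write blog post": "done", "eat lunch": "pending"}

    # Definition 1: emotion as a pre-determined response to an event.
    def on_goal_achieved(self, goal):
        self.goals[goal] = "done"
        self.mood = "happy"          # the "flip a switch" version

    # Definition 2: emotion as a physical (here, numerical) state.
    def chemical_mood(self):
        return "happy" if self.serotonin_level > 0.7 else "neutral"

    # Definition 3: emotion derived from desire - nothing left to change
    # about the current state counts as "happy."
    def desire_mood(self):
        pending = [g for g, s in self.goals.items() if s == "pending"]
        return "happy" if not pending else "wanting"

agent = Agent()
agent.on_goal_achieved("eat lunch")
print(agent.mood, agent.desire_mood())   # -> happy happy
```

By all three rules the agent is "happy," which is exactly the point: each definition is easy to implement and easy to measure.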

A better question is: Could AI ever experience emotion the same way that humans do? 


This is also a much harder question, because we have no reference point. We don't know what turns a chemical highway into an emotional experience for humans, but we can at least see the same correlation between stimulus and response in dogs, mice, apes, babies, etc. One prime example is the extended-tongue expression that comes with liking a taste - replicated across species as an instinctive reaction, it suggests that emotion is not a solely human experience. With AI, there's no such connection. To assume any would be to anthropomorphize a machine to a dangerous extent. We can program the classification of emotion. We can program the physical qualities of emotion. We can program decisions and goals for an AI that rely on a self-perception of emotion.

But can we program emotion itself? 
Or is it really necessary to? 

One of the beautiful things that comes from consciousness is the shared experience. We all agree that the sky is blue, the arctic is cold, and the live-action Avatar: The Last Airbender movie would've been terrible, had it ever been created.

Really dodged a bullet there. 

Through language we are able to connect the personal to the universal in this way. But it's a flawed system. There's no way to prove, for example, that the blue I see is the same blue you see. Or that the happy I feel is the same one you do. Each of these things is entirely subjective, impossible to measure, and impossible to share without the shaping force of language. If the true, pure essence of my "happy" was your experience of "sad," who would ever know? We could only define happiness in a truly universal manner as a measurable response - physical, behavioral, or cognitive - to whatever experiences we had shared.


So it doesn’t matter whether the emotions an AI experiences are the same as ours, because we’d never know. It would only matter whether these objective, concrete definitions of emotion held true. These are the only things we can measure. Anything further is an argument on consciousness, humanity, and the ineffable, and their definitions, which have been debated for centuries. These concepts too will need concrete restrictions as AI becomes more and more prevalent in our human world. AI means a new era of philosophy in which questions are no longer enough. We can debate the existence of qualia or the Chinese Room experiment all day (and in a later post.) But beyond philosophical misgivings, does it matter if an AI’s blue is the same as yours, if you can both tell me the color of the sky? In such abstract terms, an AI can experience emotion - but only as much as we’re willing to attribute to it.

Questions? Comments? Arguments? Please add them below; I'd love to see what you have to say.

Thursday, November 27, 2014

Why Artificial Intelligence IS Real Intelligence



One of the most common arguments I've seen in the face of AI research is that computers aren't REALLY intelligent. They merely emulate intelligence, something that is inherent to biological life or, in some views, only to humans. In his article "Artificial Intelligence, Really, Is Pseudo-Intelligence," Alva Noë argues that computers lack 'drive': they can't attach meaning to things, and therefore they can't have wants like biological beings do.

Let's say, right now, I want a chocolate bar. "Want" is a pretty complex term. Does it refer to the lack of calories, calcium, sugar, magnesium, or even serotonin that my body's sensors detect in my bloodstream? Does it refer to the physical symptoms that manifest in my stomach and mouth that my brain recognizes, or the memory of how a chocolate bar negated those symptoms a week ago? Does it refer to my memories of having a chocolate bar while at the computer, which have taught me chocolate is the "right" choice in this situation - just one of many learned behaviors? Does it refer to an emotional component, a combination of learned behavior and brain chemical levels telling me that chocolate makes me 'happy'?

"Want" encompasses all of these things. It's complex, and it's more complex than anything AI can do right now as a whole. But when you break it down this way, what up there can we do that computers can't? We can create programs that take information from sensors. We can create programs that can access memories and find patterns, and determine a course of action based on that pattern. That's all intelligence is. The human mind, our meanings and desires, are only complex derivatives of very basic mechanical things, in the same way that the leaves on a vine create a beautiful spiraling pattern simply as a way of maximizing the sun coverage each leaf gets. Biological machines aren't inherently different from artificial machines. They've just had a head start. 



Another argument he uses is that of understanding: computers can't understand; they can only perform the actions they're told to. One example is the Translator's Room, a variant of Searle's Chinese Room. A human is locked in a room with nothing but a pen and dictionaries that translate one foreign language into another. The human knows neither of these languages. However, every day, they receive papers with writing in one of the languages. Using the books, they are able to perfectly translate the writing into the other language before passing the paper back out of the room. They can complete this task despite not 'understanding' either language.

This argument doesn't negate the possibility of artificial intelligence. It shows that a system can only do so much with limited information. If the books in the Translator's Room scenario had a picture for each word they translated, the human would be able to understand another component of the sentences they wrote. What if they were familiar pictures? Just like Helen Keller with her hand beneath the spigot, the human could recognize water in any language if they just had another reference point. Another piece of information. Could that be considered understanding? A computer can store associations and memories just as a human mind can, and the more data a computer has access to, the more associations it can make. Isn't that all understanding is? A summary of our experiences and the patterns we've derived from them?
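Here's a rough sketch of that idea - understanding as accumulated associations, where each new reference point is just another entry attached to the same symbol. The dictionary and its contents are invented for the example.

```python
# Understanding as accumulated associations: each new reference point
# is just another entry attached to the same symbol. Purely illustrative.

associations = {
    "water": {"spanish": "agua", "sensation": "cool and wet on the hand"},
}

def add_reference(word, kind, value):
    associations.setdefault(word, {})[kind] = value

# The translator in the room starts with text-only associations...
add_reference("water", "french", "eau")
# ...and a picture, like Helen Keller's hand under the spigot,
# adds a whole new modality to the same symbol.
add_reference("water", "picture", "water.jpg")

print(associations["water"])
```

The point isn't the data structure; it's that "more reference points" is a mechanical operation, available to a human in a room and a computer alike.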



Watson, the Jeopardy!-playing computer system the article cites as an example of a lack of understanding, THRIVES on those summaries. It doesn't have any visual or physical references, which make up most of our human understanding. But it knows a river is a flowing body of water. It knows water is a compound in a liquid state that is common on Earth and necessary for human life. It knows flowing is a type of movement only fluids, like liquids, can achieve. Even without visual reference, how is this not understanding? Watson can make inductions. It can make deductions. And it can use those abilities to answer questions.
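To show the kind of chained lookup I mean, here's a sketch using a few hand-written toy facts (nothing taken from Watson's actual knowledge base) - just enough linked definitions to walk from a river to water to flowing.

```python
# Hand-written toy facts, not Watson's actual knowledge base.

facts = {
    "river": {"is a": "flowing body of water"},
    "water": {"is a": "liquid compound necessary for human life"},
    "flowing": {"requires": "a fluid, such as a liquid"},
}

def describe(concept, depth=0, seen=None):
    """Print what the system 'knows', chaining into any known concept."""
    seen = seen if seen is not None else set()
    if concept in seen or concept not in facts:
        return
    seen.add(concept)
    for relation, value in facts[concept].items():
        print("  " * depth + f"{concept} ({relation}) {value}")
        for word in value.replace(",", "").split():
            describe(word, depth + 1, seen)

describe("river")
# river (is a) flowing body of water
#   flowing (requires) a fluid, such as a liquid
#   water (is a) liquid compound necessary for human life
```

Each answer pulls in the definitions it's built from, which is all a chain of inference is: follow the associations until the question is covered.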

I'd argue that's what intelligence is: our ability to derive patterns from information and act using those patterns. That ability is just as real in computers as in any biological creature. It's just our job to prepare computers to use it.