Robots Who Cry

Ever since I destroyed my laptop's brain I've been thinking a lot about machines who have feelings. As I took my little Vaio apart, carefully disconnecting the keyboard and USB port from the motherboard, then finally removing the hard drive, I felt a pang of conscience. I really was about to destroy its whole way of thinking, converting it from Windows Me to Red Hat Linux 7.3. And while I knew its brain would be better after the conversion, I couldn't help worrying that I was somehow disrupting its life. My sturdy ultralight would never sing the annoying Windows song again. It would never attempt to start two processes simultaneously and crash. It would always, in the future, address me in command line mode unless I explicitly asked it to present me with a GUI.

My machine isn't human, but I feel like it is.

It's funny how strongly people react when I tell them about my moment of compassion as I rewrote my computer's mind. Nobody wants to get caught anthropomorphizing a computer - it's so amateurish. Sometimes people go into condescending mode and tell me my feelings are a sign that I obviously haven't spent enough time using computers to truly understand them. Or they get defensive and insist that computers can't ever be like us because we are alive and they are machines.

Even weirder is the way these comments about the mental state of computers seem to echo what people say about each other. Back when I had a wild and stupid crush on this SSH hacker who was mostly ignoring me, a friend said, "Really, you just don't understand. The guy probably isn't capable of having feelings." An acquaintance of mine, one of those sexist chicks who thinks men are from Mars and women are from Venus, noted that men just aren't like "us" and therefore we can't think of them as having the same kinds of emotions as we do. Yeah, right.

So you can see why it's sometimes difficult for me to take certain individuals seriously when they tell me computers just aren't like "us." The belief smacks of typical human egocentrism. "We" are the only ones who have feelings, make emotional connections, have a sense of self-consciousness, etc. And these things come naturally to us. They aren't programmed into us by parents and teachers and mass culture.

I'm not engaging in magical thinking here, arguing that my Vaio actually felt unhappy or disturbed when I turned it into a Linux machine. Despite any emotions I may develop for them, computers as they are now clearly don't function the way animals do. Instead, they are like prosthetics, as Marshall McLuhan would probably say, acting as extensions of human bodies and minds. So far they cannot act independently. And they cannot form social connections.

But Cynthia Breazeal is out to change all that. She's the MIT professor who built the so-called sociable robot named Kismet, whose cute techno-cartoony face is capable of smiling, frowning, crying, looking surprised, and a few other basic human emotional gestures. When people interact with him, Kismet responds to tone of voice, facial expressions, and physical demeanor, trying to come up with a socially appropriate response: a smile returns a smile; an angry tone returns a sad face. Breazeal used theories from developmental psychology to create a robot whose responses are based on the way a human infant might react to the grown-ups around it. The idea is that if we can create a robot who learns its behavior the same way people do - from human examples that include emotionally expressive behavior rather than pure base-2 logic - then we might find ourselves with a machine whose mind isn't so very different from our own.

Ultimately, however, what I like best about Breazeal's vision of sociable robots is that she suggests there is little difference between machines who appear to be sociable versus ones that "really" are. What does it mean to be "really" emotional, anyway? I can't always tell what my dinner date is feeling, so how can I possibly judge what it means to inhabit the psychology of a robot? If we can create a machine whose reactions seem entirely human, do we need to waste our philosophical time wondering whether circuits can ever feel the way neurons do? It's all electrical impulses in the end, baby.

Annalee Newitz (sociablehuman@techsploitation.com) is a surly media nerd who likes her computer better than you. Her column also appears in Metro, Silicon Valley's weekly newspaper.
