Robots Who Cry

Ever since I destroyed my laptop's brain I've been thinking a lot about machines who have feelings. As I took my little Vaio apart, carefully disconnecting the keyboard and USB port from the motherboard, then finally removing the hard drive, I felt a pang of conscience. I really was about to destroy its whole way of thinking, converting it from Windows Me to Red Hat Linux 7.3. And while I knew its brain would be better after the conversion, I couldn't help worrying that I was somehow disrupting its life. My sturdy ultralight would never sing the annoying Windows song again. It would never attempt to start two processes simultaneously and crash. It would always, in the future, address me in command line mode unless I explicitly asked it to present me with a GUI.

My machine isn't human, but I feel like it is.

It's funny how strongly people react when I tell them about my moment of compassion as I rewrote my computer's mind. Nobody wants to get caught anthropomorphizing a computer - it's so amateurish. Sometimes people go into condescending mode and tell me my feelings are a sign that I obviously haven't spent enough time using computers to truly understand them. Or they get defensive and insist that computers can't ever be like us because we are alive and they are machines.

Even weirder is the way these comments about the mental state of computers seem to echo what people say about each other. Back when I had a wild and stupid crush on this SSH hacker who was mostly ignoring me, a friend said, "Really, you just don't understand. The guy probably isn't capable of having feelings." An acquaintance of mine, one of those sexist chicks who thinks men are from Mars and women are from Venus, noted that men just aren't like "us" and therefore we can't think of them as having the same kinds of emotions as we do. Yeah, right.

So you can see why it's sometimes difficult for me to take certain individuals seriously when they tell me computers just aren't like "us." The belief smacks of typical human egocentrism. "We" are the only ones who have feelings, make emotional connections, have a sense of self-consciousness, etc. And these things come naturally to us. They aren't programmed into us by parents and teachers and mass culture.

I'm not engaging in magical thinking here, arguing that my Vaio actually felt unhappy or disturbed when I turned it into a Linux machine. Despite any emotions I may develop for them, computers as they are now clearly don't function the way animals do. Instead, they are like prosthetics, as Marshall McLuhan would probably say, acting as extensions of human bodies and minds. So far they cannot act independently. And they cannot form social connections.

But Cynthia Breazeal is out to change all that. She's the MIT professor who built the so-called sociable robot named Kismet, whose cute techno-cartoony face is capable of smiling, frowning, crying, looking surprised, and a few other basic human emotional gestures. When people interact with him, Kismet responds to tone of voice, facial expressions, and physical demeanor, trying to come up with a socially appropriate response: a smile returns a smile; an angry tone returns a sad face. Breazeal used theories from developmental psychology to create a robot whose responses are based on the way a human infant might react to the grown-ups around it. The idea is that if we can create a robot who learns its behavior the same way people do - from human examples that include emotionally expressive behavior rather than pure base-2 logic - then we might find ourselves with a machine whose mind isn't so very different from our own.
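If you want the flavor of that stimulus-and-response idea in the crudest possible terms, here's a toy sketch in Python - entirely my own invention, nothing like the developmental-psychology architecture Kismet actually runs - of a machine mapping a perceived human affect to a socially appropriate expression:

```python
# Toy illustration only: a hypothetical lookup from perceived human affect
# to an expressive response, NOT how Kismet actually works.
RESPONSES = {
    "smile": "smile",
    "angry tone": "sad face",
    "surprised look": "surprised look",
}

def respond(perceived_affect: str) -> str:
    """Return a socially appropriate expression for a perceived affect.

    Anything the machine can't read gets a neutral face - roughly what
    an infant does when it can't make sense of the grown-ups around it.
    """
    return RESPONSES.get(perceived_affect, "neutral face")

print(respond("smile"))       # -> smile
print(respond("angry tone"))  # -> sad face
print(respond("gibberish"))   # -> neutral face
```

The point of Breazeal's approach, of course, is that the real robot doesn't work from a fixed table like this; it learns those mappings from emotionally expressive humans, the way a baby does.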

Ultimately, however, what I like best about Breazeal's vision of sociable robots is that she suggests there is little difference between machines who appear to be sociable versus ones that "really" are. What does it mean to be "really" emotional, anyway? I can't always tell what my dinner date is feeling, so how can I possibly judge what it means to inhabit the psychology of a robot? If we can create a machine whose reactions seem entirely human, do we need to waste our philosophical time wondering whether circuits can ever feel the way neurons do? It's all electrical impulses in the end, baby.

Annalee Newitz (sociablehuman@techsploitation.com) is a surly media nerd who likes her computer better than you. Her column also appears in Metro, Silicon Valley's weekly newspaper.
