Annalee Newitz

Does Obama Want to Replace Your Facebook Profile with Your Social Security Card?

Today U.S. President Obama announced plans for a "cyberspace strategy" that includes everything from possible offensive cyberwar strategies to education. It also contains a little-discussed "identity management" plan that makes me wonder if Facebook profiles are about to become the new Social Security cards.


My Last Column

I've been writing this column for nine years. I was here with you through the dot-com boom and the crash. I made fun of the rise of Web 2.0 when that was called for, and screamed about digital surveillance under the USA-PATRIOT Act when that was required (actually, that's still required). I've ranted about everything from obscenity law to genetic engineering, and I've managed to stretch this column's techie mandate to include meditations on electronic music and sexology. Every week I gave you my latest brain dump, even when I was visiting family in Saskatchewan or taking a year off from regular journalism work to study at MIT.

But now it's time for me to move on. This is my last Techsploitation column, and I'm not going to pretend it's not a sad time for me. Writing this column was the first awesome job I got after fleeing a life of adjunct professor hell at UC Berkeley. I was still trying to figure out what I would do with my brain when Dan Pulcrano of the Silicon Valley Metro invited me out for really strong martinis at Blondie's Bar in the Mission District and offered me a job writing about tech workers in Silicon Valley. My reaction? I wrote a column about geeks doing drugs and building insanely cool shit at Burning Man. I felt like the hipster survivalist festival was the only event that truly captured the madness of the dot-com culture I saw blooming and dying all around me. I can't believe Dan kept me on, but he did.

Since then, my column also found a home in the Guardian and at an online outlet, two of the best leftist publications I've ever had the honor to work with. I've always believed the left needed a strong technical wing, and I've tried to use Techsploitation to articulate what exactly it would mean to be a political radical who also wants to play with tons of techie consumerist crap.

There are plenty of libertarians among techie geeks and science nerds, but it remains my steadfast belief that a rational, sustainable future society must include a strong collectivist vision. We should strive to use technologies to form communities, to make it easier for people to help the most helpless members of society. A pure free-market ideology only leads to a kind of oblivious cruelty when it comes to social welfare. I don't believe in big government, but I do believe in good government. And I still look forward to the day when capitalism is crushed by a smarter, better system where everyone can be useful and nobody dies on the street of a disease that could have been prevented by a decent socialized health-care system.

So I'm not leaving Techsploitation behind because I've faltered in my faith that one day my socialist robot children will form baking cooperatives off the shoulder of Saturn. I'm just moving on to other mind-ensnaring projects. Some of you may know that I've become the editor of io9, a blog devoted to science fiction, science, and futurism. For the past six months I've been working like a maniac on io9, and I've also hired a kickass team of writers to work with me. So if you want a little Techsploitation feeling, be sure to stop by io9. We're there changing the future, saving the world, and hanging out in spaceships right now.

I also have another book project cooking in the back of my brain, so when I'm not blogging about robots and post-human futures, I'm also writing a book-length narrative about, um, robots and post-human futures. Also pirates.

The past nine years of Techsploitation would have been nothing without my readers, and I hope you can picture me with tears in my eyes when I write that. I've gotten so many cool e-mails from you guys over the years that they've filled my heart forever with glorious, precise rants about free software, digital liberties, sex toys, genetic engineering, copyright, capitalism, art, video games, science fiction, the environment, and the future -- and why I'm completely, totally wrong about all of them. I love you dorks! Don't ever stop ruthlessly criticizing everything that exists. It is the only way we'll survive.

Three Myths About the Internet That Refuse to Die

Since I started writing this column in 1999, I've seen a thousand Internet businesses rise and die. I've watched the Web go from a medium you access via dial-up to the medium you carry around with you on your mobile. Still, there are three myths about the Internet that refuse to kick the bucket. Let's hope the micro-generation that comes after the Web 2.0 weenies finally puts these misleading ideas to rest.

Myth: The Internet Is Free

This is my favorite Internet myth because it has literally never been true. In the very early days of the Net, the only people who went online were university students or military researchers -- students got accounts via the price of tuition; military personnel got them as part of their jobs. Once the Internet was opened to the public, people could only access it by paying fees to their Internet service providers. And let's not even get into the fact that you have to buy a computer or pay for time on one.

I think this myth got started because pundits wanted to compare the price of publishing or mailing something on the Internet to the price of doing so using paper or the United States Postal Service. Putting a Web site on the Net is "free" only if you pretend you don't have to pay your ISP and a Web hosting service to do it. No doubt it is cheaper than printing and distributing a magazine to thousands of people, but it's not free. Same goes for e-mail. Sure it's "free" to send an e-mail, but you're still paying your ISP for Internet access to send that letter.

The poisonous part of this myth is that it sets up the false idea that the Internet removes all barriers to free expression. The Internet removes some barriers, but it erects others. You can get a few free minutes online in your local public library, maybe, and set up a Web site using a free service (if the library's filtering software allows that). But will you be able to catch anyone's attention if you publish under those constraints?

Myth: The Internet Knows No Boundaries

Despite the Great Firewall of China, an elaborate system of Internet filters that prevent Chinese citizens from accessing Web sites not approved by the government, many people still believe the Internet is a glorious international space that can bring the whole world together. When the government of a country like Pakistan can choose to block YouTube -- which it has and does -- it's impossible to say the Internet has no boundaries.

The Internet does have boundaries, and they are often drawn along national lines. Of course, closed cultures are not the only source of these boundaries. Many people living in African and South American nations have little access to the Internet, mostly due to poverty. As long as we continue to behave as if the Internet is completely international, we forget that putting something online does not make it available to the whole world. And we also forget that communications technology alone cannot undo centuries of mistrust between various regions of the world.

Myth: The Internet Is Full of Danger

Perhaps because the previous two myths are so powerful, many people have come to believe that the Internet is a dangerous place -- sort of like the "bad" part of a city, where you're likely to get mugged or hassled late at night. The so-called dangers of the Internet were highlighted in two recent media frenzies: the MySpace child-predator bust, in which Wired reporter Kevin Poulsen discovered that a registered sex offender was actively trolling MySpace and befriending kids; and the harassment of Web pundit Kathy Sierra by a group of people who posted cruelly Photoshopped pictures of her, called for her death, and posted her home address.

Despite the genuine scariness represented by both these incidents, I would submit they are no less scary than what one could encounter offline in real life. In general, the Internet is a far safer place for kids and vulnerable people than almost anywhere else. As long as you don't hand out your address to strangers, you've got a cushion of anonymity and protection online that you'll never have in the real world. It's no surprise that our myths of the Internet overestimate both its ability to bring the world together and to destroy us individually.

Using Sci-Fi to Change the World

Every year in late May, several thousand people descend on Madison, Wis., to create an alternate universe. Some want to build a galaxy-size civilization packed with humans and aliens who build massive halo worlds orbiting stars. Others are obsessed with what they'll do when what remains of humanity must survive in the barren landscape left after Earth has been destroyed by nukes, pollution, epidemics, nanotech wipeouts, or some combination of all four. Still others live parts of their lives as if there were a special world for wizards hidden in the folds of our own reality.

They come to Madison for WisCon, a science-fiction convention unlike most I've ever attended. Sure, the participants are all interested in the same alien worlds as the thronging crowds that go to the popular Atlanta event Dragon*Con or the media circus known as Comic-Con. But they rarely carry light sabers or argue about continuity errors in Babylon 5. Instead, they carry armloads of books and want to talk politics.

WisCon is the United States' only feminist sci-fi convention, but since it was founded more than two decades ago, the event has grown to be much more than that. Feminism is still a strong component of the con, and many panels are devoted to the work of women writers or issues like sexism in comic books. But the con is also devoted to progressive politics, antiracism, and the ways speculative literature can change the future. This year there was a terrific panel about the fake multiculturalism of Star Trek and Heroes, as well as a discussion about geopolitical themes in experimental writer Timmel Duchamp's five-novel, near-future Marq'ssan series.

While most science fiction cons feature things like sneak-preview footage of the next special effects blockbuster or appearances by the cast of Joss "Buffy the Vampire Slayer" Whedon's new series Dollhouse, WisCon's highlights run toward the bookish. We all crammed inside one of the hotel meeting rooms to be part of a tea party thrown by the critically acclaimed indie SF Web zine Strange Horizons, then later we listened to several lightning readings at a stately beer bash thrown by old-school SF book publisher Tor.

One of the highlights of the con was a chance to drink absinthe in a strangely windowless suite with the editors of alternative publisher Small Beer Press, whose authors include the award-winning Kelly Link and Carol Emshwiller. You can genuinely imagine yourself on a spaceship in that windowless room -- or maybe in some subterranean demon realm -- with everybody talking about alternate realities, AIs gone wild, and why Iron Maiden is the best band ever. (What? You don't think there will be 1980s metal in the demon realm?)

Jim Munroe, Canadian master of DIY publishing and filmmaking, was at WisCon talking about literary zombies and ways that anarchists can learn to organize their time better, while guest of honor Maureen McHugh gave a speech about how interactive online storytelling represents the future of science fiction -- and fiction in general. Science fiction erotica writer/publisher Cecilia Tan told everybody about her latest passion: writing Harry Potter fan fiction about the forbidden love between Draco and Snape. Many of today's most popular writers, like bestseller Naomi Novik, got their start writing fan fiction. Some continue to do it under fake names because they just can't give it up.

Perhaps the best part of WisCon is getting a chance to hang out with thousands of people who believe that writing and reading books can change the world for the better. Luckily, nobody there is humorless enough to forget that sometimes escapist fantasy is just an escape. WisCon attendees simply haven't given up hope that tomorrow might be radically better than today. They are passionate about the idea that science fiction and fantasy are the imaginative wing of progressive politics. In Madison, among groups of dreamers, I was forcefully reminded that before we remake the world, we must first model it in our own minds.

How Do We Fight Corporate Control of the Internet?

Last week I wrote about the premise of Oxford professor Jonathan Zittrain's new book, The Future of the Internet and How to Stop It (Yale University Press). He warns about a future of "tethered" technologies like the digital video recorder and smartphones that often are programmed remotely by the companies that make them rather than being programmed by users, as PCs are. As a partial solution, Zittrain offers up the idea of Wikipedia-style communities, where users create their own services without being "tethered" to a company that can change the rules any time.

Unfortunately, crowds of people running Web services or technologies online cannot save us from the problem of tethered technology. Indeed, Zittrain's crowds might even unwittingly be tightening the stranglehold of tethering by lulling us into a false sense of freedom.

It's actually in the best interest of companies like Apple, Comcast, or News Corp to encourage democratic, freewheeling enclaves like Wikipedia or MySpace, because those enclaves convince people that their whole lives aren't defined by tethering. When you get sick of corporate-mandated content and software, you can visit Wikipedia or MySpace. If you want a DVR that can't be reprogrammed by Comcast at any time, you can look up how to build your own software TV tuner on Wikipedia. See? You have freedom!

Unfortunately, your homemade DVR software doesn't have the kind of easy-to-use features that make it viable for most consumers. At the same time, it does prove that tethered technologies aren't your only option. Because there's this little puddle of freedom in the desert of technology tethering, crowd-loving liberals are placated while the majority of consumers are tied down by corporate-controlled gadgets.

In this way, a democratic project like Wikipedia becomes a kind of theoretical freedom -- similar to the way in which the U.S. constitutional right to freedom of speech is theoretical for most people. Sure, you can write almost anything you want. But will you be able to publish it? Will you be able to get a high enough ranking on Google to be findable when people search your topic? Probably not. So your speech is free, but nobody can hear it. Yes, it is a real freedom. Yes, real people participate in it and provide a model to others. And sometimes it can make a huge difference. But most of the time, people whose free speech flies in the face of conventional wisdom or corporate plans don't have much of an effect on mainstream society.

What I'm trying to say is that Wikipedia and "good crowds" can't fight the forces of corporate tethering -- just as one person's self-published, free-speechy essay online can't fix giant, complicated social problems. At best, such efforts can create lively subcultures where a few lucky or smart people will find that they have total control over their gadgets and can do really neat things with them. But if the denizens of that subculture want millions of people to do neat things too, they have to deal with Comcast. And Comcast will probably say, "Hell no, but we're not taking away your freedom entirely because look, we have this special area for you and 20 other people to do complicated things with your DVRs." If you're lucky, Comcast will rip off the subculture's idea and turn it into a tethered application.

So what is the solution, if it isn't nice crowds of people creating their own content and building their own tether-free DVRs? My honest answer is that we need organized crowds of people systematically and concertedly breaking the tethers on consumer technology. Yes, we need safe spaces like Wikipedia, but we also need to be affirmatively making things uncomfortable for the companies that keep us tethered. We need to build technologies that set Comcast DVRs free, that let people run any applications they want on iPhones, that fool ISPs into running peer-to-peer traffic. We need to hand out easy-to-use tools to everyone so crowds of consumers can control what happens to their technologies. In short, we need to disobey.

Is the Creative Internet Just About Dead?

A couple of weeks ago I went to the annual Maker Faire in San Mateo, an event where people from all over the world gather for a giant DIY technology show-and-tell extravaganza. There are robots, kinetic sculptures, rockets, remote-controlled battleship contests, music-controlled light shows, home electronics kits, ill-advised science experiments (like the Mentos-Diet Coke explosions), and even a barn full of people who make their own clothing, pillows, bags, and more. Basically, it's a weekend celebration of how human freedom combined with technology creates a pleasing but cacophonous symphony of coolness.

And yet the Maker Faire takes place against a backdrop of increasing constraints on our freedom to innovate with technology, as Oxford University researcher Jonathan Zittrain points out in his latest book, The Future of the Internet and How to Stop It (Yale University Press). After spending several years investigating the social and political rules that govern the Internet -- and spearheading the Net censorship tracking project OpenNet Initiative -- Zittrain looks back on the Net's development and predicts a dystopian future. What's chilling is that his dystopia is already coming to pass.

Zittrain traces the Net's history through three phases. Initially it was composed of what he calls "sterile" technologies: vast mainframes owned by IBM, which companies could rent time on. What made those technologies sterile is that nobody could experiment with them (except IBM), and therefore innovation related to them stagnated.

That's why the invention of the desktop PC and popularization of the Internet ushered in an era of unprecedented high-tech innovation. Zittrain calls these open-ended technologies "generative." Anybody can build other technologies that work with them. So, for example, people built Skype and the World Wide Web, both software technologies that sit on top of the basic network software infrastructure of the Internet. Similarly, anybody can build a program that runs on Windows.

But Zittrain thinks we're seeing the end of the freewheeling Internet and PC era. He calls the technologies of today "tethered" technologies. Tethered technologies are items like iPhones or many brands of DVR -- they're sterile to their owners, who aren't allowed to build software that runs on them. But they're generative to the companies that make them, in the sense that Comcast can update your DVR remotely, or Apple can brick your iPhone remotely if you try to do something naughty to it (like run your own software program on it).

In some ways, tethered technologies are worse than plain old sterile technologies. They allow for abuses undreamed of in the IBM mainframe era. For example, iPhone tethering could lead to law enforcement going to Apple and saying, "Please activate the microphone on this iPhone that we know is being carried by a suspect." The device turns into an instant bug, without all the fuss of following the suspect around or installing surveillance crap in her apartment. This isn't idle speculation, by the way. OnStar, which operates a car emergency-response system, was asked by law enforcement to activate the mics in certain cars using its system. It refused and went to court.

Zittrain's solution to the tethering problem is to encourage the existence of communities like the ones who participate in Maker Faire or who edit Wikipedia. These are people who work together to create open, untethered technologies and information repositories. They are the force that pushes back against companies that want to sterilize the Internet and turn it back into something that spits information at you, television-style. I think this is a good start, but there are a lot of problems with depending on communities of DIY enthusiasts to fix a system created by corporate juggernauts. As I mentioned in my column ("User-Generated Censorship," 4/30/08), you can't always depend on communities of users to do the right thing. In addition, companies can create an incredibly oppressive tethering regime while still allowing users to think they have control. Tune in next week, and I'll tell you how Zittrain's solution might lead to an even more dystopian future.

User-Generated Censorship

There's a new kind of censorship online, and it's coming from the grassroots. Thanks to new, collaborative, social media networks, it's easier than ever for people to get together and destroy freedom of expression. They're going DIY from the bottom up -- instead of the way old-school censors used to do it, from the top down. Call it user-generated censorship.

Now that anyone with access to a computer and a network connection can post almost anything they want online for free, it's also increasingly the case that anyone with computer access and a few friends can remove anything they want online. And they do it using the same software tools.

Here's how it works: let's say you're a community activist who has some pretty vehement opinions about your city government. You go to Blogger, which is owned by Google, and create a free blog called Why the Municipal Government in Crappy City Sucks. Of course, a bunch of people in Crappy City disagree with you -- and maybe even hate you personally. So instead of making mean comments on your blog, they decide to shut it down.

At the top of your Blogger blog, there is a little button that says "flag this blog." When somebody hits that button, it sends a message to Google that somebody thinks the content on your blog is "inappropriate" in some way. If you get enough flags, Google will shut down your blog. In theory, this button would only be used to flag illegal stuff or spam. But there's nothing stopping your enemies in town from getting together an online posse to click the button a bunch of times. Eventually, your blog will be flagged enough times that Google will take action.
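The mechanism described above boils down to a counter and a threshold. Here's a minimal sketch of that logic; the class names, the threshold value, and the delisting behavior are all invented for illustration, since Google doesn't publish how many flags trigger action:

```python
# Hypothetical model of "flag this blog" moderation. The threshold and
# the delist-rather-than-delete behavior are assumptions, not Blogger's
# documented policy.

FLAG_THRESHOLD = 25  # assumed number of flags before action is taken


class Blog:
    def __init__(self, url):
        self.url = url
        self.flags = 0
        self.searchable = True  # findable via Blogger/Google search


def flag(blog):
    """Register one flag click; delist the blog once the threshold is hit."""
    blog.flags += 1
    if blog.flags >= FLAG_THRESHOLD:
        blog.searchable = False  # still online, but unfindable by search


blog = Blog("crappy-city-sucks.example.com")
for _ in range(FLAG_THRESHOLD):  # an organized posse clicking repeatedly
    flag(blog)

print(blog.searchable)  # → False
```

The point the toy model makes is that nothing in the loop checks whether the flags are legitimate: a coordinated posse is indistinguishable from genuine complaints.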

And this is where things get interesting. Google has the option of simply shutting down your access to the blog. They rarely do that, though, unless it's a situation where your blog is full of illegal content, like copyright-infringing videos. Generally what Google does if you get a lot of flags is make your blog impossible to find. Nobody will be able to find it if they search Blogger or Google. The only people who will find it are people who already know about it and have the exact URL.

This is censorship, user-generated style. And it works because the only way to be seen in a giant network of user-generated content like Blogger (or MySpace, or Flickr, or any number of others) is to be searchable. If you want to get the word out about Crappy City online, you need for people searching Google for "Crappy City" to find your blog and learn about all the bad things going on there. What good is your free speech if nobody can find it?

Most sites that have user-generated content, like photo-sharing site Flickr and video-sharing site YouTube, use a system of flags similar to Blogger's that allow users to censor each other. Sometimes you have to pick a good reason why you are flagging content -- YouTube offers you a drop-down menu with about 20 choices -- and sometimes you just flag it as "unsafe" or "inappropriate." Generally, most sites respond to flagging the same way: they make the flagged stuff unsearchable and unfindable.

This censorship doesn't work the old-fashioned way. Your videos and blogs aren't being removed. They're simply being hidden in the deluge of user-generated information. To be unsearchable on the Web is, in a very real sense, to be censored. But you're not being censored by an authority from on high. You're being censored by the mob.

That's why I find myself rolling my eyes when I hear people getting excited about "the wisdom of crowds" and "crowdsourcing" and all that crap. Sure, crowds can be wise and they can get a lot of work done. But they can also be destructive, cruel, and stupid. They can prevent work from being done as easily as they can help get it done. And just as the Web is making it easier for crowds to collaborate, the Web is also making it simple for mobs to crush free expression.

The Color Wars

Imagine that you had a group of friends and acquaintances you saw every day at school or at work, and one morning instead of saying "How are you?" they suddenly started saying, "Have you joined one of our teams yet?" At first, you would dismiss it as some dumb joke you missed on The Colbert Report the night before. But it keeps going: "I'm on white team. But Bob's on blue team," your pal says to you later. "Are you on the puce team?"

At this point, you truthfully believe that everybody has gone fucking crazy and that the people you thought were your friends are actually a bunch of kids living on that island from the movie Battle Royale (2000) where everybody has to kill one another for arbitrary reasons determined by a capricious authority figure who thinks he's a comedian.

This actually happened to me last week on a social network called Twitter, an online service that lets you send short messages to people on your list of friends. As you look at your Twitter "stream," you'll see your friends' names and short "tweets" about what they're doing or how they're feeling. When you work at home and don't have office pals to say hello to in the morning, Twitter is your surrogate office chit-chat zone. In the morning, I see my friends saying things like, "Yawn, I'm drinking coffee" or "Gotta finish this awesome project." In the evening, people will say, "Going to Sugarlump Café -- anyone want to come hang out?" Though I'm home on my computer, Twitter keeps me in touch with the social world.

But last week, for the first time, I felt like my friendly chat zone had become a freaky arena of prototribal warfare. And not the good kind of tribal warfare like in the recent flick Doomsday, with punk rock cannibals and Malcolm McDowell dressed as a medieval king. Everybody had started joining colored teams. I kept getting messages like the ones I described earlier, where people were saying, "I'm on blueteam! I'm on greenteam! What is your team?"

Finally, after hours of this, I typed a quick message to everybody: "I do not want to join a team." One of my friends replied, "OK, I've set up a team for you -- noteam! You can join that!"

No. I do not join colored teams. I don't join nonteams just to feel like I'm part of the team-joiners. I do not like when social spaces degenerate into meaningless competitions. It seems too much like Facebook.

I had to get to the bottom of what the hell was going on. After a few quick searches, I discovered that the color wars were started by a popular Web personality named Ze Frank, who is most famous for doing funny shit online and creatively promoting the hell out of it. He decided it would be fun to say he was on "blue team" and then see how many people would join it or join other teams in response. On his blog, he wrote that it would be just like summer camp, where everybody joined a colored team and played tug-of-war or egg toss.

Except there are no potato sack races on Twitter. It's a communications medium, not a freaking summer camp. I love the hell out of tug-of-war and summer camp, but if you want to do that, why not create a "summer camp" group on Twitter and get everybody to go out to the park, form teams, and do shit? And then -- post all of the photos on Flickr? Why divide a gregarious social space into meaningless factions?

The whole thing depressed me more than it should have because it confirmed my worst suspicions about humanity: one, that people will blindly do what a charismatic figure asks them to do even if it's stupid; and two, that in the absence of conflict, people will still race to form teams that fight each other for no reason. This team thing took over a huge portion of the Twitter social network within a day. It spread that fast -- as fast, perhaps, as our desire to form alliances based on conflict.

So forgive me if I can't think of Ze Frank's little game as something "fun," like summer camp. It was about as fun as the Stanford Prison Experiment, and just as revealing.

Virtual Revolution

One of the social traditions that's carried over quite nicely from communities in the real world to communities online is revolution. You've got many kinds of revolt taking place online in places where people gather, from tiny forums devoted to sewing, to massive Web sites like Digg devoted to sharing news stories.

And while they may be virtual, the protests that break out in these digital communities have much in common with the ones that raise a ruckus in front of government buildings: they range from the deadly serious to the theatrically symbolic.

How can a bunch of people doing something on a Web site really be as disruptive or revolutionary as those carrying signs, yelling, and storming the gates of power in the real world? By way of an answer, let's consider three kinds of social protest that have taken place in the vast Digg community.

According to Internet analysis firm ComScore, Digg has 6 million visitors per month who come to read news stories rounded up from all over the Web. About half of those visitors log in as users to vote on which stories are the most important: the ones with the most votes are deemed "popular" and make it to Digg's front page to be seen by millions. A smaller number of people on Digg -- about 10 percent -- choose to become submitters of stories, searching the Web for interesting things and posting them to be voted on -- in categories that range from politics to health. Digg's developers use a secret-sauce algorithm to determine at what point a story has received enough votes to make it popular and worthy of front-page placement.
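The promotion process described above can be sketched as a simple vote-threshold filter. Digg's real algorithm was secret and far more elaborate, so the fixed threshold and story data below are purely illustrative:

```python
# Toy model of front-page promotion: a story is "popular" once its votes
# cross a threshold. The threshold value is an assumption; Digg never
# disclosed its actual formula.

POPULARITY_THRESHOLD = 100  # assumed vote count needed for the front page


def front_page(stories):
    """Return stories whose vote totals cross the threshold, most-voted first."""
    popular = [s for s in stories if s["votes"] >= POPULARITY_THRESHOLD]
    return sorted(popular, key=lambda s: s["votes"], reverse=True)


stories = [
    {"title": "Net neutrality ruling", "votes": 240},
    {"title": "New planet found", "votes": 180},
    {"title": "My dumb blog", "votes": 12},
]
print([s["title"] for s in front_page(stories)])
# → ['Net neutrality ruling', 'New planet found']
```

A model this simple also shows why vote-buying works: the filter only counts votes, so 100 bribed clicks look exactly like 100 genuine readers.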

You can imagine that a community like this one, devoted to the idea of democratically generated news and controlled by a secret algorithm, might be prone to controversy. And it is.

Two years ago, I was involved in what I would consider one type of user revolt on Digg. It was a prank that I pulled off with the help of an anonymous group called User/Submitter. The group's goal was to reveal how easy Digg makes it for corrupt people to buy votes and get free publicity on Digg's front page. My goal was to see if U/S really could get something on the front page by bribing Digg users with my cash. So I created a really dumb blog, paid a couple hundred dollars to U/S, and discovered that you could indeed buy your way to the front page. Think of it as an anarchist prank designed to show flaws in the so-called democracy of the system.

But there have also been massive grassroots protests on Digg, one of which I wrote about in a column more than a year ago. Thousands of Digg users posted a secret code, called the Advanced Access Content System key, that could be used as part of a scheme to unlock the encryption on high-definition DVDs. The goal was to protest the fact that HD DVDs could only be played in "authorized" players chosen by Hollywood studios, which forced people interested in HD to replace their DVD players with new devices. It was a consumer protest, essentially, and a very popular one. Hollywood companies sent Digg cease-and-desists requesting that it take down the AACS key whenever it was posted, but too many users had posted it. There was no way to stop the grassroots protest. Digg's founders gave up, told the community to post the AACS key to their hearts' content, and swore they would fight the studios to the end if they got sued (no suit ever materialized).

Another kind of protest occurred on Digg just last month: a small-scale rebellion among the people who submit stories and are therefore Digg's de facto editors. After Digg developers changed the site's algorithm so that it was harder to make stories popular, a group of Digg submitters sent a letter to Digg's founders saying they would stop using the site if the algorithm wasn't fixed. You could compare this protest to publishing an editorial in a newspaper -- it reflected grassroots sentiment but was written by a small minority of high-profile individuals. Though the company didn't change its algorithm, this protest did result in the creation of town hall meetings where users could ask questions of Digg developers and air their grievances.

Each of these kinds of protests has its correlates in the real world: the symbolic prank, the grassroots protest, and the angry editorial. So forgive me if I laugh at people who say the Internet doesn't foster community. Not only is there a community there, but it's full of revolutionaries who fight for freedom of expression.

You Cannot Afford Mars

Mars used to teem with life, but now it's a dead world. I'm not referring to actual Martian history, which we still know very little about. I'm talking about the way humans used to think of Mars and how they think about it now. As recently as the 1950s, Mars was packed with scary, incomprehensible creatures and hulking buildings set in a web of gushing canals. But now it's a cold, dry land full of rocks that are fascinating mainly due to their extraterrestrial nature. We even have two robots who live on Mars, sending us back pictures of mile after mile of beautiful emptiness that looks like the Grand Canyon or some other national park whose ecosystem is so fragile that tourism has already half-destroyed it.

Mars has, in short, been demystified. It's not an exotic source of threat or imagination; it's a place to which President George W. Bush has vowed to send humans one day. And on Feb. 12 and 13, a conference was convened at Stanford University to discuss the feasibility of a United States-led mission that would send humans to the Red Planet. The attendees, mostly scientists and public policy types, were all pragmatism.

Reuters reports that consensus at the conference was that the National Aeronautics and Space Administration would need an additional $3 billion per year to plan for a Mars mission that would leave in the 2030s. (NASA's current budget is $17.3 billion per year.) So the question geeks like to ask one another -- "What would you take with you to colonize another planet?" -- now has a depressing and very non-science-fictional answer when it comes to Mars. It's $75 billion, paid out over the next 25 years.

But just to put things in perspective, a congressional analysis done in 2006 pegged the cost of the U.S. war in Iraq at $2 billion per week. Last year the total amount of money spent on the war surpassed $1.2 trillion.

So it's a hell of a lot cheaper to colonize Mars than it is to colonize our own planet. Still, it's too expensive. U.S. aerospace geeks are hoping that we can turn to Europe, Russia, and perhaps Asia to collaborate on a Mars mission because nobody expects that NASA will ever get even a sliver of the budget that the U.S. war machine does.

There is a tidy way to wrap this up into a lesson about how we're willing to spend more on destroying life as we know it than extending life to the stars. About how we'd rather burn cash on war than healthy exploration of other planets. But that's not the whole story.

Let's say the US government decides to leave Iraq alone and spends $2 billion per week on a mission to Mars instead. A mission that would culminate in a human colony. We could follow a plan somewhat like the one outlined in Kim Stanley Robinson's book Red Mars (Bantam, 1993), in which we first send autonomous machines to create a base and begin some crude terraforming. And then we send a small group of colonists, to be followed by bigger and bigger waves of colonists, who eventually live in domes. And who wage wars and rape the Martian environment.

I think the problem with colonizing Mars is that it would look all too much like colonizing Earth. We might even be killing a fragile ecosystem that we're not yet aware of. But most of us haven't demystified Mars enough to realize that. Sure, we know it's not packed with cool aliens, but we haven't realized that hunkering down on another planet isn't going to solve our basic problems as humans. On a planet, given the chance, we'll exploit all natural resources, including one another.

It's not that I'm against a mission to Mars. I just think getting the money for that mission is really the least of our problems. What I'm worried about is what humans tend to do with money when they aim it at something, whether that's a nation, a people, or a planet. Maybe it's better for Mars that we can't afford to go there.

Three Reasons to Hate Facebook

I know it's uncharitable of me to say I hate Facebook, because, after all, I have a Facebook profile and I log in to the infernal site several times a week. But I do hate it, and I'm not afraid to say why.

1. I don't want you to know who my friends are. Facebook is a second-generation social network, which means its developers have learned from the mistakes of early social networks like Friendster and MySpace. Like its predecessors, Facebook will give you a free profile page, where you can list as much information about yourself as you are willing to give up -- including what you've just bought online. As you make "friends," you link to their profiles and they link to yours.

Like its predecessors, Facebook is all about showing people who your friends are. And frankly, there are plenty of people I might want to connect with online who don't need to know about one another. It's not like I've got anything to hide, but even if I did, so the fuck what? Sometimes there are perfectly good reasons not to introduce all of my friends to one another.

I realize there are privacy restrictions on Facebook that allow me to hide my friend lists or make them only semivisible to people in networks, blah blah blah. But those are a pain in the ass to set up, and so, like most people on Facebook, I default to letting my friends see one another. I don't have to go around parties in real life advertising whom else at the party I know or have slept with. Why should I have to do so if I want to socialize online?

2. Too many annoying, inexplicable, and useless software applications are circulating on Facebook. Every time I log in to Facebook, I see in the upper right-hand corner of the screen all the "requests" and "pokes" and whatever the fucks I have from my social network. Many of these requests are generated by small software applications that people have written to run on top of Facebook. See, Facebook opened up parts of its system called application programming interfaces, or APIs, which allow anyone to write some dumb program that will send you crap.

Recently a number of those programs have allowed people in my social network to go through their friend lists and send automatically generated requests to join groups, take quizzes, or whatever. Here is the insane list I had in my requests bar: "1 gay request, 1 american citizen test request, 1 good karma request, 1 smartest friend invitation." And there have been so many others, like "hottest friends invitation," "zombie invitation," "vampire bite request," and "compare movie scores invitation." Yes, it sounds fun and whimsical at first, as if social relations have been turned into a fanciful playground. But then it all starts to feel like spam.

Usually, responding to requests requires you to sign up for something and give some information about yourself and download another piece of software. And why the hell do I want to answer a gay request from a zombie? I mean, that sounds good until you have to download unknown software from an unknown gay zombie. The fun turns out to be just noise. And there's nothing worse than noise in your personal profile space.

3. Facebook enforces social conformity. Some people are only figuratively forced to join Facebook, because if they don't it will be hard for them to network with friends and business associates. But I was actually forced to join by my employers, because we use Facebook as our employee contact list. OK, nobody pointed a gun at my head, but it was either join or be unable to access the contact information of anybody at my company. I'm not saying my company is evil or even wrong -- Facebook is a reasonable way of setting up an employee contact list for a company full of telecommuters.

It's just that being forced made me feel more than ever that Facebook is a tool of social conformity. The more public our friend lists are, the more we'll feel like we have to pick friends based on public opinion.

The Fragility of the Information Age

I was raised on the idea that the information age would usher in a democratic, communication-based utopia, but recently I was offered at least two object lessons in why that particular dream is a lie.

First, a dead surveillance satellite, one roughly the size of a bus, fell out of orbit and into a collision course with Earth. It will likely do no damage, so don't worry about being crushed to death by flying chunks of the National Security Agency budget. The important part is that nobody knew when the satellite died. Maybe a year ago? Maybe a few days? A rep from the National Security Council would only say, "Appropriate government agencies are monitoring the situation."

Is this our info utopia, wherein we literally lose track of bus-size shit flying through space over our heads? I mean, how many surveillance satellites do we have? It's not like I love the techno-surveillance state, but it is a little shocking that the SIGINT nerds who run it are so out of touch that they can't even keep track of their orbiting spy gear. Still, it's hard to be too upset when Big Brother isn't watching.

But that satellite could just as easily have been a forgotten communications satellite dive-bombing our atmosphere. And that would have sucked, especially since last week's mega Internet outage across huge parts of Africa, the Middle East, and Asia didn't bring down the global economy largely because people had satellite access to the Internet. This Internet outage, which took millions of people (and a few countries) off-line, happened when two 17,000-mile underwater fiber-optic cables running between Japan and Europe were accidentally cut. And this week, five more cables were mysteriously cut. No one is quite sure how they were severed, but it was most likely due to human error -- an anchor was probably dropped in the wrong place.

And so big chunks of Dubai went dark, as did many Southeast Asian countries. Businesses couldn't operate; people couldn't communicate. The people and businesses that were able to keep running were by and large the ones that didn't depend on cheap Internet services that use only one or two cables to route their traffic. It's cheaper to rent time on one cable, but if that cable is cut, you lose everything. Most customers don't research how their Internet service providers route Internet traffic across the Asian continent -- or across the Pacific Ocean -- so they don't realize their communications could be disrupted, possibly for weeks, if some drunken sailor drops anchor in the wrong spot.
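The economics described above can be made concrete with a back-of-the-envelope reliability calculation. The failure probability below is an invented figure, and real cable cuts are not fully independent events, but the sketch shows why a provider that leases several physically separate cables goes completely dark far less often than one that rents capacity on a single cheap route:

```python
# Back-of-the-envelope outage math for an ISP leasing undersea cables.
# p_cut is a hypothetical per-month chance that any one cable is severed;
# assuming independent failures, a total outage requires ALL cables cut.

def outage_probability(p_cut: float, n_cables: int) -> float:
    """Probability that every one of n independent cables fails."""
    return p_cut ** n_cables

p_cut = 0.02  # invented figure, not a real statistic
for n in (1, 2, 3):
    # prints 0.020000, then 0.000400, then 0.000008
    print(f"{n} cable(s): total-outage probability = {outage_probability(p_cut, n):.6f}")
```

Even under these toy numbers, adding a second route cuts the chance of a total blackout by a factor of fifty -- which is exactly the redundancy that cheap single-cable service gives up.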

In fact, few of us anywhere in the world consider the fact that our info utopia is a fragile thing based on networks that are both material and vulnerable. We think of the Internet as a world of ideas, a place "out there," unburdened by physical constraints. Even if you wanted to research which physical cables your ISP uses to route your traffic, it would be very difficult to do without a strong technical background and the help of the North American Network Operators' Group list, an e-mail list for high-level network administrators.

So why do a crashing spy satellite and a partly dark Internet mean we've entered the age of information dystopia? Quite simply, they are signs that our brave new infrastructure is failing around us even as we claim that it offers a shining path to the future. It's as if the future is breaking down before we get a chance to realize its potential.

But the information age doesn't have to end this way, in a world where tin-can-and-string network jokes aren't so funny anymore. There are a few simple things we could do. We could help consumers better understand what happens when they buy Internet access by showing them what routes their traffic might take and giving them realistic statistics about possible outages. People could then make better choices about which services to buy. And so could telcos and nations.

Why shouldn't we have solid research on which ISPs are most likely to suffer the kind of network outages we just witnessed from the severing of those two cables? Consumer groups could undertake this research. Or, since developing nations suffer more, perhaps the United Nations might want to conduct the investigation as a matter of Internet governance. We know where car traffic and sea traffic go. Why don't we know where Internet traffic goes?

Another thing we could do to stop the information dystopia is to cut down on spy satellites, but that, as they say, is another column.

A Polite Message from the Surveillance State

Say what you want about Google being an evil corporate overlord that steals all of your private data, turns it into info-mulch, and then injects it into the technoslaves to keep them drugged and helpless. There are still some good things about the company. For example, Google's IM program, Google Talk, sends you a warning message alerting you when the person on the other end of your chat is recording your chat session.

Just the other day I was chatting with somebody about something slightly personal and noticed that she'd suddenly turned on Record for our chat. I knew everything I was saying was being logged and filed in her Gmail. In this case I wasn't too concerned. For one thing, I wasn't saying anything I'd regret seeing in print. I'm used to the idea that anything I say on chat might be recorded and logged.

What was different about this experience was that Google warned me first -- told me point-blank that I was basically under surveillance from the Google server, which would automatically log and save that conversation. I appreciated that. It meant I could opt out of the conversation and preserve my privacy. It also meant that other people using Gtalk, who might not have had the expectation that all of their chat sessions might be recorded, would be enlightened.

It also reminded me forcefully that Google is a far more polite and privacy-concerned evil overlord than the United States government.

Right now members of Congress are trying to pass a law that would grant immunity to large telcos like AT&T that have been spying on their customers' private phone conversations and passing along what they've learned to the National Security Agency. The law, called the Protect America Act, would allow telephone and Internet providers to hand over all private data on their networks to the government -- without notifying their customers and without any court supervision of what amounts to mass wiretapping.

Last year the Electronic Frontier Foundation sued AT&T for violating the Fourth Amendment when a whistle-blower at AT&T revealed that the company was handing over private information to the NSA without warrants. That case has been working its way through the courts, and making some headway; in fact, it was starting to look like AT&T would be forced to pay damages to its customers for violating their rights. But the Protect America Act would stop this court case in its tracks by granting retroactive immunity to AT&T and any other company that spied on people for the NSA without warrants.

The whole situation is insane. First, it's outrageous that telcos would illegally hand over their private customer data to the government. And second, it's even more outrageous that when its scheme was discovered, the government tried to pass a law making it retroactively legal for AT&T to have broken one of the most fundamental of our civil rights: protection of our private data from the government.

Imagine what would happen if the phone and Internet systems in our country had the same warnings on them that Gtalk has. Every time you picked up the phone to make a call or logged on to the Internet, you'd get a helpful little message: "Warning: the government is recording everything that you are saying and doing right now." Holy crap.

The good news is that it's not too late. The Protect America Act must pass both houses of Congress to become law, so you can still alert your local congress critters in the House that you don't want retroactive immunity for telcos that are logging your private conversations for the NSA. Find out more at

And remember, everything you say and do is being logged. This polite message has been brought to you by the surveillance state.

What Happens When Blogs Go Mainstream?

Six years ago I wrote a column titled "Blog Anxiety," which was all about how bloggers make me nervous and jealous with their lightning-fast news cycles. I bemoaned my inability to commit words to public record without waiting for editorial oversight and without waiting for publication day (inevitably several days if not weeks after I had written those words).

I talked about how bloggers can cite sources they've talked to informally and how they seem blissfully unburdened by concerns about injecting a personal perspective into their writing. That was before It All Changed. And by "It All Changed," I don't just mean that I became a blogger, which I did. More profoundly, I mean that blogs themselves have changed.

They are not the subterranean upstart media without rules anymore. I'm certainly not the first person to observe that blogs are fast becoming indistinguishable from mainstream media, and indeed places like the New York Times and the Washington Post have blogs that are often more newsy than the papers themselves. This blurring between formerly mainstream media and formerly alternative media means that the upstarts are having to follow old-school rules.

While I can't speak for all bloggers, I prefer not to publish anything on my blog that hasn't been edited. I don't want readers to see my spelling errors and craptastic leaps in logic, thank you very much (of course you'll still see many, but not as many as you would if there were no edits). I also spend a fair amount of time on the phone or on e-mail interviewing sources for my posts, as well as doing research. And I won't publish anything that I think will get me sued, is libelous, or is just plain wrong, even if it's funny. What I'm saying is that my blog is not exactly the unedited, stream-of-consciousness outpourings of a person in pajamas. Well, OK, I am often in pajamas.

Recently I was reading a conversation thread on Metafilter, one of my favorite still-subterranean Web sites for smart talk and slagging. Somebody mentioned my science fiction blog, then snarked at me for starting a blog when I was on record saying that blogs freak me out. An unedited discussion full of spiky banter and maniacal analysis followed -- exactly the kind of conversation I once associated with all blogs. People were nastier than they would have been if writing for a mainstream publication, but the cool ideas-to-noise ratio was nevertheless far higher than you'd ever get in USA Today or CNN.

And this brings me to what scares me about blogs now. I worry that instead of taking the Metafilter ethos mainstream, many blogs are leaving it behind. That's not because we have editors or talk to sources -- I'm happy to see bloggers doing that. It's because our audiences are starting to be as big as those of the mainstream media, and the mainstream media have taught us to be afraid of saying what we really think to those audiences. They've taught us that we should tiptoe around hot-button issues like climate change and sex and delay publishing stories that might upset the government until such a time as the government is comfortable with those stories.

This is the source of my blog anxiety in 2008. Will blogs take on all the bad habits of the mainstream media, self-censoring when we should be publishing? Or will bloggers help the media progress just a little bit further toward independence of thought and bravery in publication?

It's still too early to tell. Even the most mainstream blogs don't suffer the same pressures that mainstream publications like the New York Times do. Blogs don't have the 100-year histories of many newspapers and magazines -- they don't have the huge staffs and long, elaborate relationships with corporations and governments and famous, influential people. And I am glad we don't have that history. I hope we can make our own, new history and shake up the way news is made and culture is analyzed. And then, in 30 years, I hope a new medium will come along and kick our asses too.

Technology in Wartime

War changes everything, including technology. In the United States we are roughly six years into what the Bush Administration calls the "war on terror," and what hundreds of thousands of soldiers know as the occupation of Iraq. Gizmos that a decade ago would have been viewed entirely as communications tools and toys are now potential surveillance and killing machines.

Don't believe me? Consider how much the Web has changed. Referred to naively ten years ago by Bill Clinton and Co. as the friendly, welcoming "information superhighway," the Web is now the NSA's surveillance playground. Last year, a whistleblower at AT&T revealed that every bit of internet traffic routed by AT&T was also being routed through an NSA surveillance system. Millions of innocent people's private internet activities, including online purchases and e-mail, were being watched without warrants.

Cuddly consumer robots epitomized by Sony's Aibo robot dog have changed too. iRobot, the US company that makes adorable Roomba vacuum robots, just announced a huge deal with the military to make reconnaissance and killing robots called PackBots for use in combat zones. Already, 50 PackBots have been deployed in Iraq and Afghanistan. These are the ground versions of unmanned aerial vehicles, remote-controlled spy planes that can also shoot weapons.

Tech security expert Bruce Schneier describes technology as having "dual uses": one for peacetime, and one for war. The Wii video game console, for example, is great for translating physical movements into movements on screen. That makes Wii great for party games where you swing your arms to move dancing penguins on the screen. It also makes a great interface for remote-controlled guns in a combat robot. Just move your arm to aim.

In a time of war, you can't enjoy a party game without thinking about how your game console could be used to kill people. I realize that sounds melodramatic, but looked at pragmatically, it's quite simply true.

Once you realize that every form of technology has a dual use, it becomes much easier to argue for ways of limiting the uses that aren't ethical or legal. Consider that a roboticized anti-aircraft cannon (similar to the PackBot) turned on its operators during a field exercise in South Africa last October, killing nine people before it ran out of ammo. The software error that led this robot to slaughter friendly soldiers is no different from errors that make your Roomba crash. What do we draw from this analogy? Perhaps robots that are perfectly legal as vacuums should be illegal on the battlefield? Perhaps no weapon should ever be completely autonomous like the Roomba is?

Questions like these led me and my colleagues at Computer Professionals for Social Responsibility (CPSR) to put together a conference at Stanford University on the topic of technology in wartime, focusing especially on ethics and the law. Coming up this January 26, the conference will be a day packed with talks and panels about everything from dual use technology (Bruce Schneier will be a keynote) to what happens when robots commit war crimes. We'll also hear from people who are appropriating military technologies for human rights causes: the very technologies that let military spies hide online also help human rights workers and dissidents hide online while still getting their subversive messages out.

We'll also have a panel on so-called cyberterrorism, or destructive hacks aimed at taking down a nation's tech infrastructure. But should fears of cyberterror lead to total government surveillance of the internet? Cindy Cohn, Electronic Frontier Foundation's legal director, will talk about how the NSA used AT&T to spy on US citizens, and the suit EFF has brought against AT&T for violating its customer's privacy rights.

If you want to find out how to change the way militaries are appropriating consumer tech, or just want to learn more about how war is changing the way we use technology, come out to Stanford on Jan. 26 for the conference. It's open to the public, and you can register at The cost of admission gets you a free lunch and a T-shirt, as well as a chance to talk to some of the smartest people in the field. See you there!

A Story of International Intrigue

The following story is not entirely made up. But it's fictional enough that if you think you recognize yourself or your friends, then you must be mistaken.

He had a vaguely European-sounding name and a vague job doing something with the United Nations, or perhaps one of its subcommittees or projects or councils. It sounded important because it had a lot of words in it, and one of those words was Internet. That's why Shiva met him. They were at some kind of after-conference party, or maybe it was midconference.

Anyway, it was for some center or special interest group at Harvard that was very concerned about the Internet in Africa. Shiva had come late in the afternoon to hear the keynote presentation, which wasn't actually related to Africa. It was delivered by someone whom she admired, a technologist with a social conscience who would have done something about Africa if he had had time after haranguing the United States government about putting its citizens under surveillance without warrants.

The keynote speaker talked rousingly about how easy it was for governments -- even ones in Africa, he was careful to add -- to spy on people's activities online. He talked about all of the great activist groups at Harvard and elsewhere around the world where smart geeks were figuring out ways to hide personal data from invasive states. He invited them all to help out by contributing to several open-source software projects, and then he invited them to the reception for wine and cheese.

There Shiva met the guy with the European-sounding name, who regaled her with stories about the wine in Spain and setting up wireless networks in Africa. He was so entertaining that she forgot to ask him which country in Africa, and then she consciously decided not to ask him since she knew so little about African geography that she might come across as exactly the sort of person who didn't belong at Harvard. At one point he mentioned Lagos, which she knew (to her relief) was in Nigeria.

One thing led to another, and they wound up at Shiva's lab at MIT because the European guy got really excited when she told him about her project on assembling virus shells for drug delivery. He would be leaving for Lagos in the morning, he told her, and she thought, "What the hell? I'm going to take this guy back to my lab and fuck him." And she did, and it was pretty hot, especially because he seemed so interested in her work. Before he left they exchanged e-mail addresses.

Lagos is one of the biggest cities in the world, but its exact population is unknown. A 2006 census claims the state of Lagos (which includes the city) has a population of nine million, but locals say those numbers are low and the real figure could be as high as 10 or 12 million. A city like that, whose population can't even be determined to the nearest million, is a good place to disappear.

But the European guy didn't disappear, and he would occasionally write Shiva e-mails from Lagos, forwarding her links about local politics or commenting on how locals ate this green stuff they called simply vegetable. He was setting up wireless networks and writing reports about them for his UN group or council or whatever. To get data in and out of the country, he wrote, he had to hide it on USB devices that looked like toy models of the TARDIS spaceship. People were so suspicious of anything that looked like a computer.

Eventually, the e-mails trailed off. He was in Switzerland, then Dubai, then Africa again. Never Cambridge. Shiva was busy prepping a paper for Nature, and then she was prepping for a conference. She hooked up with a couple of other people, started exchanging other flirtatious e-mails, then forgot about the European guy entirely.

Until one day she saw a picture of him on her favorite blog, right next to a post about how to make bicycles from foam. Apparently he'd been selling bioweapons information to groups variously labeled terrorists or insurgents, using his UN gig as cover. He had been teaching guerrillas about viruses. Nobody could figure out where he'd gotten his data. They figured it was a disgruntled Islamic militant somewhere, a person with a vendetta against the US government. Shiva never knew if it was her.

Annalee Newitz ( is a surly media nerd who doesn't know anything about virus shells and has never been to Lagos.

Why Buying a Nintendo Wii Is Worth It

For the holidays, I don't want to do something nice for the earth. I don't want to buy special laptops loaded with Western video games and imagery for kids in Africa without computers. I don't want to get handmade iPod covers from the Etsy online store that nurtures local craftspeople. And I don't want to go off-line for a day to commune with people in the real world.

I want toxic Chinese toys covered with paint that will make me hallucinate. I want a sparkly-crap mobile phone that will break within a week and turn into circuit-board garbage that cannot be recycled and will therefore be shipped to developing countries where it will be hacked and resold. I want a media device that's wrapped in so many layers of plastic and nonrecyclable material that the very act of opening it is like smashing my carbon footprint onto the face of Mother Earth. I want a useless gizmo mass-produced by machines that stole jobs from nonunionized workers who stole jobs from the natives.

In short, I want a Nintendo Wii.

It's the biggest-selling video game console ever, and it's made from so much biosphere-destroying garbage that I'll be scrubbing methane out of the air for the rest of my life to make up for even thinking about owning one. Plus, Wii controllers are motion sensitive, which means they strap onto your body. Every time I use my Wii -- which, I would like to underscore, I do not yet own -- I will be turning myself into a literal extension of my machine.

Do you hear that, hippies? I want to strap electronics to my body and trance out to violent imagery while I wave my arms around, killing imaginary things. That's what I want to do for the holidays.

But the Wii isn't just a consumer electronics death monster. It's also something I think everyone should own or at least try out, because it truly represents the future of technology. The fact that people can now interact with a video game simply by waving their arms -- and the video game "sees" the waving and responds to it onscreen -- is revolutionary.

There's a good reason why Wiis are popular with people of all ages, unlike most game systems. They respond to natural human movement rather than force people to learn elaborate combinations of buttons and knobs on bizarrely shaped controllers. The Wii is a machine made for humans.

Already those humans are figuring out ways to repurpose the Wii and make it work with other kinds of devices. There's a Wii DJ (called, of course, WiiJ) who uses his Wii controllers to cue up and mix tracks on a computer. Somebody else is using a Wii controller to operate Bluetooth devices. And so on. The point is, the Wii is cool not because it's a video game system but because it's introduced a new way of interacting with computers. If you want to know what a home computer setup will look like in 10 years, play with a Wii. Your mouse will soon be replaced with a motion-based setup. You'll point with your finger and click by tapping two fingers together. Or by saying click.
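Under the hood, the Wii's arm-waving interface boils down to reading a three-axis accelerometer and thresholding the numbers. Here's a minimal Python sketch of that idea; the calibration constants and the gesture threshold are made-up illustrative values, not real Wiimote hardware data.

```python
# Minimal sketch of Wii-style motion sensing: the controller's 3-axis
# accelerometer reports raw byte values, and software converts them to
# g-forces using calibration constants (the raw readings at zero g and
# one g). These numbers are illustrative, not real hardware values.

ZERO_G = (128, 128, 128)   # raw reading per axis when that axis feels 0 g
ONE_G = (154, 154, 154)    # raw reading per axis when that axis feels 1 g

def raw_to_g(raw):
    """Convert a raw (x, y, z) accelerometer reading into g-forces."""
    return tuple(
        (r - zero) / (one - zero)
        for r, zero, one in zip(raw, ZERO_G, ONE_G)
    )

def detect_swing(readings, threshold=2.0):
    """Flag a 'swing' gesture when total acceleration exceeds a threshold."""
    for raw in readings:
        gx, gy, gz = raw_to_g(raw)
        magnitude = (gx**2 + gy**2 + gz**2) ** 0.5
        if magnitude > threshold:
            return True
    return False

# A controller at rest reads roughly 1 g (gravity); a hard swing spikes higher.
resting = [(128, 128, 154)]
swinging = [(128, 128, 154), (200, 190, 210), (130, 128, 155)]
print(detect_swing(resting))   # False
print(detect_swing(swinging))  # True
```

Homebrew hacks like the WiiJ's rig read these raw bytes off the controller's Bluetooth connection and apply roughly this kind of convert-and-threshold logic to turn arm motion into commands.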

I don't mean to romanticize the Wii, because it is, after all, just another thing with built-in obsolescence. It's a toy you'll throw away without thinking, consigning it to an unknowable half-life as indigestible silicon shards. It sucks when great future innovations are doomed to become garbage that may last longer than the benefits of the innovation itself.

But if the holidays are a time of reflecting on the past and the future, you might as well hang out with your friends and play Guitar Hero on the Wii. After all, donating to cool charities and supporting local artists is something you should be doing all year. You should buy a cute present for your sweetie from Etsy when it strikes your fancy, not just when the capitalist juggernaut tells you to. And, of course, you should never be off-line for a day. That's just taking things too far.

Who to Vote For When the Top Candidate Stinks

Last week's mayoral election in my hometown of San Francisco was one of those weird moments that make you think you're living in a Philip K. Dick novel, looking at hundreds of alternate futures peeling away from the present like little slivers of psychosis. It was a dismal election, in which the incumbent, conservative-for-San Francisco Gavin Newsom, was the only candidate who had any hope of winning. He was practically unopposed, though technically a cornucopia of candidates ran against him, spanning the gamut from qualified but unpopular to completely unqualified and silly.

Things being what they are, the silly candidates got the most attention (albeit not most of the votes). Some guy named Chicken, known mostly for his participation in the art festival Burning Man, ran on a campaign pushing people to vote for him as their second choice, since San Francisco has ranked-choice voting. He definitely had great posters, given his connection to the arts community, but not much of a platform.
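San Francisco's ranked-choice system is an instant-runoff tally: if nobody wins a majority of first choices, the last-place candidate is eliminated and those ballots transfer to each voter's next surviving choice. Here's a simplified Python sketch; the candidate names are invented, and real elections add rules this omits (tie-breaking procedures, a three-ranking limit on the ballot).

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked ballots: repeatedly eliminate the last-place candidate
    and transfer those ballots to their next surviving choice, until one
    candidate holds a majority of the remaining votes."""
    candidates = {name for b in ballots for name in b}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate;
        # ballots with no surviving choices are exhausted and drop out.
        tally = Counter(
            next(name for name in b if name in candidates)
            for b in ballots
            if any(name in candidates for name in b)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        # Eliminate the candidate with the fewest votes this round.
        candidates.discard(min(tally, key=tally.get))

ballots = (
    [("Incumbent",)] * 3
    + [("Challenger", "Chicken")] * 3
    + [("Chicken", "Challenger")] * 2
)
print(instant_runoff(ballots))  # Challenger: Chicken's votes transfer
```

This is why a "vote for me second" campaign isn't pure comedy: second choices only matter once their first choice is eliminated, but in a crowded field they can decide the winner.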

Then there was the sex club owner Michael Powers, who ran on a platform I never quite understood. Powers does have one of the nicest sex clubs I've ever seen, called (appropriately enough) the Power Exchange, and I wondered briefly if that might qualify him to run the city. But in the end, he got the fewest votes. And Chicken did not come in anywhere near second.

As I said, there were a few candidates, like Quintin Mecke, with relevant experience, but none had big enough constituencies to pull off a win. So when it came time to fill in my ballot, I voted for a guy who isn't a joke and has the kind of political experience that might get him elected in 2035: Josh Wolf.

Media geeks may remember Wolf as the blogger who was sent to prison for refusing to identify for police some protesters in a video he posted of a political demonstration that turned violent. After he got out of prison he went on the Colbert Report, where he came across as well-intentioned, with a burning passion for free speech. In the mayoral race, he ran on a platform that emphasized open democratic processes and a good wi-fi plan for the city. Nobody in his campaign thought he would win, and indeed he garnered only about 1,500 votes. But that's saying something in an election with only 17 percent turnout.

So why didn't I vote for somebody like Mecke, who had a good position on dealing with homelessness and had already done some work in city politics? Because, as I said, I felt like I was in this Dick novel looking into a zillion possible futures right there in the polling place.

There were the sure-to-fail futures represented by good candidates with no hope of winning, and then there was the dark future of creepy joke candidates like Chicken, whose mockery of the voting process was probably part of why so few people turned out for the election. Why vote when running for mayor had been turned into a joke?

So I voted for the best possible future I could find, the future in which, eventually, smart young people who care about freedom of expression online become mature politicians who understand new technologies and the socioeconomic conditions associated with them.

Maybe Wolf won't grow into that politician, but somebody like him will. And that person will probably understand things like how to organize Internet access for low-income city residents and why entertainment companies shouldn't be allowed to sue people for hundreds of thousands of dollars because they've been file-sharing. That person will also understand how easy it is to violate people's privacy online and will push for regulations that prevent companies and governments from dipping into private digital data supplies.

Of course, the future in which we have politicians like Wolf may never happen. We can't predict what will become of him, and we can't know if digital natives will mature into progressives who care about access and privacy reforms. There's always room for wired neocons and digital Puritans, whose intimate history with the Internet will make them particularly good at legislating censorship purges and invasive data mining. That's not the future I voted for, but I am always having to remind myself that's the future I may get.

Consumer Biotech

When will we tire of the endless scandals over bricking iPhones, RSI-causing Wiis, and PlayStation shootings? I think the time is coming soon, my friends. In fact, the whole consumer electronics craze is about to die off and give birth to a new home-tech phenomenon. I refer, of course, to the consumer biotech revolution that's just on the horizon.

Consumer biotech isn't a new idea. Home pregnancy tests are a form of consumer biotech, as are Viagra and Prozac. Many diabetics administer insulin using small computers that measure their blood sugar levels and deliver appropriate doses when necessary. I call this stuff consumer biotech because it measures and alters biological states for the mass market. And when smart phones become as boring as dumb ones, the lust for cool new biotech will replace the lust for new game consoles. Here are a few ideas about what will happen when consumer biotech goes beyond medical devices and into the realm of entertainment.

DNA Crystal Ball

Already people are jumping at the chance to get their genome sequenced using cheapo online services. Meanwhile, scientists at the Georgia Institute of Technology have invented a biosensor for identifying viruses that's the size of an attaché case. So it shouldn't be long before a company develops handhelds that identify sections of your DNA that offer hints of your distant parentage as well as what kinds of characteristics you're likely to develop as you age. Of course, nobody really cares about the science behind this crap -- they just want to be told a cool story that predicts what will happen to them based on their allele configuration. Thus Mattel will offer the DNA Crystal Ball, a little computer that will spit out pseudoscientific "predictions" about you based on poorly researched genomics studies. If you have this or that allele, you might become an artist! Or you might be quick to anger. Your ancestors might have been Indian princesses or African warriors! Since the device will be sold purely "for entertainment," it won't give you, for instance, valuable information about a predilection for breast cancer. But you'll metastasize happily knowing you've got the "gene" for friendliness.


Clonies

Kids love Shrinky Dinks, the plastic toys you color and stick in the oven, shrinking them into hard little plastic ornaments. So why not do the same thing with tissue engineering? Using techniques already perfected by a bunch of Australian tissue artists from a lab called SymbioticA, kids will create wee "clonies," tiny versions of themselves grown from their own skin cells using tissue-engineering scaffolds. Just culture a bit of your skin and grow it in a petri dish while you build a little model of yourself out of the foamy scaffold. Once you've got a few inches of skin, drape them on the scaffold, let them grow for a few days, and presto! A tiny version of you, made of your own skin! You'll get days of fun, and then you can dispose of the clonie in a handy biohazard container (sold separately). Try it with your dog, and your friends!

Gene Expression Jam Session

Remember how cool Garage Band was back when people thought playing with computer networks was as fun as playing with cellular signaling mechanisms? Jim Munroe has predicted that in the future every kid will have an Easy-Bake Oven for growing new animals, but Gene Expression Jam Session will be way cooler. Mix and match the genes of your choice using an easy user interface and rewrite your biology on the spot. Want to glow green for the evening or sprout hair all over your body? How about growing an extra pair of arms on your torso? Gene Expression Jam Session will produce the genes you need to do it, enclose them in a nifty virus-shell vector for quick delivery to your DNA, and shoot 'em right into your arm for fast-acting fun! Once you're sick of your newly engineered appearance, you can buy a plug-in that reverses the effects of your newly added genes or adds extra genes to make you look even wilder!

And don't get me started on the consumer nanotech revolution. You haven't truly lived until you've turned your pet goldfish into a golf ball.

Always Away on Instant Messenger

My social world is divided into two camps: people who use instant messaging and people who don't. When I start my workday by booting up my computer, I consider myself to have arrived at the office when my IM program comes to life and is suddenly populated by dozens of tiny names and faces. In fact, it's sometimes hard for me to work with people who aren't on IM. E-mail just isn't fast enough. And the telephone is too fast.

I find meetings on the phone frustrating because I can't multitask easily while talking. Sure, I can check e-mail or browse the Web, but usually the person on the other end of the line notices. All of those awkward pauses between sentences make it obvious that I'm only giving this call 85 percent of my attention. That's considered rude on the phone, but not so with IM. Sometimes I'll be exchanging a flurry of messages with a colleague on IM when suddenly she'll take five minutes to answer a question. And that seems normal. She's dealing with another task and will get back to me when she can, and we'll resume where we left off.

Although IM technology has been around for years, I feel like it's reached a kind of singularity that early users of "chat" would hardly recognize. There's an etiquette culture that's grown up around IM, a set of appropriate and inappropriate behaviors that varies across groups of IM users. For example, most of the people I talk to via IM are colleagues. I work from home, so most of my human contact during the day comes via quick exchanges and meetings on IM. Nearly everyone on my IM list has their status set to "away," which is technically supposed to mean they're not at the keyboard. But in reality most of us set our status to away because we're at work and don't want to be disturbed by random people or purely social messages.

That's why every time I IM somebody who claims to be away, I discover they aren't. Acknowledging this, we add custom messages to our away flags to tell the truth about our status; "work only pls" is a common message, as is "on deadline do not disturb unless urgent." Other people set their messages to explain where they are: "in a meeting" or "in New York" or "eating lunch." What's great about the away flag, though, is that it gives you plausible deniability if you don't want to talk to somebody who has messaged you. After all, you might really be away. Who knows?

For a couple of years Sun Microsystems researcher Nicole Yankelovich has been studying the habits of people like me who work remotely. What she's discovered is that people who don't work in a physical office tend to miss the casual chatter and bonding that happen before meetings or at lunch. These social interactions wind up improving work flow because people come up with good ideas while chatting casually, and brainstorming is easier in an informal environment. IM is how many of us are filling the gap. IM is our office space, where work chatter can become casual chatter. Like a closed office door, the away flag means "Please knock." And once you're in the office with the person, you can have a pretty interesting talk, even though you're supposed to be concentrating on your work.

It's funny how software that was first used primarily as a goof-around, social tool has become a way for people to have business meetings and talk shop.

Other groups of people who IM, however, do it mostly for social reasons. These people are generally flagged "available," and they have vast contact lists that look more like MySpace friend lists than office contact sheets. Occasionally, these social IM users and I have passed in the night, as it were: one of them will casually message me because they don't consider it weird to approach a stranger on IM to chat. For them, IM is like a giant nightclub or a college campus. Usually my away flag wards these people off, but sometimes it doesn't, and I have to politely tell them I'm busy. And I frankly refuse to respond to a repeated "Heya wassup?" from anybody whose name is something like SFKitty233. Unless, of course, SFKitty233 happens to be my colleague. Which she just might be.

To See or Not To See Violence

I did not know the screaming man, nor did I know what country he was in. My view of him was shaking -- the video was probably taken with a cell phone or cheapo digital camera with limited vid capability. Suddenly another man came into the frame and cut the first man's throat, which didn't stop the screaming but instead turned it into a horrible, high-pitched wheezing. Eventually he sawed off the rest of his victim's head and threw it around a little bit just for good measure. I had to stop watching, so I killed the tab in my browser.

My first thought was: what the fuck? And then, as the nausea subsided: what the fuck are these people trying to prove by killing a man like this? I was hungry for context.

The next day, I found myself asking more questions, but not about the motives of the murderers. Instead, I wondered about the communications technologies that allowed me to see that video in the first place. A group of bloodthirsty guys had to have handheld video-capture devices, video editing software, and a high-speed Internet connection to upload the finished product. Then they had to host the video somewhere that anybody could see it. In this case, that somewhere was the Internet Archive, a nonprofit organization in San Francisco devoted to the preservation of history in digital form.

Most of the Internet Archive ( is organized as a physical-world archive would be: curators like film historian Rick Prelinger donate rare and antique collections of media that they've digitized, and the archive makes them available to the world. But archive founder Brewster Kahle has a populist streak. He believes the public should have a say in what gets preserved in the historical record, so he invites the public to contribute. That's why the Internet Archive has a small area on its Web site called the Open Source Movie Collection, where anyone can archive his or her media.

Kahle wasn't expecting to host raw war footage when he created the open source collection. But curator Alexis Rossi says the archive receives about 30 to 50 Arab-language videos per day that are related to the Iraq war. "About two or three per week are really violent," she adds. "They are taxing to watch." Kahle, for his part, wasn't sure what to do about them. They are undeniably a legitimate part of the historical record of the war and other conflicts in the Middle East. Watching them provides people in the West with a rare opportunity to see what Iraqi groups, including terrorists, are saying about themselves.

These videos don't threaten national security, and they aren't illegal because obscenity laws apply only to sexual content. So Kahle's worries are purely about social good. Though these videos form a crucial part of the historical record of the war, something about them seems just, well, wrong. Then again, who is to say what is wrong in this case? War is brutal and deadly -- hiding that fact isn't going to help us achieve peace.

After agonizing over how to deal with the archive's growing collection of war videos and consulting with experts, Kahle has come up with a solution that satisfies both his archivist and populist sides. He's planning to set up a system on the archive that will allow users to post warnings about violent footage. These warnings will show up before other people see the videos; this way, the community can warn its members not to watch unless they are prepared for extremely graphic content. Rossi also hopes that the Internet Archive community will get involved in other ways too. "I'd love somebody to translate some of these videos for us," she says. (You can find many of the Arab-language videos in the archive's Open Source Movie Collection.)

That warning policy is similar to community-policing systems on the movie-sharing site YouTube. The difference is that the Internet Archive -- unlike YouTube -- will rarely remove a video. Kahle is committed to preserving history in all its forms, even the ugly ones. It's a lesson he thinks the mainstream media, with its whitewashed coverage of the war, would do well to learn. If we don't remember the past, we're doomed to repeat it.

The Secret Messages NASA Sent to Aliens

As annoying as hippies can be, it's strangely comforting to think that the one bit of junk we shot into deep space is emblazoned with a hippie symbol. I'm talking about the golden records screwed onto the shells of Voyagers I and II, two space probes that completely changed our understanding of the solar system and then shot out into deep space bearing record albums intended for alien consumption.

Last week marked the 30th anniversary of the Voyager II launch. While most people recall the Voyager probes for creating close-up photographs and atmospheric readings from Jupiter, Saturn, Uranus, and Neptune, these probes were always intended to do more than send messages back home for human consumption.

In the mid-1970s, when the Voyager spacecraft were being completed, pop cosmologist Carl Sagan convinced NASA to include a message from Earth on the probes. They were to bring news of us to alien beings in the unknowable reaches of the galaxy and beyond.

In consultation with a bunch of other geeks (including Timothy Ferris, who produced the album), Sagan decided that the delivery mechanism for this message should be a golden record, packaged with a cartridge and needle, as well as abstract mathematical instructions for how fast to spin the disc and at what frequencies it would emit sound. You can listen to the entire recording online, and the experience is bittersweet, an auditory glimpse of a very different time in human history.

The tracks include greetings in dozens of languages, including ancient Sumerian, which of course nobody knows how to pronounce anymore. And Gaia help us, there is also a "whale greeting." There is a track devoted to "Earth sounds," all of which sound totally cool while remaining unrecognizable as particularly Earthly. There are over a dozen music recordings from around the world, all of which are written (and mostly performed) by men.

Most are from the West, with a few Russian numbers thrown in -- probably for "diversity." Bach is presented alongside Chuck Berry, Navajo chants beside Beethoven. It's a Sesame Street notion of pluralism, with an emphasis on music and greetings rather than political speeches or academic treatises on economics.

Also included on these records are directions to Earth, using nearby stars as navigation points.

The golden records imply that music, math and images are universal symbolic systems, the best kind for communicating with beings radically different from ourselves. This is an idea that was popular in the 1970s -- Steven Spielberg immortalized it in Close Encounters of the Third Kind, in which humans meeting aliens establish communication via electronic sounds.

But as American historian Karen Ordahl Kupperman has pointed out, the idea that music (and the math underlying it) is a universal form of communication also comes from centuries-old encounters between Europeans and natives in the Americas. Early European explorers recount communicating with natives via music upon first meeting and reaching an understanding on that basis.

Music may be a near-universal form of communication among humans, and there is something glorious and touching about trying to share that with other creatures in space. Of course, the notion that aliens might share the idea of "hearing" with us is profoundly silly. What if these are creatures who communicate via molecular manipulation, or chemical signatures? What if they live in vacuum, and therefore cannot "hear" at all?

So yeah, the golden record is species-centric. It's also naively specific to one culture, for who can think of a golden record full of Western music as anything but the work of hippie liberal white dudes? Still, I'd rather be represented by its naive utopianism than by most of the signals shooting off this planet.

No doubt the golden record will bemuse any alien life that actually bothers to examine the goo on a piece of space junk. But a bemused alien may in fact be the one who comes closest to guessing the true meaning of the golden record, and perhaps the true meaning of human life itself. And so it seems fitting that our one letter to the universe reads something like this: We have no idea what we're doing, but we sound good! Wish you were here.

Anti-authoritarian Cities

Archaeologists have discovered that one of the oldest urban areas in the world was built in a way that completely defies conventional wisdom about how cities grow. Last week, a group of researchers working in Syria published a paper in Science about a 6,000-year-old city called Brak, a once-thriving urban center in northern Mesopotamia (now northern Syria, near its border with Iraq). It has long been believed that cities begin as dense centers that grow outward into suburbs. Brak, however, began as a dispersed group of settlements that formed a rough ring shape and gradually grew inward to form an urban center that boasted a massive temple and a thriving import-export trade in tools and pottery.

Built sometime between 4200 and 3900 B.C.E., Brak eventually grew to 130 hectares, making it one of the largest known cities of its era, surpassed in size only by Uruk in southern Mesopotamia. Long buried under a tell -- a hill of accumulated debris and layers of urban development -- Brak's early history has only recently been revealed. Archaeologists are racking their brains to figure out why the place doesn't follow known patterns of city growth.

Harvard anthropologist Jason Ur, one of the researchers investigating Brak, writes that Brak's history might mean that urban development can be an emergent property of many immigrant groups coming together in the same region. Early cities might have spontaneously arisen via the development of a handful of neighboring villages. This flies in the face of older theories, which hold that cities develop when a ruling elite or a set of dominant institutions creates a dense, hierarchical city center whose culture and populations spread outward from it.

Brak reflects a "bottom-up" model of city evolution, in which a central political power is built slowly out of diverse cultural interests. According to Ur and his colleagues, Brak's layout "suggests both dependence on but some autonomy from the political power on the central mound ... This pattern suggests a greater role for noncentralized processes in the initial growth of Brak and lesser importance for centralized authority."

Short of hopping into a time machine, we'll never know for certain if Brak's layout reflects its citizens' attitudes toward centralized authority or not. What we do know, however, is that not all cities develop in the same way. More important, we now have social scientific theories to explain why that might be -- theories that can help us understand urban development in the contemporary world as well. Cities today could be evolving from decentralized areas, out of geographically dispersed groups that do not view themselves as subject to one dominant set of institutions.

Brak might be an example of how cities would grow if neighborhoods had more social and political autonomy. What if, for example, we considered a city to be a federation of neighborhoods rather than a downtown with satellite suburbs? Would that be the foundation for an anti-authoritarian cityscape?

To answer, let's contemplate a possible modern-day version of Brak: Silicon Valley, a region of dispersed urban centers that have formed a loose economic federation via the computer industry and proximity to freeways. Residents of Silicon Valley may not consider the area to be one city, but it's very possible that people excavating the toxic remains of Silicon Valley 6,000 years from now will interpret the area as a single urban unit.

Like Silicon Valley dwellers, residents of Brak may have thought of themselves as living in a series of linked villages, united by nothing more than a system of commerce and a set of roads. Does this mean that authority in Brak was truly decentralized? Maybe the urban development of Brak belies a deeper truth, which is that centralized authority does not always reveal itself via tall buildings at a city's core. An entire region may bow to the authority of an economic system that stratifies it into rich and poor, and yet never have a common culture. Decentralized urban spaces are not necessarily anti-authoritarian ones.

The Trouble with Anonymity on the Web

Pundits of the Internet age are fond of excoriating the Web because anyone can post on it anonymously. Andrew Keen, whose recent book Cult of the Amateur is a good primer on why people hate the Web, highlights the horrors of anonymity in his work, contrasting the millions of unnamed Web scribblers with honorable, properly identified writers of yesteryear. Keen's point is that people who don't put their names on what they've written don't feel responsible for it; therefore they feel little compunction about lying or misrepresenting their chosen subjects. After all, an anonymous writer doesn't have to worry that their reputation will be tarnished -- unlike, say, a writer at The New York Times, whose byline appears on his or her articles.

Every social stereotype has a caricature associated with it, and the "anonymous Web writer" has theirs. They're always portrayed as a he, first of all. And he's inevitably described as being "some blogger writing in his basement in his pajamas." In other words, this anonymous person is not a professional (hence the pajamas) and probably poor (he lives in a basement). He's a nobody, a loner who lashes out at the world from his dismal cell, hiding behind his anonymity and destroying the good reputations of nice people.

Where does this sad little man like to post his anonymous invective? Wikipedia, of course. He can change any entry without leaving his name, adding lies to biographies of innocent mayoral candidates and spewing spam all over facts. And the best part is that most people take Wikipedia seriously. They regard it as a reliable source of knowledge, despite the fact that it's written by unknown, basement-dwelling bloggers in pajamas.

That's why I was so gratified when California Institute of Technology grad student and mad scientist about town Virgil Griffith released his software tool Wikiscanner, which you can use to quickly check on who has been editing Wikipedia entries anonymously. You see, whenever you edit a Wikipedia entry, the encyclopedia logs your unique IP address, which can often be tracked back to a physical location, including your place of employment. Even if you think you're being stealthy with your anonymous writing, you're not. Wikipedia sees all.

And now the public can see all if they visit Griffith's Wikiscanner site. Turns out that all the anonymous propaganda and lies on Wikipedia aren't coming from basement dwellers at all -- they're coming from Congress, the CIA, The New York Times, The Washington Post, and the American Civil Liberties Union. Somebody at Halliburton deleted key information from an entry on war crimes; Diebold, an electronic-voting machine manufacturer, deleted sections of its entry about a lawsuit filed against it. Someone at Pepsi deleted information about health problems caused by the soft drink. Somebody at The New York Times deleted huge chunks of information from the entry on the Wall Street Journal. And of course, the CIA has been editing the entry on the Iraq war.

Wikiscanner allows you to search millions of edits, perusing a precise record of all the changes that have been made. While you can't figure out exactly who at the CIA made the changes to the entry on the Iraq war, you can be sure the changes came from somebody on the CIA's computer network.

Griffith created Wikiscanner for a frankly political reason. As he told the Times of London, he did it "to create minor public relations disasters for companies and organizations I dislike." In the process, however, he's revealed something far more fundamental than the fact that acolytes of Pepsi and the CIA will stop at nothing to propagandize on behalf of their employers: he's undermined the myth of the anonymous blogger in the basement.

It turns out that the people who are hiding behind anonymity online for nefarious or selfish reasons are not little guys in pajamas but the very bastions of accountability that haters of the Web have deified. It's not a mean dude with a grudge who is spreading lies on Wikipedia but rather a member of the federal government or a journalist at The New York Times. Cultural anarchy online is coming not from the hordes of scribbling bloggers but from the same entities that have always posed a danger to culture: corporations and governments who refuse to take responsibility for what they're doing.

Should Archivists Go Paperless?

Paper archives are dangerous. For the past several weeks, I've been standing knee-deep in paper untouched by human hands for decades, sorting through decaying files and strange pamphlets, breathing so much dust that I cough all night afterwards. It's even worse for archivists and librarians who work with materials that are older than a century; they report that spores and mold on materials give them headaches, short-term memory loss, diminished lung capacity, and severe allergies.

Back in 1994, an archivist working with century-old materials in an antique schoolhouse wrote an e-mail to a conservation listserv that sounded so ominous it could practically have been the introduction to a Stephen King novel. "For several months I sorted through water-damaged ledgers and artifacts. Many were covered with a black soot-like dust," she wrote. "After a few months, I noticed I was losing my balance, my short-term memory was failing, and I began dropping things."

Years later, after her lung capacity had dropped 36 percent and her memory was damaged permanently, a doctor finally diagnosed her condition. She'd been poisoned by mold on the archival materials she'd devoted her life to preserving.

A letter published in Nature in 1978 points out that old books and papers actually develop infections, colloquially called "foxing," that look like a "yellowish-brown patch" on the page. That patch, explain the letter writers, is actually a lesion caused by fungus growing on the book "under unfavorable conditions."

Today most libraries recommend that conservationists working in archives with old materials and books wear high-efficiency particulate air filtering masks. My archival adventures this month don't involve foxing or brain-damaging mold. I'm preserving a historical paper trail that's too recent to have gone toxic.

In fact, I'm in the odd position of trying to organize the papers of an organization, Computer Professionals for Social Responsibility, whose entire mission since 1980 has been to promote the ethical uses of technology, and to build a prosocial, paperless future. With all the dangers of paper archives, and all the love for computers at the CPSR, why bother to preserve the organization's papers at all?

Why not, as one member of the CPSR asked me, just scan everything and create a digital version of CPSR history? There are a million reasons why not, but all of them boil down to two things: scale and redundancy.

Over the past quarter century, the CPSR has accumulated 65 crates of papers and nine tall metal filing cabinets full of records. Some of the papers are cracking with age; some are old faxes or personal letters on onionskin paper; some are pamphlets or zines; some are poster-size programs; others are little, folded stacks of handwritten notes. There are photographs, floppy disks, VHS tapes, and even a reel of film.

Even if we had all the resources of the Internet Archive, a nonprofit that is scanning books onto the Web at a rapid clip, the CPSR scanning project would take weeks. After all, we aren't scanning regular papers and books. We have so many kinds of archival material that we'd need specialists who knew how to scan them properly without damaging the originals.

Plus, how would we label each item we'd scanned? Every single one would need to be put into a portable, open file format and tagged with metadata by hand to identify it. That's a project that could take months if done by a team of pros and years if it's being done by volunteers. So part of creating a paper archive is simply a matter of pragmatism. It's easier to preserve history on paper.

More important, though, we need a paper backup copy of our history. I love online archives as much as the next geek, but what happens when the servers blow out? When we stop having enough power to run data storage centers for progressive nonprofits? And even if digital disasters don't strike, history is preserved through redundancy. The more copies we have of the CPSR's history, in multiple formats, the more likely it is that generations to come will remember how a brave group of computer scientists in the 1980s spoke out against the Star Wars missile defense system so loudly that the world listened.

When it comes to preserving history, every digital archive should have a paper audit trail.

Kids Safer Online Than Ever Before

There's a horrifying new menace to children that's never existed before. Experts estimate that 75 to 90 percent of pornography winds up in the hands of children due to novel technologies and high-speed distribution networks. That means today's youths are seeing more images of perversion than ever before in the history of the world.

What are the "new technologies" and "distribution networks" that display so much porno for up to 90 percent of kids? I'll give you one guess. Nope, you're wrong; it's not the Internets. It's print.

The year is 1964, and I'm getting my data from financier Charles Keating, who has just formed Citizens for Decent Literature, an antiporn group whose sole contribution to the world appears to have been an educational movie called Perversion for Profit.

Narrated by TV anchor George Putnam, the flick is an exposé of the way "high-speed presses, rapid transit, and mass distribution" created a hitherto unknown situation in which kids could "accidentally" be exposed to porno at the local drugstore or bus station magazine rack. Among the dangers society had to confront as a result of this situation were "stimulated" youths running wild, thinking it was OK to rape women, and turning into homosexuals after just a few peeks at the goods in MANifique magazine.

A lot of the movie -- which you can watch for yourself on YouTube -- is devoted to exploring every form of depravity available in print at the time. We're treated to images of lurid paperbacks, naughty magazines, and perverted pamphlets. At one point, Putnam even does a dramatic reading from one of the books to emphasize their violence. Then we get to see pictures of women in bondage from early BDSM zines.

But the basic point of this documentary isn't to demonstrate that Keating and his buddies seem to have had an encyclopedic knowledge of smut. Nor is the point that smut has gotten worse. Putnam admits in the film that "there has always been perversion." Instead, the movie's emphasis is on how new technologies enable the distribution of smut more widely, especially into the hands of children. In this way, Keating's hysterical little film is nearly a perfect replica of the kinds of rhetoric we hear today about the dangers of the Web.

Consider, for example, a University of New Hampshire study that got a lot of play earlier this year by claiming that 42 percent of kids between the ages of 10 and 17 had been accidentally exposed to pornography on the Web during the previous year. The study also claimed that 4 percent of people in the same age group were asked to post erotic pictures of themselves online. News coverage of the study emphasized how these numbers were higher than before, and most implied that the Web itself was to blame.

But as Perversion for Profit attests, people have been freaking out about how new distribution networks bring pornography to children for nearly half a century. Today's cyberteens aren't the first to go hunting for naughty bits using the latest high-speed thingamajig either; back in the day, we had fast-printing presses instead of zoomy network connections.

It's easy to forget history when you're thinking about the brave new technologies of today. And yet if Keating's statistics are to be believed, the number of children exposed to porn was far greater in 1964 than it is today.

Perhaps the Web has actually made it harder for children to find pornography. After all, when their grandparents were growing up, anybody could just walk to the corner store and browse the paperbacks for smut. Now you have to know how to turn off Google's safe search and probably steal your parents' credit card to boot.

And yet Fox News is never going to run a story under the headline "Internet Means Kids See Less Pornography Than Ever Before." It may be the truth, but you can only sell ads if there's more sex -- not less.

Science Uncovers What Literary Critics Have Always Known

A couple of economic researchers have proven via scientific experimentation something that artists have known for millennia: people can feel pain and have fun at the same time. At last, we have a scientific theory that explains why the torture-tastic movie Saw is so popular. Not to mention the writings of Franz Kafka.

Eduardo Andrade and Joel Cohen, both business professors interested in consumer behavior, wanted to know why people are willing to plunk down money for what they called "negative feelings," the sensations of disgust and nastiness that arise during hideous but financially successful flicks like Hostel, the Jason and Freddy franchises, and The Silence of the Lambs. It's a good question, especially if you're one of those business types who want to peddle gore to the fake blood–loving masses. As a huge consumer of gore myself, I was immediately intrigued by the scholarly article Andrade and Cohen produced, which sums up four experiments they did with hapless undergraduates paid to watch bad horror movies and describe how this exercise made them feel. The researchers had two basic questions: Do audiences experience fear and pleasure at the same time while watching somebody get dismembered? If yes, how?

First, a word about the researchers' methods. Let it be known that they did not display discerning taste in horror movies. As a connoisseur of the genre, I'd have made those students watch Hostel, with its shocking scenes of eyeball gouging. Or perhaps 28 Days Later, with its white-knuckle zombie chase scenes. But Andrade and Cohen picked the 1973 seen-it-so-many-times-it's-no-longer-frightening flick The Exorcist and the craptastic, unscary 2004 version of ’Salem's Lot. Hey guys, call me before you do the next round of experiments, OK?

Aesthetic choices aside, the results of these movie-watching experiments were intriguing. Students were shown "scary" clips from both films and asked to rate how they felt during and after watching. Previous scholars had suggested that people who enjoy horror movies have a reduced capacity to feel fear or have fun only when the yuck is over and they leave the theater. What Andrade and Cohen found, however, was that students who loved horror movies reported nearly the same levels of fear as students who avoided these movies. Plus the horror lovers reported having fun during the movies, not just afterward. So, as I said earlier, science uncovered what literary critics have known forever: ambivalent feelings are the shit.

Horror movies appeal because humans like to feel grossed out and entertained at the same time. There's a payoff in coexperiencing two conflicting emotions.

But Andrade and Cohen are careful to explain that the fun of ambivalence doesn't work for everyone and may not translate into real-world horrors. They suggest that people who enjoy the yuck-yay feeling of horror movies are masters at psychological framing and distancing. Horror viewers who have the most fun are also the ones who are most convinced that what they're watching isn't real. People who sympathize too much with tortured characters feel only horror. That also means horror fans who see real-life violence won't get a kick out of it.

The researchers proved this point by showing people horror films alongside biographies of the actors playing the main characters, constantly reminding viewers that these were just movies and the "victims" were playing roles. Even viewers who normally avoid horror movies reported that they were a lot more comfortable and had some fun when they were reminded that the action was staged.

I would argue that Andrade and Cohen's research into distancing is the key to understanding horror fans. Our pleasure in horror is not depraved -- it is purely a function of our understanding that what we're seeing isn't real. This knowledge frees us to revel in the frisson of ambivalent feelings, which are the cornerstone of art both great and small.

Images of the Future

The future is a crowded graveyard, full of dead possibilities. Each headstone marks a timeline that never happened, and there's something genuinely mournful about them. I get misty-eyed looking at century-old drawings of the zeppelin-crammed skyline over "tomorrow's cities." It reminds me that the realities we think are just around the corner may die before they're born.

A few weeks ago I was trolling YouTube and stumbled across a now-hilarious documentary from 1972, Future Shock, based on the 1970 futurist book of the same name by Alvin Toffler. The documentary focused on a few themes from the book and tarted them up by throwing in a lot of trippy effects and sticking in Orson Welles as a narrator.

As Welles intones ponderously about how fast the future is arriving, we learn that "someday soon" everybody will be linked via computers. Essentially, it was an extremely accurate prediction about Internet culture. Score one for old Toffler.

Things go tragically wrong when the documentary turns to biology. Very soon, Welles assures his audience, people will have complete control over the genome and drugs will cure everything from anxiety to aging. Through the wonders of pharmaceuticals, we'll become a race of immortal super-humans. It sounds almost exactly like the kinds of crap that futurists say now, 37 years later. Singularity peddlers like futurist Ray Kurzweil and genomics robber baron Craig Venter are always crowing about how we're just about to seize control over our genomes and live forever. So far we haven't. But every generation dreams about it, hoping they'll be the first humans to cheat death.

Some dreams of the future, however, shouldn't outlast the generation that first conceived them. Suburbia is one of those dreams. In the fat post-war years of the 1940s and '50s, it seemed like a great idea to build low-density housing to blanket the harsh desert landscapes of the Southwest. But now the green lawns of Southern California have become an environmental nightmare of water-sucking parasitism. Just think of the atrocious carbon footprint left behind when you lay pavement, wires, and pipes over a vast area so that nuclear families can each have huge yards and swimming pools instead of living intelligently in high-density green skyscrapers surrounded by organic farms.

Oh wait -- I just gave away my own crazy futurist dreams, inspired by urban environmentalism. Today, many of us imagine that the future will be like the green city of Dongtan, an ecofriendly community being built outside Shanghai with recycled water, green building materials, and urban gardens, and with no cars allowed within its limits. The hope is that Dongtan will have a teeny tiny carbon footprint and be a model of urban life for the future. Of course, that's what suburbia was supposed to be too -- a model of a good future life. No future is ever perfect.

Perhaps the saddest dead futures, though, are the ones whose end may mean the end of humanity. I suppose one could argue that the death of an environmentally conscious future is in that category. But what I'm talking about are past predictions that humans would colonize the moon and outer space. As the dream of a Mars colony withers and the idea of colonizing the moons of Saturn and Jupiter becomes more of a fantasy than ever before, I feel real despair.

Maybe my desperate hopes for space colonization are my version of Kurzweil's prediction that one day we'll take drugs that will make us immortal. Somehow, I think, if we had just diverted the global war machine into a space-colony machine sometime back in the 1930s, then everything would be all right. Today the planet wouldn't be suffering from overpopulation, plague, and starvation. We'd all be spread out across the solar system, tending our terraforming machines and growing weird crops in the sands of Mars.

Of course, we might just be polluting every planet we touch and bringing our stupid dreams of conquering the genome to a bunch of poor nonhuman creatures with no defenses. But I still miss that future of outer-space colonies. I can't help but think it would be better than the future we've got.

Who Would Have Thought the iPhone Would Become a Political Issue?

The marketing maestros at Apple have turned the iPhone into the summer's biggest consumer electronics blockbuster, and they didn't even have to pay Michael Bay millions of bucks to write robot piss jokes to do it. Everybody's talking about the damn things -- of course the usual gizmo-obsessed pubs like Wired and PC Magazine are drooling all over it, but some unexpectedly political critics and fans have gotten into the mix.

The tech community made its annoyance at iPhone boosterism felt when hacker David Maynor announced that he'd found a bug in Safari (the iPhone's Web browser) that would allow him to seize control of iPhones remotely. The Daily Show, which usually exhibits a modicum of geek savvy, blithely ignored tech criticisms and led off one episode last week with a breathy noncommentary on how the iPhone is the greatest thing ever. Then politicians started sounding off. Dems snarked at Republicans last week about the iPhone during a House subcommittee hearing on wireless innovation. Rep. Ed Markey (D-Mass.) told the committee that the iPhone was the "Hotel California" of mobiles because of an exclusive deal Apple cut with AT&T to provide network service for the multimedia devices. (Apparently Markey's one big pop culture moment was to listen to the Eagles' famous '70s song about a hotel where "you can check out any time you like, but you can never leave.")

CNET commentator Declan McCullagh spoke the latent convictions of many libertarian nerds when he responded to Markey's analogy: "Apple makes the iPhone. It has every right to sell it via only AT&T if it wishes. ... More broadly, Apple has the right to [make] iPhones only available for purchase on the third Monday of the month in even-numbered zip codes if it chooses." Activist group Free Press responded to ideas like McCullagh's by starting a "Free the iPhone" campaign designed to spur the Federal Communications Commission and Congress to consider passing regulations that would force vendors like Apple to make mobile phones interoperable with all phone network operators so that consumers could choose which carrier they want.

Meanwhile, digital freedom lovers have been up in arms over Apple's many closed-door policies for the phone. Not only are the damn things locked into using AT&T as a carrier, but iPhones are also designed to prevent users from writing additional software for them. Nothing but Apple-approved software may run on the iPhone. That means people who want to play music on the iPhone will have the same problems they have with iTunes on the iPod -- you can put as much music on the phone as you want, but you can't transfer it to another device. Nor can you choose a secure browser over Safari, or an e-mail program of your choice. Even free-software activist Richard Stallman is pissed about the iPhone, and he's a guy who rarely gives little toys from Apple a second thought.

So what's the big deal? Why do people even want a $600 phone, and why has this luxury device for the pampered techie become such a hot political issue? I think the answer to the first question is easy: the iPhone is the first truly cool convergence phone that combines multimedia with multispectrum goodies like Bluetooth, wi-fi, and of course, a phone network. Who doesn't wish to combine phones, iPods, and laptops into one nifty thing?

That's where politics come in. In the United States we have a long history of government regulations on the phone network, as well as on what can plug into the phone network, so naturally the public wonders what the government is going to do with the iPhone. Especially when other components of the iPhone, such as its ability to play music, touch on another government-regulated area: copyright law. And then there's another issue that few people have commented on, which is that Apple's chosen carrier for the iPhone, AT&T, has a history of letting the government spy on its phone networks. So every way you slice it, the iPhone is subject to government.

The iPhone is political because it somehow manages to capture the essence of authoritarianism in its shiny little box. Totally locked down, it runs only preapproved software on a prechosen phone network that is subject to government surveillance. Long live the iPhone! Long live democracy!
