Are We Obsessed With Tech and the Distant Future Because the Near Future Looks So Scary?

We want to think we'll solve our problems by merging with machines in a virtual reality where there's no poison air, no extreme weather, and no rampant poverty.

The following is an excerpt from the new book Trees on Mars: Our Obsession with the Future by Hal Niedzviecki (Seven Stories Press, 2015):

We often hear about patients with amnesia. We’re fascinated with the blank-slate nature of their predicament. Imagine not knowing who you’ve been, what you’ve done. We rarely hear about the other aspect of this rare occurrence. Doctors report that not only are amnesiacs without memories, but they lack the ability to imagine the future. They can’t form a mental image of the next day or the next year. Brain scans done while a person is imagining something yet to happen show that this kind of thinking lights up the hippocampus—known to be a site in the brain crucial to the formation of new memories—like a Christmas tree. “In fact the regions of the brain used to recall the past largely overlap with those used to imagine the future.” When researchers gave people a key word and asked them to conjure up different scenarios for that word in the past and the future, they found that the “neural” signatures of remembering the past and future looked pretty much the same. “All three of the chief areas of the brain involved in imagining the future are part of the default network,” Claudia Hammond states. “It is almost as though our brain is programmed to contemplate the future whenever it finds itself unoccupied.”

The general assumption is that when we’re not actively engaging our brains, our minds are drifting backward into memory. But what’s really happening much of the time is that our minds are dipping in and out of past and stitching together mental tapestries of what is going to happen next. “The predominant theory of memory,” writes a science journalist, “is that it is an adaptive process, continually updating itself according to what knowledge may be important in the future.” Memory, it seems, is set up specifically to serve the goal of futuring. Notes Hammond: “The flexibility of our memories makes it relatively easy because we can meld all these different memories together seamlessly to invent a new imaginary scene, one which we have never even contemplated before, let alone witnessed. The flexibility of memory seems to be the key to imagining a future.”

Most likely, the surprising amount of time we spend imagining the future stems from the development of our innate, universal skills as the ultimate scrappy survivors of the animal kingdom. We developed mental structures to help us survive in the eras before grocery stores, cell phones, and rides home from dad. We developed the ability to conjure up the potentially precarious path to the watering hole, the fraught journey to the berry bushes. What dangers awaited you and how could you be ready for them? We think of memory as being about the past, but there’s mounting evidence to suggest that memory developed as a kind of Spidey Sense, an early warning system urging us to consider our options before we ran out of them. A study out of New York University found that people who received an uncomfortable electric shock while viewing pictures divided into two categories “recalled about 7 percent more items from the ‘shocked’ category. For example, they remembered more tools if they had been zapped seeing tools.” The caveat: the shock effect only took hold when tested six hours or a day later. The conclusion: our brain isn’t sure what might be important tomorrow, and so privileges those memories that are, for instance, associated with pain or some other emotional reaction. Joseph Dunsmoor, primary author of the study and a postdoctoral fellow in cognitive neuroscience at New York University, explains it this way:

“The emotional experience of the shocks strengthened or preserved the memories of things that, at the time they were encoded, seemed mundane.” 

Writes John Coates, Cambridge research fellow and author of a book about the relationship between risk and mind:

“Many neuroscientists now believe our brain is designed primarily to plan and execute movement.” 

It makes sense. Our ability to plan and execute, to mentally time travel to an upcoming event, was the thing that soft-skinned, fragile, not particularly fast or strong humanoids developed in order to get a leg up on the many predators who wanted us for breakfast, lunch, and dinner. We don’t just do, act on instinct, follow old patterns; we envision, picture, predict, and plan on a whole other level than any other creature on earth. Birds build nests, squirrels put up acorns for the winter. They go about their business, but they don’t picture these things happening in minute detail before they happen, imagining all the things that could go wrong or right or better or worse.

“With future thinking,” writes Claudia Hammond, “you project yourself forward mentally to imagine the actual experience. This is different from actively planning, and it is this skill that seems to set us apart from other animals.” 

An example of this is depicted by the anthropologist Jared Diamond who writes of going on a bird watching expedition accompanied by a group of New Guinean tribesmen. Arriving at a perfect camping spot that afforded excellent views of unplundered vistas teeming with birds, Diamond wanted to pitch camp. But his companions refused to camp there, as the clearing was overshadowed by a large dead tree. They explained to Diamond that they wouldn’t camp under the tree, which could fall over and kill them. He argued with them, noting that the tree was very large and stable and hadn’t begun to rot and the chances of it falling on them were practically nil. But still, they refused. Afterward, reflecting on the incident, he concluded that they were making their decisions based less on the evidence before them and more on their collective memory. They, unlike him, had spent a lot of time camping in the forest and had known people to be unexpectedly crushed by falling trees. They were tapping into memory to imagine a possible future and head it off before it happened. Diamond calls this kind of decision making about the future “constructive paranoia.”

Today, though we can mostly walk down the sidewalks unencumbered by concerns about falling trees, rival tribes, or poisonous flora and fauna, we still go through even the smallest journey (unconsciously) in our minds—the house where the creepy dude lives, the gap in the concrete, the place with the pretty bushes that flower in the spring. Close your eyes and imagine your daily route. You probably do, every day, without even realizing it. Harnessing memory to imagine and reimagine future turned out to be such a successful survival strategy, it’s now an inherent, unconscious process at the core of how we think and are, one of those fundamental aspects that shapes us without our even noticing. What is your brain trying to do? Take care of you. Extend your likelihood of survival. Give you the edge you need to propagate the species and make more little humans who can also survive and grow up and live a long time and make a lot more little humans. 

The future impulse at the core of human cognitive process is very different from the future-first rhetoric of the techno-industrial complex. Human beings pursue future primarily to know what is happening next so that they can stave off dangerous and destabilizing change. The ideological system of future first has the opposite goal: to implant destabilizing so-called “disruption” at the heart of everything that happens. On a deep, unconscious, reflexive level, future means something very different to us—not change for change’s sake; not disruption under the rubric of innovation. We’re envisioning the paths we will take to accomplish our goals, we are thinking about the constancy that we need to survive, within and despite what surprises and changes might come. And yet, in our minds the two things become easily conflated. They seem as one. We are, as a result, irrationally excited to be doing anything related to preparing for the future, securing the future, shaping the future, and owning the future. The human mind, often operating on a kind of unconscious auto-pilot (remember the Florida study!), is easily pulled into the orbit of anything and anyone who promises to reward it with the future knowledge we collectively and by default deem necessary for survival. 

The result is a psychologically irresistible brew. Future, in the context of the unconscious rhythms of our minds, seems irresistible. Who doesn’t want to increase their odds of surviving and even regaining some modicum of reassuring social certainty while they’re at it? We’re psychologically drawn—even addicted—to all things future, but not because we want to disrupt and change and can’t resist chasing every shiny new thing. The real psychological appeal to us of future is knowing what is going to happen next so we can keep doing exactly what we’ve always done. Deep in our operating systems, the idea isn’t to disrupt and alter everything and anything as fast as possible. The idea is to see the danger, skirt the pitfalls, and pretty much maintain our core patterns and ways of being no matter how much surface change might be adopted. We don’t want to change. Not unless we absolutely have to. “Human beings, cultures, and institutions,” notes Daniel Innerarity, “have always equipped themselves with procedures to assure predictability to the greatest possible degree. It is one of humankind’s most rudimentary goals.”

The increasingly vocalized goal of many of the super-wealthy tech set is to live long enough to be embedded forever in a kind of virtual consciousness. This phenomenon is generally referred to as “the Singularity.” Russian billionaire venture capitalist Yuri Milner, for one, told world leaders and influential thinkers gathered at an elite annual conference in Ukraine that “the emergence of the global brain, which consists of all the humans connected to each other and to the machine and interacting in a very unique and profound way [is] creating an intelligence that does not belong to any single human being or computer.”

No doubt Yuri Milner is aware of the increasing popularity of futurist Ray Kurzweil’s ideas put forward in his 2005 book The Singularity Is Near. According to Kurzweil, “as we gradually learn to harness the optimal computing capacity of matter, our intelligence will spread through the universe at (or exceeding) the speed of light, eventually leading to a sublime, universe wide awakening.” Or as the description on the back of his book puts it, “our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today.”

For Kurzweil and like-minded spin-off thinkers and followers, the future is going to consist of unlimited lifespans lived out in virtual realms where we will be liberated from all physical and mental constraints. In this potential technological Singularity, computers become so smart they essentially form one giant massive intelligence (a singular intelligence) that human beings are drawn into. We merge with our hyper-intelligent technology, transcend mortality, and live the vast majority of our lives in virtual worlds. 

Like Patrick Tucker’s enthusiastic advocacy of futurism in general, Kurzweil’s ideas have gone, over the last fifteen years or so, from fringe to practically mainstream. Kurzweil is a sought-after speaker. He’s been the subject of a feature-length documentary. He’s regularly interviewed by the most august media organizations in the world ranging from NPR to BBC to the New York Times, and literally hundreds more outlets. Many of the richest and most influential denizens of Silicon Valley unabashedly endorse his vision of the singularity, both attending and funding the Kurzweil-branded Singularity University. At the same time a (non–Kurzweil-affiliated) website called 2045.com has 30,000 members and runs conferences plus regular educational and advocacy activities all promoting the notion that we can and will reach the singularity by, you guessed it, 2045. The site is backed by Dmitry Itskov, another extremely wealthy Russian tech-investor clearly heavily influenced by Kurzweil, whose stated goal, explains a reporter, is to figure out a way to put “a digital copy of your mind in a nonbiological carrier, a version of a fully sentient person that could live for hundreds or thousands of years.” Not that long ago, we would have dismissed Itskov and his ilk as dreamers at best, and crazy people at worst. But, as the New York Times attests in a notably lengthy profile, “he has the attention, and in some cases the avid support, of august figures at Harvard, M.I.T. and Berkeley and leaders in fields like molecular genetics, neuroprosthetics and other realms that you’ve probably never heard of.” Money talks, particularly when it is combined with psychologically irresistible ideology. 

I’ve put two ideas in play that I believe are crucial to understanding the way the doctrine of future has arrived and swept through society so quickly. The first is that we are adopting a techno-scientific notion of owning future as a replacement for the social certainty we crave and have now irretrievably lost. The second is that we are, psychologically, predisposed to future, which we see as a default survival mechanism. Both of these psychological takes on why the future is so irresistible to us nicely culminate in the idea of singularity. At first glance, the singularity might seem like the opposite of what I’m arguing is the intrinsic attraction of future for us—to reclaim continuity and extend survival by perceiving danger before it happens. But when we look more closely at it, particularly the claimed end point of all this disruptive turmoil, we see how the singularity is the future era’s take on our familiar longing for social certainty, a high-tech way to return us to the times when we were assured of the perpetual repetition of what is and will always be. Not only does the singularity offer social certainty, albeit wrapped up in the mantle of hyper-individualistic technologies of permanent personalized connectivity, it also promises extended survival, perhaps even immortality. And so with the singularity—as with the general ideology of the future-first techno tomorrow—we live in complete comfort, doing as we please, and never ever having to change. Is it any wonder that in the future era there are an expanding number of real-world entities actively pursuing scenarios in which the quest for future ends in a kind of timeless stasis, an eternal sunshine for the spotless, digitally transformed consciousness?

Ray Kurzweil and others perpetuating various iterations of the singularity have the avid support of corporations and institutions that matter. In 2013, Kurzweil went to work for the world’s biggest and most profitable tech company. He’s now full time at Google, working flat out on his quest to build an “artificial brain.” This isn’t surprising since, as an article puts it, “Google’s founders are involved in Singularity University, part of the belief system that humans and machines will at some point merge, making old age and death meaningless.” In fact, Google is more than just “involved” in the notion that innovation will eventually lead us down the path of immortality. Late in 2013, the company announced that it was creating a new venture, Calico, “a new biotechnology company to fight the aging process and the diseases that accompany it.” This is part of the Kurzweil ideology too. Kurzweil takes around one hundred vitamins and supplements a day as he seeks to forestall death long enough to get to the singularity. Other technologists and Silicon Valley entities are also betting on life extension to get us there. Peter Thiel invested in and sat on the board of Halcyon Molecular, a company with the goal of eventually curing, or at least forestalling, aging. It closed down in 2012 after receiving roughly $20 million in investment funding. Thiel also gave $3.5 million to the Methuselah Foundation, an organization aiming to “reverse human aging” and he has supported a nonprofit called Humanity Plus, “dedicated to transhumanism—the transformation of the human condition through technology.” Like Kurzweil and so many others, he is spreading his ample supply of chips on both odd and even, black and red, figuring that at least one of those bets will pay off eventually. 
In the meantime, he keeps betting, keeps fighting against what he calls “the ideology of the inevitability of the death of every individual.” In our future era, death isn’t the ultimate institutional certainty, the final collective truth—it’s an ideology that needs disruption; it’s a problem to be solved. Billions of dollars are now being spent on the parallel tracks of figuring out how to “conquer aging” and extend life indefinitely and, because there is probably a limit to that process, learning how to transfer our brains into machines so we can live forever in a virtual, interconnected world of our making—a heaven that can exist right here on Earth. 

All of this is going on despite the fact that 1) the general consensus in the scientific world is that we remain thousands of years away from “nonbiological intelligence,” if it’s even possible at all, and 2) it’s in no way clear that making such transhumanism possible would be in the best interests of humankind. I’ll note that this latter point is not even on the radar of Singularity proponents who take the beneficent nature of their project as a given. The popularity of the singularity epitomizes the way an individualized future of endless (virtual) bounty has become dominant in our society. It also shows just how powerful the idea of future is in our psychology. We don’t intrinsically desire eternal life or our minds downloaded into virtual realities. What we’ve always wanted and needed, however, is the kind of assurance of repetition generated by tribal certainty. In the absence of this, in the wake of the collapse of social certainty, outlandish promises of the post-human era are, increasingly, filling the void. Which is to say that this whole process has more to do with belief than it has to do with science or engineering. Even in the age of big data, instead of actual, meaningful knowledge of the future, we have for the most part the promise of that knowledge. We have the sense that schemes involving things like mapping every aspect of the physical and interior worlds of humanity to create a parallel virtual world of infinitely malleable and ever-expanding data sets will allow us to ultimately return to a state of absolute certainty regarding the ebb and flow of humanity. These schemes seem so appealing—despite the fact that in many instances they require us to give up our hard-fought civil liberties, particularly the right to be unmonitored in what we say and do—because of the assurances they seem to offer. 
In this roundabout, quietly exponential way, we enter a new kind of belief system devoted to getting to the future where these hazy promises can actually be fulfilled. In this way, the future-first project has become a psychologically irresistible belief system, though it is no more grounded in an inevitability born out of the unbreakable laws of nature than it is a path chosen by society through discussion, debate, and elections. 

Out of the ashes of the old emerges the new. In this case, perhaps we need an entirely new religion that can accommodate the way our desire for social certainty is increasingly masquerading as our buy-in to the quest for personalized techno-immortality. One candidate is Terasem, a faith organized around four core tenets—“life is purposeful, death is optional, God is technological, and love is essential.” Terasem was founded by Martine Rothblatt (who started the satellite radio network Sirius XM) along with her wife, Bina; its followers are devoted to “personal cyberconsciousness,” which manifests itself in the quest to make “mindfiles.” Mindfiles are basically recordings of yourself—your most detailed thoughts and feelings. Terasem stores those mindfiles on servers in Vermont and Florida and promises to keep them safe until they can be organized and uploaded into not-yet-invented singularity-like technology that will recreate your consciousness. “For us God is in-the-making by our collective efforts to make technology ever more omnipresent, omnipotent and ethical,” Martine says. “When we can joyfully all experience techno immortality, then God is complete.” Terasem also gives you the option of beaming the mindfile out into space via satellite just in case there’s some other race of aliens out there who have already figured out all this stuff and feel like reincarnating some humans. Around 32,000 people have created free mindfile accounts to date. “Einstein said science without religion is lame. Religion without science is blind,” Martine Rothblatt tells Time magazine. “Bina and I were inspired to find a way for people to believe in God consistent with science and technology so people would have faith in the future.”

The Time report, along with all the rest of the coverage, is respectfully incredulous: these people are nice, but nuts. But, in fact, Terasem is only seen as being on the lunatic fringe because they are putting a spiritual spin on what is an otherwise increasingly mainstream technological goal—virtual life extension via mind-machine meld. In 2013, Martine Rothblatt, who is also the founder of the biotechnology company United Therapeutics, was one of the speakers at Dmitry Itskov’s 2045 Global Future Congress in New York City. Clearly, her spiritual beliefs fit right into the tenor of the times. In fact, they neatly dovetail with the overall techno-utopian belief system revolving around “faith in the (technological) future.” In this extreme but also now mainstream belief system the goal of technological upgrade, of speeding up the process of change, isn’t the perfect iPhone; it isn’t a device downloaded into your brain, amplified and accessed via Google Glass; it isn’t even an army of robots we control with our minds who do our bidding and create unimaginable wealth and luxury for all.

The goal is to arrive at the perfect end—the end of institution, the end of collective humanity, the end of death, and even the end of future itself, which emerges triumphantly re-engineered as the endless present moment. For our minds on future, this last point is the one we fixate on. It’s the return to certainty, the reclamation of pattern. In this way, the promise of social certainty will finally be fully realized. Unconsciously, we feel that the more we believe in future, the more we invest in pursuing the future, the more likely it will be that no one has to die and everything and everyone can go on just as it once did—uninterrupted and forever. This is the future as an endlessly arriving second coming. In the future we will be infinitely intelligent. We will solve all of humanity’s problems by merging with our machines, downloading our brains, and living forever in a virtual reality where there is no poison air, no extreme weather, no hordes of angry poor people wondering what happened to their share of the trickle down. And guess what? This future isn’t thousands of years away. According to a growing number of technologists telling us exactly what we want to hear, it’s just around the corner. 
