Why this philosopher fears we have a 1 in 6 chance of existential catastrophe within 100 years
Philosopher Toby Ord wants to save the future of humanity.
It’s a lofty goal, but in his newly released book, “The Precipice: Existential Risk and the Future of Humanity,” he argues that the topic is woefully neglected. Preserving the potential of our descendants is far more important than we usually recognize.
“A lot of people, if they’re asked about it, would say that the long-term future matters as much as the present,” he told me in a recent interview. “But their actions don’t really reflect that, or they haven’t really thought through the consequences of such a view.”
Ord argues that, if we find a way to avoid extinction, our future could be nearly limitless. Our species may live for trillions of years into the future. We could, conceivably, colonize the galaxy — perhaps even other galaxies. Countless generations of our descendants could live lives of immeasurable value beyond our current imaginings.
That is, as long as we don’t screw it up.
What are the consequences of this view? From Ord’s perspective, we should try much harder to solve problems that could lead to the complete extinction of all humans, or at least to an unrecoverable collapse of civilization that robs us of our future potential. And while he thinks we’re up to the task, he also thinks we face big challenges.
In Ord’s estimation, humanity faces a (very rough) one in six chance of experiencing an existential catastrophe in the next 100 years.
And yet, he observed, we currently spend less as a species on preventing this calamitous fate than we do on ice cream.
What would it take to end humanity?
The end of the world is a fixture of cinema and comic book stories. But some of the risks that loom large in public consciousness don’t actually pose that high a likelihood of thwarting humanity’s potential any time soon.
Asteroid impacts, comet collisions, supervolcanoes, or the explosion of a nearby star could all, in theory, wipe out humanity, along with plenty of other species on Earth, with overwhelming force. But the worst of these events are exceedingly rare. And since we know that humanity has already survived for roughly 200,000 years, about 2,000 centuries, we can infer that a wholly natural catastrophe is unlikely to lay waste to our species in the next 100 years.
“The risk can’t be that high from natural risks,” Ord explained. “It couldn’t really be 1 percent per century, because there’s a very small chance that you’d survive 2,000 1 percent risks in a row.”
Since we've already survived 2,000 centuries, the chance we’ll be extinguished at the hand of nature any time soon is much less than 1 percent. Based on Ord’s research and reasoning, as explained in depth in “The Precipice,” humanity faces only a one in 10,000 chance of going extinct or suffering irrevocable civilization collapse from these causes within a century. And it seems plausible that, if we survive for another 100 years, and many centuries after that, we may be able to fortify our defense against these threats to reduce the risk even further.
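Ord’s inference here is a simple survival-probability argument: if natural extinction risk really were as high as 1 percent per century, the odds of surviving 2,000 consecutive centuries would be vanishingly small. A minimal sketch of that arithmetic (the 1 percent figure is Ord’s hypothetical upper bound, not his actual estimate):

```python
def survival_probability(risk_per_century: float, centuries: int) -> float:
    """Chance of surviving a run of independent centuries,
    each carrying the same risk of extinction."""
    return (1 - risk_per_century) ** centuries

# If natural risk were 1% per century, surviving humanity's
# ~2,000 centuries so far would be astronomically unlikely:
p = survival_probability(0.01, 2000)
print(f"{p:.2e}")  # ~1.86e-09
```

Since we manifestly did survive those 2,000 centuries, a per-century natural risk anywhere near 1 percent is implausible, which is why Ord bounds it far lower.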
Of course, a small chance of extinction is still a chance. And since the potential of humanity is so great, we have strong reasons to work hard to reduce even a very small amount of existential risk, Ord argues. It may be reasonable, then, for researchers to investigate how to deflect comets and asteroids, to delve into the chances that a supervolcano could erupt and make Earth uninhabitable for humans, and to better understand the risk posed by stellar explosions.
But given the low baseline of existential risk we can expect from these threats, they shouldn’t really keep us up at night if we’re worried about the loss of humanity’s potential.
What really worries Ord is the danger we pose to ourselves.
How humanity could be the cause of its own extinction
The title of Ord’s book, “The Precipice,” is the name he gives to the time we’re living in now. Ever since the detonation of the first atomic bomb, he argues, we have been in a new period of human history, the period in which we pose a greater threat to our own existence than nature does.
In the book, he writes eloquently of this fateful transition into the time of the precipice:
At that moment, our rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves. The first point where the threat to humanity from within exceeded the threats from the natural world. A point where the entire future of humanity hangs in the balance. Where every advance our ancestors have made could be squandered, and every advance our descendants may achieve could be denied. The greater part of the book of human history left unwritten; the narrative broken off; blank pages.
But it’s not just the prospect of nuclear conflict — and thus, the potential for a humanity-destroying nuclear winter — that threatens us. Many new threats have emerged since the beginning of the atomic age.
Ord thinks the chances of nuclear war ending humanity in the next 100 years are about one in 1,000. He rates climate change as posing the same level of existential risk, if it turns out to be much more severe than many expect, and, in a separate analysis, he says other forms of extreme environmental damage are comparably threatening.
It’s important to remember here that Ord doesn’t discount the possibility that any of these threats could cause extreme disaster, even, potentially, the deaths of billions of humans. Such outcomes are far likelier than total extinction or irreversible civilizational collapse. But it’s this worst-case scenario that is the focus of Ord’s book.
The single greatest risk to humanity’s future, in Ord’s estimation, is unaligned artificial intelligence. Exactly how this would result in something like an extinction event isn’t totally clear, but those who worry about the danger argue that we risk creating an AI so powerful it can shape the world to its own ends and thwart any human attempt to undermine it. The goal, of course, is to design AI that does only what we want, with actions that accord with human values. But many who research this subject find that aligning an AI’s goals with human values is much trickier than it might seem at first glance, and any deviation of the AI’s intentions from what we actually want could be unpredictable and potentially cataclysmic.
Ord pointed out to me that many researchers in this field are dismayed when the threat is interpreted through a cinematic lens; people hear about AI risk and picture a robot takeover. But that’s not generally what the experts who worry about AI fear. In the book, Ord argues the risk is more like that posed by an extremely powerful totalitarian. If an AI were to gain that kind of influence over society and politics, it could wrest control of humanity’s future from our hands, and the eons we could have had to fulfill our potential would be lost.
Part of the problem with understanding AI risk is that all the dangers lurk in the unknowns. It’s unknown what eventual form a truly powerful AI system could take, when it might arrive, how quickly it could advance, and in what ways its values might end up at odds with humanity’s. With all these unknowns, Ord concluded that this is the biggest risk to humanity’s future. He rates the chance that AI will cause an existential catastrophe at one in 10.
In combination with all the other risks humanity might pose for itself, including unforeseen risks, engineered pandemics, or the chance of ending up in an unrecoverable dystopia, Ord estimated the total existential risk in the next century is (roughly) one in six.
While this might strike some as a gloomy prediction, Ord sees himself as an optimist. He explained:
Finally, this is not a pessimistic book. It does not present an inevitable arc of history culminating in our destruction. It is not a morality tale about our technological hubris and resulting fall. Far from it. The central claim is that there are real risks to our future, but that our choices can still make all the difference. I believe we are up to the task: that through our choices we can pull back from the precipice and, in time, create a future of astonishing value — with a richness of which we can barely dream, made possible by innovations we are yet to conceive. Indeed, my deep optimism about humanity’s future is core to my motivations in writing this book. Our potential is vast. We have so much to protect.
Ord’s book was released at a surprisingly appropriate time, since the coronavirus pandemic has brought the idea of global catastrophic risk to the forefront of worldwide attention. He was quick to note, though, that there’s no reason to believe that the coronavirus poses the kind of existential risk that most concerns him. Clearly, most people survive the infection, so it will not kill all of humanity or irrevocably collapse civilization.
But could another pandemic pose such a risk? Ord leaves open that possibility.
He breaks down pandemic risk into two sorts: “natural” pandemics and engineered pandemics. It’s important to note, though, that he thinks even “natural” pandemics should be seen as more akin to human-caused dangers like nuclear war, rather than exogenous risks like asteroid impacts, because many modern human activities — such as international travel — significantly contribute to the chance and scale of an outbreak.
Still, even with these new factors, he sees the chance of an existential catastrophe from a naturally occurring pandemic as quite low: one in 10,000 over the next 100 years.
A much more frightening prospect is that some nation or malevolent actor might try to engineer a pandemic-causing pathogen. Some scientists even do this for altruistic reasons. In working to better understand the dangers posed by pathogens, they will sometimes create a more dangerous version of a virus or other contagion. Sometimes, by accident, these pathogens escape the lab.
It’s possible that, with the combination of nature’s toolbox and human ingenuity, an altered pathogen could spread through the human race with such devastation that we would never recover. Ord rates the chance of existential risk in the next 100 years from an engineered pandemic much higher than that of a natural outbreak: one in 30.
Yet on this front, the coronavirus may actually offer some modest good news. Devastating as it is, it may give us the opportunity to prepare for outbreaks that could be even worse.
“In some sense, society will get a kind of immunity” from pandemic risk, Ord told me. “For the next 20 years or so, we will very strongly remember this event. This seems to be the biggest global crisis since World War II, and something that really will stay in our memories for some time. And we will remember these aspects that we’re getting wrong at the moment.”
But the response from our institutions thus far is a mixed bag, giving us reasons for both hope and fear.
“We’re currently seeing signs in both directions. There were some very promising things, for example, the sequencing of the virus and sharing scientific information across the world. And the scientists in Europe had access to all this information about how the outbreak worked, the epidemiology in China and access to information about the case-fatality rates and things like this,” he explained. “So the countries that would get infected next did have an opportunity to learn from those that got infected first.”
He continued: “But there’s also been a lot of name-calling between countries and a lot of border closing. And this could lead to an era of isolationism — or perhaps one of realizing that we need a more concerted pandemic response across the world.”
We might be better off in the future, he pointed out, if the country that first discovers a potential pandemic takes drastic, and potentially costly, measures to stop it in its tracks. But because of the cost, any given country will be reluctant to take these steps on its own. All nations would be wise to agree collectively to support any country willing to take such early and stringent preventive measures. Some self-sacrifice and foresight could save the world.
And if the coronavirus prompts us to take extensive efforts to prevent the next natural pandemic, we may also fortify ourselves against the risk from engineered pathogens.
And if more politicians took Ord’s warnings seriously, maybe they could see this catastrophe as a moment to recognize the importance of existential risks to all of us. As mentioned above, Ord estimates that humans as a whole spend much more on ice cream alone than they do on preventing the end of the world. He envisions that it might be well worth it for countries to spend 1 percent of GDP working to forestall existential risks.
“Humanity should be perhaps prepared to spend a very large fraction of our effort on these things and to change how we conduct ourselves as a species in major ways,” Ord said. “For example, [by] making greater cooperation, internationally, a major priority.”