*** 1AC
1AC—#MARS
Contention One is #MARS:
Risk of extinction is high—consensus of experts.
Matheny 7 — Jason G. Matheny, Research Associate at the Future of Humanity Institute at Oxford University, Ph.D. Candidate in Applied Economics at Johns Hopkins University, holds a Master's in Public Health from the Bloomberg School of Public Health at Johns Hopkins University and an M.B.A. from the Fuqua School of Business at Duke University, 2007 ("Reducing the Risk of Human Extinction," Risk Analysis, Volume 27, Issue 5, October, Available Online at http://jgmatheny.org/matheny_extinction_risk.htm, Accessed 07-04-2011)
It is possible for humanity (or its descendants) to survive a million years or more, but we could succumb to extinction as soon as this century. During the Cuban Missile Crisis, U.S. President Kennedy estimated the probability of a nuclear holocaust as "somewhere between one out of three and even" (Kennedy, 1969, p. 110). John von Neumann, as Chairman of the U.S. Air Force Strategic Missiles Evaluation Committee, predicted that it was "absolutely certain (1) that there would be a nuclear war; and (2) that everyone would die in it" (Leslie, 1996, p. 26). More recent predictions of human extinction are little more optimistic. In their catalogs of extinction risks, Britain's Astronomer Royal, Sir Martin Rees (2003), gives humanity 50-50 odds on surviving the 21st century; philosopher Nick Bostrom argues that it would be "misguided" to assume that the probability of extinction is less than 25%; and philosopher John Leslie (1996) assigns a 30% probability to extinction during the next five centuries. The "Stern Review" for the U.K. Treasury (2006) assumes that the probability of human extinction during the next century is 10%. And some explanations of the "Fermi Paradox" imply a high probability (close to 100%) of extinction among technological civilizations (Pisani, 2006).4 Estimating the probabilities of unprecedented events is subjective, so we should treat these numbers skeptically. Still, even if the probability of extinction is several orders lower, because the stakes are high, it could be wise to invest in extinction countermeasures.
Extinction is inevitable if we don’t get off the rock—multiple scenarios
Austen 11 [Ben Austen, contributing editor of Harper's Magazine, "After Earth: Why, Where, How, and When We Might Leave Our Home Planet," Popular Science, http://www.popsci.com/science/article/2011-02/after-earth-why-where-how-and-when-we-might-leave-our-home-planet?page=3]
Earth won't always be fit for occupation. We know that in two billion years or so, an expanding sun will boil away our oceans, leaving our home in the universe uninhabitable—unless, that is, we haven't already been wiped out by the Andromeda galaxy, which is on a multibillion-year collision course with our Milky Way. Moreover, at least a third of the thousand mile-wide asteroids that hurtle across our orbital path will eventually crash into us, at a rate of about one every 300,000 years. Indeed, in 1989 a far smaller asteroid, the impact of which would still have been equivalent in force to 1,000 nuclear bombs, crossed our orbit just six hours after Earth had passed. A recent report by the Lifeboat Foundation, whose hundreds of researchers track a dozen different existential risks to humanity, likens that one-in-300,000 chance of a catastrophic strike to a game of Russian roulette: "If we keep pulling the trigger long enough we'll blow our head off, and there's no guarantee it won't be the next pull." Many of the threats that might lead us to consider off-Earth living arrangements are actually man-made, and not necessarily in the distant future. The amount we consume each year already far outstrips what our planet can sustain, and the World Wildlife Fund estimates that by 2030 we will be consuming two planets' worth of natural resources annually. The Center for Research on the Epidemiology of Disasters, an international humanitarian organization, reports that the onslaught of droughts, earthquakes, epic rains and floods over the past decade is triple the number from the 1980s and nearly 54 times that of 1901, when this data was first collected. Some scenarios have climate change leading to severe water shortages, the submersion of coastal areas, and widespread famine. Additionally, the world could end by way of deadly pathogen, nuclear war or, as the Lifeboat Foundation warns, the "misuse of increasingly powerful technologies." Given the risks humans pose to the planet, we might also someday leave Earth simply to conserve it, with our planet becoming a kind of nature sanctuary that we visit now and again, as we might Yosemite. None of the threats we face are especially far-fetched. Climate change is already a major factor in human affairs, for instance, and our planet has undergone at least one previous mass extinction as a result of asteroid impact. "The dinosaurs died out because they were too stupid to build an adequate spacefaring civilization," says Tihamer Toth-Fejel, a research engineer at the Advanced Information Systems division of defense contractor General Dynamics and one of 85 members of the Lifeboat Foundation's space-settlement board. "So far, the difference between us and them is barely measurable." The Alliance to Rescue Civilization, a project started by New York University chemist Robert Shapiro, contends that the inevitability of any of several cataclysmic events means that we must prepare a copy of our civilization and move it into outer space and out of harm's way—a backup of our cultural achievements and traditions. In 2005, then-NASA administrator Michael Griffin described the aims of the national space program in similar terms. "If we humans want to survive for hundreds of thousands or millions of years, we must ultimately populate other planets," he said. "One day, I don't know when that day is, but there will be more human beings who live off the Earth than on it."
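To make the Lifeboat Foundation's roulette framing concrete, here is a minimal sketch that compounds the card's one-in-300,000 annual strike probability over several illustrative horizons; the horizons and the year-to-year independence assumption are ours, not the author's.

```python
# Illustrative sketch only: compound the card's roughly 1-in-300,000 annual
# chance of a catastrophic asteroid strike, treating each year as an
# independent "pull of the trigger."
annual_risk = 1 / 300_000

for years in (1_000, 100_000, 1_000_000):
    p_at_least_one = 1 - (1 - annual_risk) ** years  # P(at least one strike)
    print(f"{years:>9,} years: P(at least one strike) ~ {p_at_least_one:.3f}")
```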
Reducing existential risk by even a tiny amount outweighs every other impact—the math is conclusively on our side.
Bostrom 11 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2011 (“The Concept of Existential Risk,” Draft of a Paper published on ExistentialRisk.com, Available Online at http://www.existentialrisk.com/concept.html, Accessed 07-04-2011)
Holding probability constant, risks become more serious as we move toward the upper-right region of figure 2. For any fixed probability, existential risks are thus more serious than other risk categories. But just how much more serious might not be intuitively obvious. One might think we could get a grip on how bad an existential catastrophe would be by considering some of the worst historical disasters we can think of—such as the two world wars, the Spanish flu pandemic, or the Holocaust—and then imagining something just a bit worse. Yet if we look at global population statistics over time, we find that these horrible events of the past century fail to register (figure 3).
[Graphic Omitted]
Figure 3: World population over the last century. Calamities such as the Spanish flu pandemic, the two world wars, and the Holocaust scarcely register. (If one stares hard at the graph, one can perhaps just barely make out a slight temporary reduction in the rate of growth of the world population during these events.)
But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in figure 3, causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment:
I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
(1) Peace.
(2) A nuclear war that kills 99% of the world’s existing population.
(3) A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater. … The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history. The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second. (10: 453-454)
To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.
One gets a large number even if one confines one's consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exists for at least 10^18 human lives. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress.
However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years.[10] Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations).[11] If we make the less conservative assumption that future civilizations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realized.[12]
Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^18 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least ten times the value of a billion human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.
One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any “ordinary” good, such as the direct benefit of saving 1 billion lives. And, further, that the absolute value of the indirect effect of saving 1 billion lives on the total cumulative amount of existential risk—positive or negative—is almost certainly larger than the positive value of the direct benefit of such an action.[13]
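As a check on the card's arithmetic, here is a minimal sketch using only figures that appear above (10^18 potential lives; a risk reduction of one millionth of one percentage point); it illustrates the expected-value step and is not part of the original evidence.

```python
# Back-of-the-envelope check of the card's most conservative figure:
# 10^18 potential lives, and a risk reduction of one millionth of one
# percentage point (10^-8).
potential_lives = 1e18
risk_reduction = 1e-6 / 100  # one millionth of one percentage point

expected_lives = risk_reduction * potential_lives
print(expected_lives)        # 1e10 expected lives preserved
print(expected_lives / 1e9)  # 10.0 -- ten times the value of a billion lives
```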
The role of the ballot is to decrease existential risk—even if the probability is low, the stakes are too high.
Anissimov 4 — Michael Anissimov, science and technology writer specializing in futurism, founding director of the Immortality Institute—a non-profit organization focused on the abolition of nonconsensual death, member of the World Transhumanist Association, associate of the Institute for Accelerating Change, member of the Center for Responsible Nanotechnology's Global Task Force, 2004 ("Immortalist Utilitarianism," Accelerating Future, May, Available Online at http://www.acceleratingfuture.com/michael/works/immethics.htm, Accessed 09-09-2011)
The value of contributing to Aubrey de Grey's anti-aging project assumes that there continues to be a world around for people's lives to be extended. But if we nuke ourselves out of existence in 2010, then what? The probability of human extinction is the gateway function through which all efforts toward life extension must inevitably pass, including cryonics, biogerontology, and nanomedicine. They are all useless if we blow ourselves up. At this point one observes that there are many working toward life extension, but few focused on explicitly preventing apocalyptic global disaster. Such huge risks sound like fairy tales rather than real threats - because we have never seen them happen before, we underestimate the probability of their occurrence. An existential disaster has not yet occurred on this planet.
The risks worth worrying about are not pollution, asteroid impact, or alien invasion - the ones you see dramatized in movies - these events are all either very gradual or improbable. Oxford philosopher Nick Bostrom warns us of existential risks, "...where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Bostrom continues, "Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the biodiversity of Earth's ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling cultural or religious eras such as the "dark ages", even if they encompass the whole global community, provided they are transitory." The four main risks we know about so far are summarized by the following, in ascending order of probability and severity over the course of the next 30 years:
Biological. More specifically, a genetically engineered supervirus. Bostrom writes, "With the fabulous advances in genetic technology currently taking place, it may become possible for a tyrant, terrorist, or lunatic to create a doomsday virus, an organism that combines long latency with high virulence and mortality." There are several factors necessary for a virus to be a risk. The first is the presence of biologists with the knowledge necessary to genetically engineer a new virus of any sort. The second is access to the expensive machinery required for synthesis. Third is specific knowledge of viral genetic engineering. Fourth is a weaponization strategy and a delivery mechanism. These are nontrivial barriers, but are sure to fall in due time.
Nuclear. A traditional nuclear war could still break out; although it would be unlikely to result in our ultimate demise, it could drastically curtail our potential and set us back thousands or even millions of years technologically and ethically. Bostrom mentions that the US and Russia still have huge stockpiles of nuclear weapons. Miniaturization technology, along with improved manufacturing technologies, could make it possible to mass produce nuclear weapons for easy delivery should an escalating arms race lead to that. As rogue nations begin to acquire the technology for nuclear strikes, powerful nations will feel increasingly edgy.
Nanotechnological. The Transhumanist FAQ reads, "Molecular nanotechnology is an anticipated manufacturing technology that will make it possible to build complex three-dimensional structures to atomic specification using chemical reactions directed by nonbiological machinery." Because nanomachines could be self-replicating or at least auto-productive, the technology and its products could proliferate very rapidly. Because nanotechnology could theoretically be used to create any chemically stable object, the potential for abuse is massive. Nanotechnology could be used to manufacture large weapons or other oppressive apparatus in mere hours; the only limitations are raw materials, management, software, and heat dissipation.
Human-indifferent superintelligence. In the near future, humanity will gain the technological capability to create forms of intelligence radically better than our own. Artificial Intelligences will be implemented on superfast transistors instead of slow biological neurons, and eventually gain the intellectual ability to fabricate new hardware and reprogram their source code. Such an intelligence could engage in recursive self-improvement - improving its own intelligence, then directing that intelligence towards further intelligence improvements. Such a process could lead far beyond our current level of intelligence in a relatively short time. We would be helpless to fight against such an intelligence if it did not value our continuation.
So let's say I have another million dollars to spend. My last million dollars went to Aubrey de Grey's Methuselah Mouse Prize, for a grand total of billions of expected utiles. But wait - I forgot to factor in the probability that humanity will be destroyed before the positive effects of life extension are borne out. Even if my estimated probability of existential risk is very low, it is still rational to focus on addressing the risk because my whole enterprise would be ruined if disaster is not averted. If we value the prospect of all the future lives that could be enjoyed if we pass beyond the threshold of risk - possibly quadrillions or more, if we expand into the cosmos, then we will deeply value minimizing the probability of existential risk above all other considerations.
If my million dollars can avert the chance of existential disaster by, say, 0.0001%, then the expected utility of this action relative to the expected utility of life extension advocacy is shocking. That's 0.0001% of the utility of quadrillions or more humans, transhumans, and posthumans leading fulfilling lives. I'll spare the reader from working out the math and utility curves - I'm sure you can imagine them. So, why is it that people tend to devote more resources to life extension than risk prevention? The following includes my guesses; feel free to tell me if you disagree:
They estimate the probability of any risk occurring to be extremely low.
They estimate their potential influence over the likelihood of risk to be extremely low.
They feel that positive PR towards any futurist goals will eventually result in higher awareness of risk.
They fear social ostracization if they focus on "Doomsday scenarios" rather than traditional extension.
Those are my guesses. Immortalists with objections are free to send in their arguments, and I will post them here if they are especially strong. As far as I can tell however, the predicted utility of lowering the likelihood of existential risk outclasses any life extension effort I can imagine.
I cannot emphasize this enough. If an existential disaster occurs, not only will the possibilities of extreme life extension, sophisticated nanotechnology, intelligence enhancement, and space expansion never bear fruit, but everyone will be dead, never to come back. Because we have so much to lose, existential risk is worth worrying about even if our estimated probability of occurrence is extremely low.
It is not the funding of life extension research projects that immortalists should be focusing on. It should be projects that decrease existential risk. By default, once the probability of existential risk is minimized, life extension technologies can be developed and applied. There are powerful economic and social imperatives in that direction, but few towards risk management. Existential risk creates a "loafer problem" — we always expect someone else to take care of it. I assert that this is a dangerous strategy and should be discarded in favor of making prevention of such risks a central focus.
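Since the card explicitly leaves the math to the reader, here is a hedged sketch of that calculation; the 0.0001% figure and the "quadrillions" population come from the card, the 10^18 scenario borrows Bostrom's Earth-only bound quoted earlier, and everything else is an illustrative assumption.

```python
# The arithmetic the card "spares the reader": expected future lives preserved
# by a 0.0001% absolute reduction in extinction risk, under two population
# guesses drawn from the evidence above.
risk_reduction = 0.0001 / 100  # 0.0001% expressed as a probability (1e-6)

scenarios = [("quadrillion future lives (1e15)", 1e15),
             ("Bostrom's Earth-only bound (1e18)", 1e18)]
for label, future_lives in scenarios:
    print(f"{label}: {risk_reduction * future_lives:.1e} expected lives preserved")
```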
Err affirmative—the availability heuristic and “good story bias” will make you undervalue our impact
Bostrom 11 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2011 (“The Concept of Existential Risk,” Draft of a Paper published on ExistentialRisk.com, Available Online at http://www.existentialrisk.com/concept.html, Accessed 07-04-2011)
Many kinds of cognitive bias and other psychological phenomena impede efforts at thinking clearly and dealing effectively with existential risk.[32]
For example, use of the availability heuristic may create a “good-story bias” whereby people evaluate the plausibility of existential-risk scenarios on the basis of experience, or on how easily the various possibilities spring to mind. Since nobody has any real experience with existential catastrophe, expectations may be formed instead on the basis of fictional evidence derived from movies and novels. Such fictional exposures are systematically biased in favor of scenarios that make for entertaining stories. Plotlines may feature a small band of human protagonists successfully repelling an alien invasion or a robot army. A story in which humankind goes extinct suddenly—without warning and without being replaced by some other interesting beings—is less likely to succeed at the box office (although more likely to happen in reality).
Reducing the probability of existential disaster through space colonization is more valuable than preventing specific impact scenarios. Overly detailed impact predictions are improbable and create false perceptions of security.
Yudkowsky 6—Co-founder and Research Fellow of the Singularity Institute for Artificial Intelligence—a non–profit research institute dedicated to increasing the likelihood of, and decreasing the time to, a maximally beneficial singularity, one of the world’s foremost experts on Artificial Intelligence and rationality [Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment Of Global Risks,” Draft of a chapter in Global Catastrophic Risks, edited by Nick Bostrom and Milan Cirkovic, August 31st, 2006, Available Online at http://singinst.org/upload/cognitive-biases.pdf, Accessed 11-11-2010]
According to probability theory, adding additional detail onto a story must render the story less probable. It is less probable that Linda is a feminist bank teller than that she is a bank teller, since all feminist bank tellers are necessarily bank tellers. Yet human psychology seems to follow the rule that adding an additional detail can make the story more plausible. People might pay more for international diplomacy intended to prevent nanotechnological warfare by China, than for an engineering project to defend against nanotechnological attack from any source. The second threat scenario is less vivid and alarming, but the defense is more useful because it is more vague. More valuable still would be strategies which make humanity harder to extinguish without being specific to nanotechnologic threats - such as colonizing space, or see Yudkowsky (this volume) on AI. Security expert Bruce Schneier observed (both before and after the 2005 hurricane in New Orleans) that the U.S. government was guarding specific domestic targets against "movie-plot scenarios" of terrorism, at the cost of taking away resources from emergency-response capabilities that could respond to any disaster. (Schneier 2005.) Overly detailed reassurances can also create false perceptions of safety: "X is not an existential risk and you don't need to worry about it, because A, B, C, D, and E"; where the failure of any one of propositions A, B, C, D, or E potentially extinguishes the human species. "We don't need to worry about nanotechnologic war, because a UN commission will initially develop the technology and prevent its proliferation until such time as an active shield is developed, capable of defending against all accidental and malicious outbreaks that contemporary nanotechnology is capable of producing, and this condition will persist indefinitely." Vivid, specific scenarios can inflate our probability estimates of security, as well as misdirecting defensive investments into needlessly narrow or implausibly detailed risk scenarios.
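A minimal numeric illustration of the conjunction rule Yudkowsky invokes with the Linda example; the specific probabilities are purely hypothetical.

```python
# The conjunction rule behind the "Linda" example: adding a detail can only
# shrink (or preserve) the probability. The numbers are hypothetical.
p_bank_teller = 0.05           # assumed P(bank teller)
p_feminist_given_teller = 0.6  # assumed P(feminist | bank teller)

p_feminist_teller = p_bank_teller * p_feminist_given_teller
assert p_feminist_teller <= p_bank_teller  # holds for any probabilities in [0, 1]
print(f"{p_bank_teller} vs {p_feminist_teller:.2f}")  # 0.05 vs 0.03
```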
Multiplying probability and magnitude is key to ethical risk assessment—the most serious threats to humanity are the unknown and unthinkable.
Rees 8 — Sir Martin J. Rees, Professor of Cosmology and Astrophysics and Master of Trinity College at the University of Cambridge, Astronomer Royal and Visiting Professor at Imperial College London and Leicester University, Director of the Institute of Astronomy, Research Professor at Cambridge, 2008 ("Foreword," Global Catastrophic Risks, Edited by Nick Bostrom and Milan M. Cirkovic, Published by Oxford University Press, ISBN 9780198570509, p. x-xi)
These concerns are not remotely futuristic - we will surely confront them within the next 10-20 years. But what of the later decades of this century? It is hard to predict because some technologies could develop with runaway speed. Moreover, human character and physique themselves will soon be malleable, to an extent that is qualitatively new in our history. New drugs (and perhaps even implants into our brains) could change human character; the cyberworld has potential that is both exhilarating and frightening.
We cannot confidently guess lifestyles, attitudes, social structures or population sizes a century hence. Indeed, it is not even clear how much longer our descendants would remain distinctively 'human'. Darwin himself noted that 'not one living species will transmit its unaltered likeness to a distant futurity'. Our own species will surely change and diversify faster than any predecessor - via human-induced modifications (whether intelligently controlled or unintended) not by natural selection alone. The post-human era may be only centuries away. And what about Artificial Intelligence? A super-intelligent machine could be the last invention that humans need ever make. We should keep our minds open, or at least ajar, to concepts that seem on the fringe of science fiction.
These thoughts might seem irrelevant to practical policy - something for speculative academics to discuss in our spare moments. I used to think this. But humans are now, individually and collectively, so greatly empowered by rapidly changing technology that we can—by design or as unintended consequences—engender irreversible global changes. It is surely irresponsible not to ponder what this could mean; and it is real political progress that the challenges stemming from new technologies are higher on the international agenda and that planners seriously address what might happen more than a century hence.
We cannot reap the benefits of science without accepting some risks - that has always been the case. Every new technology is risky in its pioneering stages. But there is now an important difference from the past. Most of the risks encountered in developing 'old' technology were localized: when, in the early days of steam, a boiler exploded, it was horrible, but there was an 'upper bound' to just how horrible. In our evermore interconnected world, however, there are new risks whose consequences could be global. Even a tiny probability of global catastrophe is deeply disquieting.
We cannot eliminate all threats to our civilization (even to the survival of our entire species). But it is surely incumbent on us to think the unthinkable and study how to apply twenty-first century technology optimally, while minimizing the 'downsides'. If we apply to catastrophic risks the same prudent analysis that leads us to take everyday safety precautions, and sometimes to buy insurance—multiplying probability by consequences—we would surely conclude that some of the scenarios discussed in this book deserve more attention than they have received.
My background as a cosmologist, incidentally, offers an extra perspective -an extra motive for concern - with which I will briefly conclude.
The stupendous time spans of the evolutionary past are now part of common culture - except among some creationists and fundamentalists. But most educated people, even if they are fully aware that our emergence took billions of years, somehow think we humans are the culmination of the evolutionary tree. That is not so. Our Sun is less than halfway through its life. It is slowly brightening, but Earth will remain habitable for another billion years. However, even in that cosmic time perspective—extending far into the future as well as into the past - the twenty-first century may be a defining moment. It is the first in our planet's history where one species—ours—has Earth's future in its hands and could jeopardise not only itself but also life's immense potential.
The decisions that we make, individually and collectively, will determine whether the outcomes of twenty-first century sciences are benign or devastating. We need to contend not only with threats to our environment but also with an entirely novel category of risks—with seemingly low probability, but with such colossal consequences that they merit far more attention than they have hitherto had. That is why we should welcome this fascinating and provocative book. The editors have brought together a distinguished set of authors with formidably wide-ranging expertise. The issues and arguments presented here should attract a wide readership - and deserve special attention from scientists, policy-makers and ethicists.
Evaluate impacts through a one-thousand-year lens—a focus on short-term impacts makes extinction inevitable.
Tonn 4—Ph.D., leader of the Policy Analysis Systems Group at Oak Ridge National Laboratory, and a professor in the Department of Political Science, University of Tennessee [Bruce E. Tonn, "Integrated 1000-year planning," Futures, 36 (2004) 91–108, http://longnow.org/static/djlongnow_media/press/pdf/0200402-Tonn-Integrated1000yearplanning.pdf]
2. Why 1000 years?
Why tackle 1000 years and not shorter, more imaginable and manageable time horizons? Why worry about the long-term when there is so much suffering in the world right now? The most direct answer is that the world needs to focus both on improving the plight of the world’s poor in the short-term and protecting everyone’s well-being over the long-term. Focusing only on the short-term is like worrying only about how to arrange the chairs on the deck of the ill-fated Titanic. All the good work at improving the arrangement of the chairs was lost because the longer-term issue (the survival of the ship) was completely mis-handled, in part through misplaced overconfidence in the ability of the ship to withstand adversity. In the same way, short-term activities to improve people’s lives, whose value should not be diminished in any way, could be completely washed away (literally in the case of global warming) by problems orders of magnitude more serious and intractable if the future is not also dealt with.
Short time horizons constrain if not completely mask the recognition of big picture issues and threats. For example, over the next ten years, oil supplies may be manageable; over 1000 years, oil supplies and those of natural gas will probably be completely exhausted, thereby threatening the world’s economic and political stability if a plan is not in place to develop substitutes for these fossil fuels [1]. Over the next 50 years, rising sea levels may not be devastating, but within 1000 years, large swaths of countries like Bangladesh will most certainly disappear.1 Humanity must be prepared to deal with climate change induced human tragedies, as the window to prevent global warming has now closed. Even though only a fraction of the earth’s tropical rainforests disappear each year, add those small changes up over 1000 years and the forests are gone forever. Thus, by playing out important trends past normal policy horizons, the bigger picture contains some very disturbing and dangerous potential states-of-the-world.
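As a rough illustration of Tonn's compounding point about small annual losses, the sketch below applies an assumed 0.5% annual loss rate (purely illustrative, since the card gives no figure) over short policy horizons and the 1000-year horizon he advocates.

```python
# Rough illustration of compounding: an assumed 0.5% annual loss of tropical
# rainforest (illustrative only) played out over short and 1000-year horizons.
annual_loss = 0.005

for years in (10, 50, 1000):
    remaining = (1 - annual_loss) ** years
    print(f"after {years:>4} years: {remaining:.1%} of today's forest remains")
```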
The longer time horizon is also needed to facilitate a qualitative change in mindset from the short-term to the long-term. In a seeming paradox, with a longer time perspective, some actions will come to be seen as more urgent, such as actions needed to protect tropical rainforests and manage energy supplies. Longer-term perspectives indict the inherent selfishness of many of today's economic and social policies, based as they are on purportedly rational theories but in reality on irrational, self-fulfilling and dogmatic belief systems that temporally discount moral and ethical obligations to future generations. A 1000-year perspective is long enough to drive home the point that humans will most likely be living on this planet, with few or no other true alternatives, for many thousands if not millions of years into the future. The daily closing state of the Dow Jones Industrial Average as a matter of importance ought to pale in comparison with the goal of keeping the planet liveable into the very distant future. This realization should lead to another, that 1000-year planning ought to be a permanent responsibility of humanity. In other words, even though 1000-year plans will most certainly need to be systematically evaluated and revised, maybe as often as every five years, humanity must accept permanent responsibilities for wise use of energy, land, oceans, and the many other important resources that sustain life on earth.
A longer-time horizon is also needed to allow humanity to achieve the next to impossible. Many of today’s habitual naysayers preach inaction because they do not believe success is achievable, in the near-term. For example, we do not now have the technologies to defend the planet from collision with space-based objects and will not in the short-term, so the thinking is why spend much if any money on this endeavor. Of course, with that myopic view, conditions might never arise that would support the development of such technology. With a 1000-year perspective, the odds appreciably increase that such technology could be developed and deployed, so why not start today! The relatively small amounts of global funding allocated to fusion energy, space colonization, and carbon management are to some degree the result of myopic naysaying and would probably be increased if perspectives were lengthened and broadened. The longer time frame should foster the wisdom and allow the patience needed to envision the implementation of comprehensive, challenging and integrated global plans.
Predictions about existential risk are possible and necessary.
Bostrom 9 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2009 (“The Future of Humanity,” Geopolitics, History and International Relations, Volume 9, Issue 2, Available Online to Subscribing Institutions via ProQuest Research Library, Reprinted Online at http://www.nickbostrom.com/papers/future.pdf, Accessed 07-06-2011, p. 2-4)
We need realistic pictures of what the future might bring in order to make sound decisions. Increasingly, we need realistic pictures not only of our personal or local near-term futures, but also of remoter global futures. Because of our expanded technological powers, some human activities now have significant global impacts. The scale of human social organization has also grown, creating new opportunities for coordination and action, and there are many institutions and individuals who either do consider, or claim to consider, or ought to consider, possible long-term global impacts of their actions. Climate change, national and international security, economic development, nuclear waste disposal, biodiversity, natural resource conservation, population policy, and scientific and technological research funding are examples of policy areas that involve long time-horizons. Arguments in these areas often rely on implicit assumptions about the future of humanity. By making these assumptions explicit, and subjecting them to critical analysis, it might be possible to address some of the big challenges for humanity in a more well-considered and thoughtful manner.
The fact that we "need" realistic pictures of the future does not entail that we can have them. Predictions about future technical and social developments are notoriously unreliable – to an extent that has led some to propose that we do away with prediction altogether in our planning and preparation for the future. Yet while the methodological problems of such forecasting are certainly very significant, the extreme view that we can or should do away with prediction altogether is misguided. That view is expressed, to take one [end page 2] example, in a recent paper on the societal implications of nanotechnology by Michael Crow and Daniel Sarewitz, in which they argue that the issue of predictability is "irrelevant":
preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner.2
Note that each of the elements Crow and Sarewitz mention as required for the preparation for the future relies in some way on accurate prediction. A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations. Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next. It also requires non-trivial prediction to figure out what kind of institution will prove healthy, resilient, and effective in responding or adapting to future changes.
The reality is that predictability is a matter of degree, and different aspects of the future are predictable with varying degrees of reliability and precision.3 It may often be a good idea to develop plans that are flexible and to pursue policies that are robust under a wide range of contingencies. In some cases, it also makes sense to adopt a reactive approach that relies on adapting quickly to changing circumstances rather than pursuing any detailed long-term plan or explicit agenda. Yet these coping strategies are only one part of the solution. Another part is to work to improve the accuracy of our beliefs about the future (including the accuracy of conditional predictions of the form “if x is done, y will result”). There might be traps that we are walking towards that we could only avoid falling into by means of foresight. There are also opportunities that we could reach much sooner if we could see them farther in advance. And in a strict sense, prediction is always necessary for meaningful decision-making.4
Predictability does not necessarily fall off with temporal distance. It may be highly unpredictable where a traveler will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination. The very long-term future of humanity may be relatively easy to predict, being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology). And for there to be a degree of predictability, it is not necessary that it be possible to identify one specific scenario as what will definitely happen. If there is at least some scenario that can be ruled out, that is also a degree of predictability. Even short of this, if there is some basis for assigning different probabilities [end page 3] (in the sense of credences, degrees of belief) to different propositions about logically possible future events, or some basis for criticizing some such probability distributions as less rationally defensible or reasonable than others, then again there is a degree of predictability. And this is surely the case with regard to many aspects of the future of humanity. While our knowledge is insufficient to narrow down the space of possibilities to one broadly outlined future for humanity, we do know of many relevant arguments and considerations which in combination impose significant constraints on what a plausible view of the future could look like. The future of humanity need not be a topic on which all assumptions are entirely arbitrary and anything goes. There is a vast gulf between knowing exactly what will happen and having absolutely no clue about what will happen. Our actual epistemic location is some offshore place in that gulf.5
And Mars colonization would reduce existential risk—we need lifeboats for Spaceship Earth
Gott 11—Ph.D., professor of astrophysical sciences at Princeton University, recipient of the Robert J. Trumpler Award, an Alfred P. Sloan Fellowship, the Astronomical League Award, and Princeton's President's Award for Distinguished Teaching [J. Richard Gott, III, “A One-Way Trip to Mars,” Journal of Cosmology, 2011, Vol 13, http://journalofcosmology.com/Mars151.html]
I've been advocating a one-way colonizing trip to Mars for many years (Gott, 1997, 2001, 2007). Here's what I said about it in my book, Time Travel in Einstein's Universe:
"The goal of the human spaceflight program should be to increase our survival prospects by colonizing space. ... we should concentrate on establishing the first self-supporting colony in space as soon as possible. ... We might want to follow the Mars Direct program advocated by American space expert Robert Zubrin. But rather than bring astronauts back from Mars, we might choose to leave them there to multiply, living off indigenous materials. We want them on Mars. That's where they benefit human survivability.... Many people might hesitate to sign up for a one-way trip to Mars, but the beauty is that we only have to find 8 adventurous, willing souls" (Gott 2001).
I've been stressing the fact that we should be in a hurry to colonize space, to improve our survival prospects, since my Nature paper in 1993 (Gott 1993). The real space race is whether we get off the planet before the money for the space program runs out. The human spaceflight program is only 50 years old, and may go extinct on a similar timescale. Expensive programs are often abandoned after a while. In the 1400s, China explored as far as Africa before abruptly abandoning its voyages. Right now we have all our eggs in one basket: Earth. The bones of extinct species in our natural history museums give mute testimony that disasters on Earth routinely occur that cause species to go extinct. It is like sailing on the Titanic with no lifeboats. We need some lifeboats. A colony on Mars might as much as double our long-term survival prospects by giving us two chances instead of one.
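A minimal sketch of Gott's "two chances instead of one" claim, assuming the survival of Earth and of a self-sustaining Mars colony are independent (a strong assumption); near-doubling holds only when each world's survival probability is small.

```python
# Sketch of "two chances instead of one," assuming Earth's and a Mars colony's
# long-run survival are independent. Doubling is approached only when each
# world's survival probability is small.
for p in (0.05, 0.5):  # illustrative per-world survival probabilities
    two_worlds = 1 - (1 - p) ** 2  # at least one of two worlds survives
    print(f"p={p}: one world {p:.3f}, two worlds {two_worlds:.3f}, "
          f"improvement factor {two_worlds / p:.2f}")
```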
Colonies are a great bargain: you just send a few astronauts and they have descendants on Mars, sustained by using indigenous materials. It's the colonists who do all the work. If one is worried that funds will be cut off, it is important to establish a self-supporting colony as soon as possible. Some have argued that older astronauts should be sent on a one-way trip to Mars since they ostensibly have less to lose. But I would want to recruit young astronauts who can have children and grandchildren on Mars: people who would rather be the founders of a Martian civilization than return to a ticker-tape parade on Earth. Founding a colony on Mars would change the course of world history. You couldn't even call it "world" history anymore. If colonizing Mars to increase the survival prospects of the human species is our goal, then, since money is short, we should concentrate on that goal. In New Scientist (Gott 1997) I said:
"And if colonization were the goal, you would not have to bring astronauts back from Mars after all; that is where we want them. Instead we could equip them to stay and establish a colony at the outset, a good strategy if one is worried that funding for the space programme may not last. So we should be asking ourselves: what is the cheapest way to establish a permanent, self-sustaining colony on Mars?"
I have argued that it is a goal we could achieve in the next 50 years if we directed our efforts toward that end. We would need to launch into low Earth orbit only about as many tons in the next 50 years as we have done in the last 50 years. But will we be wise enough to do this?
Colonization is necessary to avoid an inevitable extinction—Mars is the best place
Gott 9—Professor of Astrophysics at Princeton University, recipient of the Robert J. Trumpler Award, an Alfred P. Sloan Fellowship, the Astronomical League Award, and Princeton's President's Award for Distinguished Teaching [J. Richard, July 17th, “A Goal for the Human Spaceflight Program,” NASA, http://www.nasa.gov/pdf/368985main_GottSpaceflightGoal.pdf]
The goal of the human spaceflight program should be to increase the survival prospects of the human race by colonizing space. Self-sustaining colonies in space, which could later plant still other colonies, would provide us with a life insurance policy against any catastrophes which might occur on Earth. Fossils of extinct species offer ample testimony that such catastrophes do occur. Our species is 200,000 years old; the Neanderthals went extinct after 300,000 years. Of our genus (Homo) and the entire Hominidae family, we are the only species left. Most species leave no descendant species. Improving our survival prospects is something we should be willing to spend large sums of money on. Governments make large expenditures on defense for the survival of their citizens. The Greeks put all their books in the great Alexandrian library. I'm sure they guarded it very well. But eventually it burnt down taking all the books with it. It's fortunate that some copies of Sophocles' plays were stored elsewhere, for these are the only ones that we have now (7 out of 120 plays). We should be planting colonies off the Earth now as a life insurance policy against whatever unexpected catastrophes may await us on the Earth. Of course, we should still be doing everything possible to protect our environment and safeguard our prospects on the Earth. But chaos theory tells us that we may well be unable to predict the specific cause of our demise as a species. By definition, whatever causes us to go extinct will be something the likes of which we have not experienced so far. We simply may not be smart enough to know how best to spend our money on Earth to insure the greatest chance of survival here. Spending money planting colonies in space simply gives us more chances--like storing some of Sophocles' plays away from the Alexandrian library. If we made colonization our goal, we might formulate a strategy designed to increase the likelihood of achieving it. Having such a goal makes us ask the right questions. Where is the easiest place in space to plant a colony—the place to start? Overall, Mars offers the most habitable location for Homo sapiens in the solar system outside of Earth, as Bruce Murray has noted. Mars has water, reasonable gravity (1/3rd that of the Earth), an atmosphere, and all the chemicals necessary for life. Living underground (like some of our cave dwelling ancestors) would lower radiation risks to acceptable levels. The Moon has no atmosphere, less protection against solar flares and galactic cosmic rays, harsher temperature ranges, lower gravity (1/6th that of the Earth), and no appreciable water. Asteroids are similar. The icy moons of Jupiter and Saturn offer water but are much colder and more distant. Mercury and Venus are too hot, and Jupiter, Saturn, Uranus, and Neptune are inhospitable gas giants. Free floating colonies in space, as proposed by Gerard O'Neill, would need material brought up from planetary or asteroid surfaces. If we want to plant a first permanent colony in space, Mars would seem the logical place to start.
Now is the key time—decisions made now on Earth will determine the future of life in the universe.
Rees 3—Martin J. Rees, Professor of Cosmology and Astrophysics and Master of Trinity College at the University of Cambridge, Astronomer Royal and Visiting Professor at Imperial College London and Leicester University, Director of the Institute of Astronomy, Research Professor at Cambridge, 2003 (“Prologue,” Our Final Hour: A Scientist's Warning: How Terror, Error, And Environmental Disaster Threaten Humankind's Future In This Century—On Earth And Beyond, Published by Basic Books, ISBN 046506826, p. 7-8)
It may not be absurd hyperbole—indeed, it may not even be an overstatement—to assert that the most crucial location in space and time (apart from the big bang itself) could be here and now. I think the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century. Our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardise life's potential, foreclosing its human and posthuman future. What happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.