2NC – ! AI Outweighs
AI outweighs nuclear war.
Turchin & Denkenberger ‘18 (Alexey Turchin & David Denkenberger 18. Turchin is a researcher at the Science for Life Extension Foundation; Denkenberger is with the Global Catastrophic Risk Institute (GCRI) @ Tennessee State University, Alliance to Feed the Earth in Disasters (ALLFED). 05/03/2018. “Classification of Global Catastrophic Risks Connected with Artificial Intelligence.” AI & SOCIETY, pp. 1–17.)(Shiv)
According to Yampolskiy and Spellchecker (2016), the probability and seriousness of AI failures will increase with time. We estimate that they will reach their peak between the appearance of the first self-improving AI and the moment that an AI or group of AIs reaches global power, and will later diminish, as late-stage AI halting seems to be a low-probability event. AI is an extremely powerful and completely unpredictable technology, millions of times more powerful than nuclear weapons. Its existence could create multiple individual global risks, most of which we cannot currently imagine. We present several dozen separate global risk scenarios connected with AI in this article, but it is likely that some of the most serious are not included. The sheer number of possible failure modes suggests that there are more to come.
Greatest existential risk.