China Relations Core - Berkeley 2016
Pamlin and Armstrong 15 [Dennis Pamlin -- Executive Project Manager @ Global Challenges Foundation and Dr Stuart Armstrong -- James Martin Research Fellow @ Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford, “12 Risks that threaten human civilization”, Published February 2015 by Global Challenges Foundation, http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf]-DD
Artificial Intelligence (AI) is one of the least understood global challenges. There is considerable uncertainty about the timescale on which an AI could be built, if at all, and expert opinion has proven very unreliable in this domain.481 This uncertainty is bidirectional: AIs could be developed much sooner or much later than expected. Despite the uncertainty of when and how AI could be developed, there are reasons to suspect that an AI with human-comparable skills would be a major risk factor. AIs would immediately benefit from improvements to computer speed and from any computer research. They could be trained in specific professions and copied at will, replacing most human capital in the world and causing potentially great economic disruption. Through their advantages in speed and performance, and through their better integration with standard computer software, they could quickly become extremely intelligent in one or more domains (research, planning, social skills...). If they became skilled at computer research, the resulting recursive self-improvement could generate what is sometimes called a “singularity”,482 but is perhaps better described as an “intelligence explosion”,483 with the AI’s intelligence increasing very rapidly.484 Such extreme intelligences could not easily be controlled (either by the groups creating them or by some international regulatory regime),485 and for almost all initial AI motivations they would probably act to boost their own intelligence and acquire maximal resources.486 And if those motivations do not specify487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk,488 in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction.
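
The recursive self-improvement dynamic the card describes can be illustrated with a toy model. This is a minimal sketch, not anything Pamlin and Armstrong propose: it assumes each round of self-improvement adds capability in proportion to the square of current capability, so progress compounds on itself. The growth rule, the constants, and the function name are illustrative assumptions only.

# Minimal sketch of the "intelligence explosion" feedback loop described in the card.
# Assumption (not from the source): each improvement step adds capability proportional
# to the square of current capability, so smarter systems make bigger improvements.
# All constants and names below are illustrative.

def recursive_self_improvement(initial=1.0, gain=0.1, steps=15, cap=1e6):
    """Return the capability trajectory of a system that improves its own improver."""
    capability = initial
    trajectory = [capability]
    for _ in range(steps):
        capability += gain * capability ** 2  # bigger gains as capability grows
        trajectory.append(capability)
        if capability > cap:  # stop once growth has clearly "exploded"
            break
    return trajectory

if __name__ == "__main__":
    for step, level in enumerate(recursive_self_improvement()):
        print(f"step {step:2d}: capability {level:12.2f}")

Run as-is, the trajectory stays near its starting point for the first several steps and then climbs by orders of magnitude within a few more, which is the qualitative shape of the card's claim: slow, unremarkable progress followed by a very rapid takeoff once the system's own research ability becomes the main driver of further gains.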