Geist ‘15 (Edward Moore Geist, MacArthur Nuclear Security Fellow at Stanford University's Center for International Security and Cooperation, 8/9/15, “Is artificial intelligence really an existential threat to humanity?,” http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577)(Shiv)
A misguided approach to the control problem. The findings of artificial intelligence researchers bode ill for Bostrom’s recommendations for how to prevent superintelligent machines from determining the fate of mankind. The second half of Superintelligence is devoted to strategies for approaching what Bostrom terms the “control problem.” While creating economic or ecological incentives for artificial intelligences to be friendly toward humanity might seem like an obvious way to keep AI under control, Bostrom has little faith in such measures; he believes the machines will be powerful enough to subvert them if they want to. Dismissing “capability control” as “at best, a temporary and auxiliary measure,” he focuses the bulk of his analysis on “giving the AI a final goal that makes it easier to control.” Although Bostrom acknowledges that formulating an appropriate goal is likely to be extremely challenging, he is confident that intelligent machines will aggressively protect their “goal content integrity” no matter how powerful they become, an idea he appears to have borrowed from AI theorist Stephen Omohundro. Bostrom devotes several chapters to specifying goals that can be incorporated into “seed AIs” so that they will protect human interests once they become superintelligent.
Even if machines are somehow able to develop the kind of godlike superintelligence Bostrom envisions, artificial intelligence researchers have learned the hard way that the nature of reason itself will work against this plan to solve the “control problem.” The failure of early AI programs such as the General Problem Solver to handle real-world problems stemmed in considerable part from their inability to redefine their internal problem representations: if the designers failed to provide an efficient way to represent a problem in the first place, the programs usually choked.
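To make the representational point concrete, here is a minimal sketch (not from Geist's article; the operator names, facts, and solve function are all illustrative assumptions) of a GPS-style state-space search. Its success or failure is determined entirely by the encoding the designer supplies up front; the program has no way to reformulate that encoding on its own.

```python
# Hypothetical sketch of a GPS-style search over a fixed problem
# representation. All names here are illustrative, not from any source.
from collections import deque

# The designer fixes the representation in advance: a state is a set of
# facts, and each operator maps precondition facts to added facts.
OPERATORS = {
    "boil_water": (frozenset(["have_water"]), frozenset(["have_hot_water"])),
    "add_tea":    (frozenset(["have_hot_water"]), frozenset(["have_tea"])),
}

def solve(start, goal):
    """Breadth-first search over the designer-supplied state encoding."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, post) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset(state | post)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    # The program "chokes": it cannot re-represent the problem, only fail.
    return None

# Succeeds only because the right facts were encoded in advance:
print(solve({"have_water"}, frozenset(["have_tea"])))   # ['boil_water', 'add_tea']
# With the relevant fact missing from the encoding, the search simply fails:
print(solve({"have_kettle"}, frozenset(["have_tea"])))  # None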