
Tech Defense

Artificial Intelligence

No Artificial Intelligence

No AI: you can’t program ideas, and that’s a crucial limitation


Weiler 14. Craig Weiler, parapsychologist. "Artificial Intelligence Is Not Taking Over the World," October 29, 2014. https://weilerpsiblog.wordpress.com/2014/10/29/artificial-intelligence-is-not-taking-over-the-world/

How does this relate to a self-driving car? The human mind has the mother of all shortcuts for dealing with vast amounts of data. Rather than have to learn, store and retrieve the patterns for every conceivable type of road, we only have to learn one thing: the idea of what a road is. What would require a computer to sift through terabytes of information, we accomplish with a single, not terribly complex (for us) idea. Once we have that idea we can not only recognize and navigate any passable road, but we can also navigate a car where no road exists (i.e. drive carefully on a relatively flat patch of dirt around a tree that has fallen on the road), because we have an idea of the conditions necessary for driving a car somewhere. An idea of a road encompasses all possible versions, real and imaginary, of what a road can be.

An idea observes physical reality, and computing IS physical reality. Another way to say this is that computing is always the observation, never the observer. An idea is not something physical; a computer program is, and therein lies the problem. We don’t know, and have no idea how to find out, how to get from something physical to something non-physical. That is the hard problem of consciousness. Clearly we have brains, which are physical and which are somehow necessary for consciousness. So there is definitely a relationship between consciousness and the physical world. That is not in doubt. The problem is that we have no idea what that relationship is, and until we understand it, true AI will remain a distant dream. We may have to completely rethink our beliefs about what consciousness is and how it originates. (That discussion is beyond the scope of this article.)

An idea is a product of consciousness, which is not material and cannot be duplicated by any physical process we know of. The idea of a road is not a representation of a road. Nor is it a specification or a diagram, although it can incorporate these things. The idea of a road can include every real and imaginary road, as well as any type of representation of a road in any medium in which it is recognizable, even barely, as a road. What’s important here is that the idea of a road can take an infinite number of variables into account, because an idea transcends the physical reality that it observes. Any computer, no matter how powerful, will never be able to do this. It cannot transcend its own physical reality.

You cannot compute your way to the creation of an idea. All you can do is define and refine an idea that you already have. And to define an idea is to eventually fall into the trap of infinite variables. (You can never fully define an idea, because there is an infinite number of definitions.) Increasing your computing power, memory and storage does not solve the problem of having to define everything. This is a crucial limitation of AI, because it is impossible to define everything. This is why the formation of ideas is central to real intelligence. Ideas and concepts allow us to process otherwise unimaginable amounts of information by relying on a core of intangible concepts that encompass nearly everything we can hope to encounter, rather than having to literally translate every single bit of input our minds receive. That’s why we can look at a road and identify it as a road with just a glance, without ever having to really think about it. Computers, of course, do not operate anything like this. We function with intangibles and a computer functions with tangibles. That is a huge difference.

So if you were worried that computers were going to take over the world, rest easy. Actual thinking is going to be the exclusive domain of living creatures for the foreseeable future.
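
To make the card's "trap of infinite variables" concrete, here is a toy Python sketch (mine, not Weiler's) of what defining "road" with explicit rules looks like: every predicate and threshold below is a hypothetical illustration, and the point is that the list of special cases never closes.

```python
# Toy illustration of the definition trap Weiler describes: a hand-written
# "idea" of a road is just an ever-growing pile of special cases.
# All names and thresholds here are hypothetical.

def is_road(surface: str, width_m: float, has_lane_markings: bool) -> bool:
    """Try to *define* 'road' with explicit rules."""
    if surface in {"asphalt", "concrete"} and width_m >= 2.5:
        return True
    if has_lane_markings:                        # painted parking lots? runways?
        return True
    if surface == "gravel" and width_m >= 3.0:   # but not every gravel strip
        return True
    # ...cobblestone, ice roads, flooded roads, a flat patch of dirt around
    # a fallen tree: each new case demands another rule, and the list of
    # exceptions never closes. That open-endedness is the card's point.
    return False

print(is_road("dirt", 4.0, False))  # False: the rules miss Weiler's dirt-patch example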

Tons of experts have a laundry list of problems with AI

Mollard and Ingham 14. Pascale Mollard and Richard Ingham; Ingham is AFP's science, health and environment coordinator, based in Paris. "Artificial intelligence: Hawking's fears stir debate," December 6, 2014. http://phys.org/news/2014-12-artificial-intelligence-hawking-debate.html

Other experts said "true" AI—loosely defined as a machine that can pass itself off as a human being or think creatively—was at best decades away, and cautioned against alarmism. Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong. "Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong says in a book, "Smarter than Us: The Rise of Machine Intelligence."

Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top." "Many things in AI unleash emotion and worry because it changes our way of life," he said. "Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion." "It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France. "Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."

Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI. Recent years have seen dramatic gains in data-processing speed, spurring flexible software that enables a machine to learn from its mistakes, he said. Balance and reflexes, too, have made big advances. Tucker pointed to the US firm Boston Dynamics as being in the research vanguard. It has designed four-footed robots called BigDog, with funding from the Pentagon's hi-tech research arm. "These things are incredible tools that are really adaptive to an environment, but there is still a human there, directing them," said Tucker. "To me, none of these are close to what true AI is."

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress." Despite big strides in recognition programmes and language cognition, robots perform poorly in open, messy environments where there is lots of noise, movement, objects and faces, said Cohn. Such situations require machines to have what humans possess naturally and in abundance—"commonsense knowledge" to make sense of things.

Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are... well, machines. "We've evolved over however many millennia to be what we are, and the motivation is survival," he said. "That motivation is hard-wired into us. It's key to AI, but it's very difficult to implement."
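
As a concrete gloss on the "software that learns from its mistakes" the card mentions, here is a minimal perceptron sketch in Python. It is my illustration of mistake-driven learning, not anything from the article; the toy data and learning rate are assumptions.

```python
# Minimal sketch of mistake-driven learning: a perceptron updates its
# weights only when it misclassifies an example.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # a mistake: nudge weights toward the label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Hypothetical data: learn an AND-like rule (+1 only when both inputs are 1).
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
print(train_perceptron(data))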

Three barriers to AI mean no impact


Bradford 15. Contel Bradford, freelance journalist specializing in information technology. "The Facts, Fiction, and Future of Artificial Intelligence," January 13, 2015. http://www.storagecraft.com/blog/facts-fiction-future-artificial-intelligence/

Human experience: In order to learn a new language with fluid efficiency, you have to spend time conversing with others who are familiar with the slang, culture, social context of the terminology, and other things that can’t all be learned from a piece of dictation software. A machine faces even greater challenges in this scenario due to its lack of cultural experiences, hardships, and everyday human interactions.

An independent operator: The thought of robots rebelling against mankind makes for an intriguing plot, but due to their reliance on human assistance, it may never be possible. Even the most skilled of smart machines still depend on man to write their automation capabilities, perform repairs, and handle general maintenance. And let’s not forget those crucial OS updates. Without us, the most advanced AI machines would be non-functional sooner or later.

Social savvy: Every day, computers talk to one another to connect our systems to the Internet and power the network operations we thrive on. Like all aspects of technology, machine-to-machine interactions can and will evolve, but there is a limit to their communication capabilities. The inability to proactively socialize means a super-intelligent computer can’t coordinate sadistic schemes from scratch with like systems, or even sway disloyal people into betraying their own kind – though that might not take much convincing.

No world domination from AI, and it’s too complicated to happen anyway


Scharf 15. Caleb A. Scharf, director of Columbia University's multidisciplinary Astrobiology Center. "Is AI Dangerous? That Depends…," February 13, 2015. http://blogs.scientificamerican.com/life-unbounded/is-ai-dangerous-that-depends-8230/

Except it's a little hard to find any details of what exactly that existential threat is perceived to be. Hawking has suggested that it might be the capacity of a strong AI to 'evolve' much, much faster than biological systems - ultimately gobbling up resources without a care for the likes of us. I think this is a fair conjecture. AI's threat is not that it will be a sadistic megalomaniac (unless we deliberately, or carelessly, make it that way) but that it will follow its own evolutionary imperative.

It's tempting to suggest that a safeguard would be to build empathy into an AI. But I think that fails in two ways. First, most humans have the capacity for empathy, yet we continue to be nasty, brutish, and brutal to ourselves and to pretty much every other living thing on the planet. The second failure point is that it's not clear to me that true, strong AI is something that we can engineer in a pure step-by-step way; we may need to allow it to come into being on its own.

What does that mean? Current efforts in areas such as computational 'deep learning' involve algorithms constructing their own probabilistic landscapes for sifting through vast amounts of information. The software is not necessarily hard-wired to 'know' the rules ahead of time, but rather to find the rules, or to be amenable to being guided to the rules - for example in natural language processing. It's incredible stuff, but it's not clear that it is a path to AI that has equivalency to the way humans, or any sentient organisms, think. This has been hotly debated by the likes of Noam Chomsky (on the side of skepticism) and Peter Norvig (on the side of enthusiasm). At a deep level it is a face-off between science focused on underlying simplicity, and science that says nature may not swing that way at all.

An alternative route to AI is one that I'll propose here (and it's not original). Perhaps the general conditions can be created from which intelligence can emerge. On the face of it this seems fairly ludicrous, like throwing a bunch of spare parts in a box and hoping for a new bicycle to appear. It's certainly not a way to treat AI as a scientific study. But if intelligence is the emergent - evolutionary - property of the right sort of very, very complex systems, could it happen? Perhaps. One engineering challenge is that it may take a system of the complexity of a human brain to sustain intelligence, but of course our brains co-evolved with our intelligence. So it's a bit silly to imagine that you could sit down and design the perfect circumstances for a new type of intelligence to appear, because we don't know exactly what those circumstances should be.
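
Scharf's point that deep-learning systems "find the rules" rather than having them hard-wired can be illustrated with a tiny, dependency-free sketch: logistic regression fitted by gradient descent. This is my simplification of the idea, not Scharf's code; the toy data and step size are assumptions.

```python
import math

# Sketch of 'finding the rules from data': nothing below encodes the
# decision rule directly; the weights are discovered by gradient descent.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(samples, steps=2000, lr=0.5):
    """samples: list of (features, label) pairs with label in {0, 1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(steps):
        for x, y in samples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical data: two clusters; the boundary is learned, not written.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
print(fit_logistic(data))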


Artificial Intelligence Benevolent

AI is here to help & it’s just a tool


Murphy 15. Mike Murphy, Silicon Valley writer and web producer at the Mercury News. "Google A.I. chief: Relax, robots aren't going to kill us all," June 8, 2015. http://www.siliconbeat.com/2015/06/08/google-a-i-chief-relax-robots-arent-going-to-kill-us-all/

The head of artificial intelligence at Google’s DeepMind thinks we’re all being silly to worry about machines rising up to crush humanity. (Or maybe he’s just a superintelligent machine sent from the future to lull us into a false sense of security.) Mustafa Suleyman, co-founder of DeepMind, which Google bought last year, told a London audience Friday that critics and worrywarts have it all wrong. “Whether it’s Terminator coming to blow us up or mad scientists looking to create quite perverted women robots, this narrative has somehow managed to dominate the entire landscape, which we find really quite remarkable,” Suleyman said, according to a Wall Street Journal report. It’s not just conspiracy-theory crackpots and “Terminator” fans who are wary — Tesla’s Elon Musk has called artificial intelligence “potentially more dangerous than nukes,” and Stephen Hawking warned A.I. could “spell the end of the human race.” But Suleyman says we’re looking at it from the wrong perspective: “The way we think about A.I. is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.”
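
Suleyman's framing of AI as "a tool whose capabilities we limit" can be sketched as an explicit allowlist around whatever the system is permitted to do. This is purely my illustration of the stance; the action names are hypothetical and nothing here comes from DeepMind.

```python
# Toy sketch of capability limiting: the controller refuses any action
# that is not on an explicit allowlist chosen by its human operators.

ALLOWED_ACTIONS = {"summarize_text", "translate_text", "plan_route"}

def run_action(action: str, handler):
    """Execute a handler only if the requested action is permitted."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"{action!r} is outside this tool's remit")
    return handler()

print(run_action("summarize_text", lambda: "three-sentence summary"))
# run_action("launch_missiles", lambda: ...)  -> raises PermissionError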

