Name: _______________  Ms. Manning, AP Capstone Seminar: Summer Assignment 2017



[Painting: from Paul Ford, MIT Technology Review, Feb. 11, 2015]

Analyze the painting above. What do you think is the artist’s main idea or claim?

What evidence can be seen in the work of art that supports the artist’s main idea?

Sources E and F

Instructions:


  1. Summarize each source, giving the author's main idea or thesis and analyzing the evidence and reasoning in each.

  2. Then, write your thoughts about these two sources in a “thought piece” of between 200 and 250 words. A thought piece is like an extended paragraph or two in which you write your thoughts on the issue and use evidence from the sources to support your opinions. Please type this (double-spaced) and attach.

Source E

The future of artificial intelligence: benevolent or malevolent?

George Michael, Skeptic 20.1 (Winter 2015): p. 57

In his 2013 State of the Union Address, President Barack Obama announced federal funding for an ambitious scientific endeavor christened the BRAIN (Brain Research Through Advancing Innovative Neurotechnologies) Initiative. The $3 billion project seeks to unlock the secrets of the brain by mapping its electrical pathways. That same year, the European Union unveiled its Human Brain Project, which will use the world's largest computers to create a copy of the human brain made of transistors and metal. Generous funding to the tune of 1.19 billion euros (about $1.6 billion) has been earmarked for this effort.

These two ambitious studies could create a windfall by generating new discoveries for treating incurable diseases and spawning new industries. Concomitant with these projects are exciting new developments in the field of artificial intelligence (AI)--that is, computer engineering efforts to develop machine-based intelligence that can mimic the human mind. Concrete progress toward this goal was realized in June of 2014, when it was announced that a computer had just passed the "Turing Test"--the ability to exhibit intelligent behavior indistinguishable from that of a human. At a test competition organized by Kevin Warwick, a so-called "chatterbot" presenting the personality of a 13-year-old boy convinced 33 percent of the judges that it was human. (1) Two recent books examine trends in these areas of research and their implications.

In The Future of the Mind, Michio Kaku, a professor of theoretical physics at the City College and City University of New York, draws upon numerous fields, including biotechnology, psychology, evolutionary theory, robotics, physics, and futurism, to survey what lies ahead for the human race on the cusp of what could be a quantum leap in intelligence. As Kaku explains, the introduction of MRI machines could do for brain research what the telescope did for astronomy. Just as humankind learned more about the cosmos in the 15 years after the invention of the telescope than in all of previous history, so advanced brain scans in the mid-1990s and 2000s have transformed neuroscience. Physicists played an important role in this endeavor, as they were involved in the development of a plethora of new diagnostic instruments used for brain scans, including magnetic resonance imaging (MRI), electroencephalography (EEG), computerized axial tomography (CAT), and positron emission tomography (PET).

Getting to our current level of human intelligence involved many evolutionary pathways. Previously in our evolution, those humans who survived and thrived in the grasslands were those who were adept at tool making, which required increasingly larger brains. The development of language was believed to have accelerated the rise of intelligence insofar as it enhanced abstract thought and the ability to plan and organize society. With these new capabilities, humans could join together to form hunting teams, which increased their likelihood of survival and passing on their genes. The increase in intelligence and expressive capabilities led to the emergence of politics as humans formed factions to vie for control of the tribe. What was essential to this progress was the ability to anticipate the future. Whereas animals create a model of the world in relation to space and one another, Kaku develops a "space-time theory of consciousness" for human psychology implying that humans, unlike other animals, create a model of the world in relation to time, both forward and backward. He argues that humans are alone in the animal kingdom in understanding the concept of tomorrow. Thus the human brain can be characterized as an "anticipation machine."

Kaku employs the metaphor of a CEO for how the human brain functions, in which numerous parties in a corporation clamor for the attention of the chief executive officer. The notion of a singular "I" making all of our decisions continuously is an illusion created by our subconscious minds, says Kaku; instead, consciousness amounts to a maelstrom of events distributed throughout our brains. When one competing process trumps the others, the brain rationalizes the outcome after the fact and concocts the impression that a single "self" decided the outcome.

Genetic engineering might someday be used to enhance human intelligence. By manipulating only a handful of genes, it could be possible to increase our I.Q. Brain research suggests that a series of genes acting together in complex ways is responsible for the human intellect. There is an upper ceiling, however, on how smart we could become: as Kaku notes, the laws of physics and of nature limit the growth and development of our brains. For a variety of reasons, it is not physically feasible to increase human brain size or add to the length of neurons. Thus, he says, any further enhancement of intelligence must come from external means.

In the field of medicine, brain research could increase longevity and enhance the quality of life for many patients. Engineers are currently working to create a "robo-doc," which could screen people and give basic medical advice with 99 percent accuracy almost for free. Such a device could do much to bring down accelerating healthcare costs. Through the fusion of robotics and brain research, paralyzed patients could one day use telekinesis to move artificial limbs. Complete exoskeletons would enable paraplegics to walk about and function like whole people. Taking this principle a step further, people could control androids from pods and live their lives through attractive alter egos in the style of the 2009 movie Surrogates starring Bruce Willis. Perhaps AI may even allow people to one day escape their bodies completely and transition to a post-biological existence.

Funding for artificial intelligence has gone through cycles of growth and retrenchment. Initial optimism is often followed by frustration as scientists realize the daunting task of reverse-engineering the brain. The two most fundamental challenges confronting AI are replicating pattern recognition and common sense. Our subconscious minds perform trillions of calculations when carrying out pattern recognition exercises, yet the process seems effortless. Duplicating this process in a computer is a tall order. In point of fact, the digital computer is not really a good analog of the human brain, as the latter operates as a highly sophisticated neural network. Unlike a computer, the human mind has no fixed architecture; instead, collections of neurons constantly rewire and reinforce themselves after learning a task. What is more, we now know that most human thought actually takes place in the subconscious, which remains something of a black box in brain research. The conscious part of our mind represents only a tiny part of our computations.

Kaku asks an important question: How should we deal with robot consciousness that could decide the future of the human race? An artificially intelligent entity programmed for self-preservation would stop at nothing to prevent someone from pulling the plug. Because of their superior ability to anticipate the future, "robots could plot the outcomes of many scenarios to find the best way to overthrow humanity." This ability could lead the way for a real-life Terminator scenario. In fact, Predator drones may soon be equipped with face recognition technology and permission to fire when they are reasonably confident of the identity of their targets. Furthermore, inasmuch as robots are likely to reflect the particular ethics and moral values of their creators, Kaku sees the potential for conflict between them, a scenario perhaps not unlike that depicted in the Transformers movie series. Finally, Kaku speculates on what form advanced extraterrestrial intelligence might take. Assuming that once intelligent life emerges it will continue to advance, our first contact with superior life outside of Earth could be with intelligent supercomputer entities that have long abandoned their biological bodies in exchange for more efficient and durable computational bodies.

Whereas Kaku's tone on AI is mostly optimistic, James Barrat's prognosis is dystopian to the point where our very existence may be threatened by AI. In Our Final Invention, the documentary filmmaker warns about the looming threat of smart machines. For his research he interviewed a number of leading scientists in the fields of AI and robotics. Although all of his subjects were confident that someday all important decisions governing the lives of humans would be made by machines, or humans whose intelligence is augmented by machines, they were uncertain when this epoch would be reached and what its implications might be.

Much of Barrat's book is devoted to countering the optimism of the so-called "singularitarians." Vernor Vinge first coined the term singularity in 1993 in an address to NASA called "The Coming Technological Singularity." The term was then popularized by Ray Kurzweil, a noted inventor, entrepreneur, and futurist who predicted that by the year 2045 we would reach the Singularity--"a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed." As he explained in his book The Singularity Is Near, people will begin the process of leaving their biological bodies and melding with computers. He predicts that by the end of the 21st century the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human intelligence. An unabashed technological optimist, Kurzweil believes that the singularity will herald a new era in human history in which problems such as hunger, disease, and even mortality will be solved. Based on the notion of accelerating returns, if humans survive this milestone, the 21st century should witness technological progress equivalent to 200,000 years. Inasmuch as technological evolution tends to occur not in linear trends but in exponential ones, scientific development will advance so rapidly that the fabric of history will be torn. Singularitarians anticipate a future in which AI will allow us to realize our utmost potential.

The singularitarian movement has strong religious overtones, and Barrat argues that its outlook is overly optimistic. In contrast to Kurzweil, Barrat fears that humans will eventually be left out of this historical process and relegated to the dustbin of evolution. Holding extreme misgivings about artificial intelligence, he warns that the singularitarians are naive about the peril posed by self-aware machines. The more sanguine scientists believe that this process will be friendly and collaborative, akin more to a handover than a takeover; Barrat, however, argues that such an assumption is misguided. Instead, he avers that the process will be unpredictable and inscrutable. He fears that we could lose control over AI and that the results could be catastrophic. Hence, the ultra-intelligent machine could be our final invention.

As Barrat explains, trying to fathom the values of an entity a million times more intelligent than humans is beyond our comprehension. Simply put, the machine will not have human-like motives because it will not have a human psyche. Though AI may harbor no ill will toward humanity, the latter could get in its way and be deemed expendable. He finds it irrational to assume that an entity that is far more intelligent than we are, and that did not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations, will necessarily want to protect us. As he argues:

You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn more about sports injuries? We don't hate mice or monkeys, yet we treat them cruelly. Superintelligent AI wouldn't have to hate us to destroy us.

As Barrat notes, the way we treat our closest relatives--the great apes--is not reassuring: those chimpanzees, orangutans, and gorillas that are not already bush meat, zoo inmates, or show-biz clowns are either endangered or living on borrowed time.

Even today, computers are responsible for important decisions that affect the economy. In the realm of finance, up to 70 percent of Wall Street's equity trades are now made by computerized high-frequency trading systems--supercomputers that use algorithms to take advantage of split-second opportunities in price fluctuations of stocks. In recent years, Wall Street has been using agent-based financial modeling that simulates the entire stock market, and even the entire economy, to improve forecasting. Barrat fears that the intelligence explosion in the computational finance domain will be opaque for at least four reasons. First, it will probably take place in various "black box" artificial intelligence techniques closed to outsiders. Second, the high-bandwidth, millisecond-fast transmissions will take place faster than humans can react to them, as witnessed during the so-called Flash Crash on May 6, 2010, when the Dow Jones Industrial Average plummeted by 1,000 points within minutes. Third, the system is extremely complex and thus beyond the understanding of most financial analysts. And finally, any AI system implemented on Wall Street would more than likely be treated as proprietary information and kept secret as long as it makes money for its creators. In the near future, it is reasonable to assume that computer technology will have the power to end lives. As Barrat points out, semi-autonomous robotic drones now kill dozens of people each year on the battlefield.

Nefarious forms of quasi-artificial intelligence are already upon us. For example, "botnets" that hijack infected computers (unbeknownst to their users) and launch DDoS (distributed denial-of-service) attacks are designed to crash and/or jam targeted networks. For Barrat, it would seem to follow logically that as AI develops, it will be used for cybercrime. Ominously, cyber-sabotage could be directed at critical infrastructure. If, for instance, the power grid were taken down, it would have catastrophic results. As an example of the great peril posed by semi-autonomous computer programs, Barrat cites the case of a joint U.S.-Israeli cyber campaign against Iran dubbed "Olympic Games," which unleashed the Stuxnet computer virus. Stuxnet was designed to destroy machinery, specifically the centrifuges in the Natanz nuclear enrichment facility in Iran. Highly effective, the worm crippled between 1,000 and 2,000 centrifuges and set Iran's nuclear weapons program back two years. But as Barrat warns, malware of this sort does not simply go away; thousands of copies of the virus escaped the Natanz plant and infected other PCs around the world. Barrat warns that such cyber operations are terribly short-sighted and carry a high risk of blowback. As he explains, now that Stuxnet is out in the public domain, it has dramatically lowered the cost of a potential terrorist attack on the U.S. electrical grid to about a million dollars.

Perhaps in the not-so-distant future, computers will be autonomous agents making decisions without guidance from human programmers. Moreover, the transition from artificial general intelligence to artificial super intelligence could come swiftly and without forewarning, and thus we will not have adequate time to prepare for it. Once it has access to the Internet, an AI entity could find the fulfillment of all its needs, not unlike the scenario depicted in the 2014 film Transcendence, in which Johnny Depp starred as the mind behind a supercomputer.

To be safe, Barrat advises that AI should be developed with something akin to consciousness and human understanding built in. But even this feature could be dangerous. After all, a machine could pretend to think like a human and produce human-like answers while it prepared to implement its own agenda.

Kurzweil has argued that one way to limit the potentially dangerous aspects of artificial intelligence is to pair it with humans through intelligence augmentation. As AI becomes intimately embedded in our bodies and brains, it will begin to reflect our values. But Barrat counters that super-intelligence could be a violence multiplier, turning grudges into killings and disagreements into disasters, not unlike how a gun can turn a fistfight into murder. Today, much of the cutting-edge AI research is being undertaken by the Pentagon. The Defense Advanced Research Projects Agency (DARPA) has been investigating ways to implement artificial intelligence to gain an advantage on the battlefield. Put simply, intelligence augmentation is no moral fail-safe.

Invoking the Precautionary Principle, Barrat counsels that if the consequences of an action are unknown but judged by some scientists to carry a risk of being catastrophic, then it is better not to carry out the action. He concedes, however, that relinquishing the pursuit of artificial general intelligence is no longer a viable option, since doing so would cede the opportunity to rogue nations and gangsters who might not be as scrupulous in engineering safeguards against malevolent AI. There is a decisive first-mover advantage in AI development in the sense that whoever first attains it will create the conditions necessary for an intelligence explosion. And they can pursue this goal not necessarily for malevolent reasons, but because they will anticipate that their chief competitors, whether corporate or military, will be doing the same.

Perhaps the best course of action would be to incrementally integrate components of artificial intelligence with the human brain. The next step in intelligence augmentation would be to put all of the enhancements contained in a smartphone inside of us and connect them to our brains. A human along with Google is already an example of artificial super-intelligence. Inasmuch as AI is developed by humans, Kurzweil argues that it will reflect our values. He maintains that future machines will still be human even if they are not biological. To be safe, Barrat recommends applying a cluster of defenses that could mitigate the harmful consequences of malevolent AI, including programming in human features, such as ethics and emotions. These qualities will probably have to be implemented in stages because of the complexity involved, but by doing so, we could derive enormous benefits from machine-based intelligence without being consigned to evolutionary obsolescence.

Source F

How to make a mind: can non-biological brains have real minds of their own?

Ray Kurzweil, The Futurist 47.2 (March-April 2013): p. 14

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration.

This capability takes place in a brain structure called the neocortex, which in humans has achieved a threshold of sophistication and capacity such that we are able to call these patterns ideas. We are capable of building ideas that are ever more complex. We call this vast array of recursively linked ideas knowledge. Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

We are now in a position to speed up the learning process by a factor of thousands or millions once again by migrating from biological to nonbiological intelligence. Once a digital neocortex learns a skill, it can transfer that know-how in minutes or even seconds. Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart.

Consider the benefits. Electronic circuits are millions of times faster than our biological circuits. At first we will have to devote all of this speed increase to compensating for the relative lack of parallelism in our computers. Parallelism is what gives our brains the ability to do so many different types of operations--walking, talking, reasoning--all at once, and perform these tasks so seamlessly that we live our lives blissfully unaware that they are occurring at all. The digital neocortex will be much faster than the biological variety and will only continue to increase in speed.

When we augment our own neocortex with a synthetic version, we won't have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. We have about 300 million pattern recognizers in our biological neocortex. That's as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80% of the available space. As soon as we start thinking in the cloud, there will be no natural limits--we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time.

In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does. Once a single digital neocortex somewhere and at some time learns something, however, it can share that knowledge with every other digital neocortex without delay. We can each have our own private neocortex extenders in the cloud, just as we have our own private stores of personal data today.

Last but not least, we will be able to back up the digital portion of our intelligence. It is frightening to contemplate that none of the information contained in our neocortex is backed up today. There is, of course, one way in which we do back up some of the information in our brains: by writing it down. The ability to transfer at least some of our thinking to a medium that can outlast our biological bodies was a huge step forward, but a great deal of data in our brains continues to remain vulnerable.



The Next Chapter in Artificial Intelligence

Artificial intelligence is all around us. The simple act of connecting with someone via a text message, e-mail, or cell-phone call uses intelligent algorithms to route the information. Almost every product we touch is originally designed in a collaboration between human and artificial intelligence and then built in automated factories. If all the AI systems decided to go on strike tomorrow, our civilization would be crippled: We couldn't get money from our bank, and indeed, our money would disappear; communication, transportation, and manufacturing would all grind to a halt. Fortunately, our intelligent machines are not yet intelligent enough to organize such a conspiracy.

What is new in AI today is the viscerally impressive nature of publicly available examples. For example, consider Google's self-driving cars, which as of this writing have gone over 200,000 miles in cities and towns. This technology will lead to significantly fewer crashes and increased capacity of roads, alleviate the requirement of humans to perform the chore of driving, and bring many other benefits.

Driverless cars are actually already legal to operate on public roads in Nevada with some restrictions, although widespread usage by the public throughout the world is not expected until late in this decade. Technology that intelligently watches the road and warns the driver of impending dangers is already being installed in cars. One such technology is based in part on the successful model of visual processing in the brain created by MIT's Tomaso Poggio. Called MobilEye, it was developed by Amnon Shashua, a former postdoctoral student of Poggio's. It is capable of alerting the driver to such dangers as an impending collision or a child running in front of the car and has recently been installed in cars by such manufacturers as Volvo and BMW.

I will focus now on language technologies for several reasons: Not surprisingly, the hierarchical nature of language closely mirrors the hierarchical nature of our thinking. Spoken language was our first technology, with written language as the second. My own work in artificial intelligence has been heavily focused on language. Finally, mastering language is a powerfully leveraged capability. Watson, the IBM computer that beat two former Jeopardy! champions in 2011, has already read hundreds of millions of pages on the Web and mastered the knowledge contained in these documents. Ultimately, machines will be able to master all of the knowledge on the Web--which is essentially all of the knowledge of our human-machine civilization.

One does not need to be an AI expert to be moved by the performance of Watson on Jeopardy! Although I have a reasonable understanding of the methodology used in a number of its key subsystems, that does not diminish my emotional reaction to watching it--him?--perform. Even a perfect understanding of how all of its component systems work would not help you to predict how Watson would actually react to a given situation. It contains hundreds of interacting subsystems, and each of these is considering millions of competing hypotheses at the same time, so predicting the outcome is impossible. Doing a thorough analysis--after the fact--of Watson's deliberations for a single three-second query would take a human centuries.

One limitation of the Jeopardy! game is that the answers are generally brief: It does not, for example, pose questions of the sort that ask contestants to name the five primary themes of A Tale of Two Cities. To the extent that it can find documents that do discuss the themes of this novel, a suitably modified version of Watson should be able to respond to this. Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even without the words) of other thinkers, is another matter. Doing so would constitute a higher-level task than Watson is capable of today.

It is noteworthy that, although Watson's language skills are actually somewhat below those of an educated human, it was able to defeat the two best Jeopardy! players in the world. It could accomplish this because it is able to combine its language ability and knowledge understanding with the perfect recall and highly accurate memories that machines possess. That is why we have already largely assigned our personal, social, and historical memories to them.

Wolfram|Alpha is one important system that demonstrates the strength of computing applied to organized knowledge. Wolfram|Alpha is an answer engine (as opposed to a search engine) developed by British mathematician and scientist Stephen Wolfram and his colleagues at Wolfram Research. For example, if you ask Wolfram|Alpha, "How many primes are there under a million?" it will respond with "78,498." It does not look up the answer; it computes it, and following the answer it provides the equations it used. If you attempted to get that answer using a conventional search engine, it would direct you to links where you could find the algorithms required. You would then have to plug those formulas into a system such as Mathematica, also developed by Wolfram, but this would obviously require a lot more work (and understanding) than simply asking Alpha.
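To make concrete what "computing rather than looking up" means here, the short Python sketch below (an illustration only, not Wolfram's actual method) counts the primes below one million with a sieve of Eratosthenes and arrives at the same 78,498 cited above.

```python
# Illustrative only: count the primes below 1,000,000 by direct computation,
# the kind of work an answer engine does instead of retrieving a stored fact.

def count_primes_below(limit: int) -> int:
    if limit < 3:
        return 0
    sieve = bytearray([1]) * limit          # assume every number is prime...
    sieve[0] = sieve[1] = 0                 # ...except 0 and 1
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            # cross off every multiple of n starting at n*n
            sieve[n * n::n] = bytearray(len(sieve[n * n::n]))
    return sum(sieve)                       # remaining 1s are the primes

print(count_primes_below(1_000_000))        # prints 78498
```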

Indeed, Alpha consists of 15 million lines of Mathematica code. What Alpha is doing is literally computing the answer from approximately 10 trillion bytes of data that has been carefully curated by the Wolfram Research staff. You can ask a wide range of factual questions, such as, "What country has the highest GDP per person?" (Answer: Monaco, with $212,000 per person in U.S. dollars), or "How old is Stephen Wolfram?" (he was born in 1959; the answer is 52 years, 9 months, 2 days on the day I am writing this). Alpha is used as part of Apple's Siri; if you ask Siri a factual question, it is handed off to Alpha to handle. Alpha also handles some of the searches posed to Microsoft's Bing search engine.

Wolfram reported in a recent blog post that Alpha is now providing successful responses 90% of the time. He also reports an exponential decrease in the failure rate, with a half-life of around 18 months. It is an impressive system that uses handcrafted methods and hand-checked data. It is a testament to why we created computers in the first place. As we discover and compile scientific and mathematical methods, computers are far better than unaided human intelligence at implementing them. Most of the known scientific methods have been encoded in Alpha.
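As a rough illustration of what an 18-month half-life implies, the Python sketch below projects the failure rate forward, assuming (purely for the arithmetic) that it starts at the 10% implied by the 90% success figure; the function name and starting value are my own, not Wolfram's.

```python
# Illustrative arithmetic only: project a failure rate that halves every
# 18 months, starting from the ~10% implied by a 90% success rate.

def projected_failure_rate(initial_rate: float, months: float,
                           half_life_months: float = 18.0) -> float:
    """Exponential decay: rate(t) = initial_rate * 0.5 ** (t / half_life)."""
    return initial_rate * 0.5 ** (months / half_life_months)

for months in (0, 18, 36, 54):
    rate = projected_failure_rate(0.10, months)
    print(f"after {months:2d} months: {rate:.3%} failures")
# after  0 months: 10.000% failures
# after 18 months: 5.000% failures
# after 36 months: 2.500% failures
# after 54 months: 1.250% failures
```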

In a private conversation I had with him, Wolfram estimated that self-organizing methods such as those used in Watson typically achieve about an 80% accuracy when they are working well. Alpha, he pointed out, is achieving about a 90% accuracy. Of course, there is self-selection in both of these accuracy numbers, in that users (such as myself) have learned what kinds of questions Alpha is good at, and a similar factor applies to the self-organizing methods. Some 80% appears to be a reasonable estimate of how accurate Watson is on Jeopardy! queries, but this was sufficient to defeat the best humans.

It is my view that self-organizing methods such as those I articulate in the pattern-recognition theory of mind, or PRTM, are needed to understand the elaborate and often ambiguous hierarchies we encounter in real-world phenomena, including human language. Ideally, a robustly intelligent system would combine hierarchical intelligence based on the PRTM (which I contend is how the human brain works) with precise codification of scientific knowledge and data. That essentially describes a human with a computer.

We will enhance both poles of intelligence in the years ahead. With regard to our biological intelligence, although our neocortex has significant plasticity, its basic architecture is limited by its physical constraints. Putting additional neocortex into our foreheads was an important evolutionary innovation, but we cannot now easily expand the size of our frontal lobes by a factor of a thousand, or even by 10%. That is, we cannot do so biologically, but that is exactly what we will do technologically.

Our digital brain will also accommodate substantial redundancy of each pattern, especially ones that occur frequently. This allows for robust recognition of common patterns and is also one of the key methods to achieving invariant recognition of different forms of a pattern. We will, however, need rules for how much redundancy to permit, as we don't want to use up excessive amounts of memory on very common low-level patterns.

Educating Our Nonbiological Brain

A very important consideration is the education of a brain, whether a biological or a software one. A hierarchical pattern-recognition system (digital or biological) will only learn about two--preferably one--hierarchical levels at a time. To bootstrap the system, I would start with previously trained hierarchical networks that have already learned their lessons in recognizing human speech, printed characters, and natural-language structures.

Such a system would be capable of reading natural-language documents but would only be able to master approximately one conceptual level at a time. Previously learned levels would provide a relatively stable basis to learn the next level. The system can read the same documents over and over, gaining new conceptual levels with each subsequent reading, similar to the way people reread and achieve a deeper understanding of texts. Billions of pages of material are available on the Web. Wikipedia itself has about 4 million articles in the English version.
I would also provide a critical-thinking module, which would perform a continual background scan of all of the existing patterns, reviewing their compatibility with the other patterns (ideas) in this software neocortex. We have no such facility in our biological brains, which is why people can hold completely inconsistent thoughts with equanimity. Upon identifying an inconsistent idea, the digital module would begin a search for a resolution, including its own cortical structures as well as all of the vast literature available to it. A resolution might mean determining that one of the inconsistent ideas is simply incorrect (if contraindicated by a preponderance of conflicting data). More constructively, it would find an idea at a higher conceptual level that resolves the apparent contradiction by providing a perspective that explains each idea. The system would add this resolution as a new pattern and link to the ideas that initially triggered the search for the resolution. This critical thinking module would run as a continual background task. It would be very beneficial if human brains did the same thing.

I would also provide a module that identifies open questions in every discipline. As another continual background task, it would search for solutions to them in other disparate areas of knowledge. The knowledge in the neocortex consists of deeply nested patterns of patterns and is therefore entirely metaphorical. We can use one pattern to provide a solution or insight in an apparently disconnected field.

As an example, molecules in a gas move randomly with no apparent sense of direction. Despite this, virtually every molecule in a gas in a beaker, given sufficient time, will leave the beaker. This provides a perspective on an important question concerning the evolution of intelligence. Like molecules in a gas, evolutionary changes also move every which way with no apparent direction. Yet, we nonetheless see a movement toward greater complexity and greater intelligence, indeed to evolution's supreme achievement of evolving a neocortex capable of hierarchical thinking. So we are able to gain an insight into how an apparently purposeless and directionless process can achieve an apparently purposeful result in one field (biological evolution) by looking at another field (thermodynamics).

We should provide a means of stepping through multiple lists simultaneously to provide the equivalent of structured thought. A list might be the statement of the constraints that a solution to a problem must satisfy. Each step can generate a recursive search through the existing hierarchy of ideas or a search through available literature. The human brain appears to be only able to handle four simultaneous lists at a time (without the aid of tools such as computers), but there is no reason for an artificial neocortex to have such a limitation.

We will also want to enhance our artificial brains with the kind of intelligence that computers have always excelled in, which is the ability to master vast databases accurately and implement known algorithms quickly and efficiently. Wolfram|Alpha uniquely combines a great many known scientific methods and applies them to carefully collected data. This type of system is also going to continue to improve, given Stephen Wolfram's observation of an exponential decline in error rates.

Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate them. Watson's goal was to respond to Jeopardy! queries. Another simply stated goal could be to pass the Turing test. To do so, a digital brain would need a human narrative of its own fictional story so that it can pretend to be a biological human. It would also have to dumb itself down considerably, for any system that displayed the knowledge of Watson, for instance, would be quickly unmasked as nonbiological.



More interestingly, we could give our new brain a more ambitious goal, such as contributing to a better world. A goal along these lines, of course, raises a lot of questions: Better for whom? Better in what way? For biological humans? For all conscious beings? If that is the case, who or what is conscious?

As nonbiological brains become as capable as biological ones of effecting changes in the world--indeed, ultimately far more capable than unenhanced biological ones--we will need to consider their moral education. A good place to start would be with one old idea from our religious traditions: the golden rule.
