

Linsi Crabbs

Artificial Intelligence



Outline


  1. Intro

    1. What is artificial intelligence

  2. History of Artificial Intelligence

    1. Computers and Robots

      1. Greeks

      2. 13th – 19th century

      3. 1930s – Present day

  3. Future

  4. Pros and Cons

  5. Robots in the work place

  6. My opinion

  7. Conclusion/Summary

In 2001, a futuristic movie called Artificial Intelligence was released, in which the human race is reaching a high point of technological advancement. In the movie, humans created a robot named David, who actually feels emotions and tries to become a real boy. As scary as this thought is, artificial intelligence is real, and humans are reaching a technological point in life where robots will be a part of everyday life and experiences.

Artificial intelligence is the intelligence of machinery, of robots and computers created to take the place of, or think like, a human. Artificial intelligence offers a glimpse into the future and into what we are capable of creating. Humans use technology to do just about everything: strategic work, precise measurements, graphic design work, and so forth.

“It is not my aim to surprise or shock you--but the simplest way I can summarize is to say that there are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until--in a visible future--the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” --Herbert Simon

The history of artificial intelligence can be traced back to Greek mythology. “Intelligent artifacts appear in literature since then, with mechanical devices actually (and sometimes fraudulently) demonstrating behavior with some degree of intelligence. After modern computers became available following World War II, it has become possible to create programs that perform difficult intellectual tasks. Even more importantly, general purpose methods and tools have been created that allow similar tasks to be performed” [AIT10]. It is difficult to grasp how far advanced the Greeks and Romans were; with the many inventions that occurred during that time, they were probably far more advanced than we assume.

Next on the history list is Aristotle, who invented syllogistic logic in about the 4th century B.C.; it was the “first formal deductive reasoning system” [AIT10]. Following Aristotle, we move to the 13th century. “Talking heads were said to have been created, Roger Bacon and Albert the Great reputedly among the owners. Ramon Lull, Spanish theologian, invented machines for discovering nonmathematical truths through combinatorics” [AIT10]. These machines were far different from the machines that we use today. Inventions were being created with the tools available and intelligent mindsets.

Things become a little more difficult to understand and a little more complicated by the 15th century. “Printing using moveable type was invented and put to work. The Gutenberg Bible was first printed on this invention” [AIT10]. Clocks were also introduced in this century and are still used to this day! This century also produced the first counting machines, devices used like calculators.

When we reach the 16th century, clockmakers learned new skills, such as “creating mechanical animals and other such novelties” [AIT10]. One example is “DaVinci’s Walking Lion. This was designed in 1515! Also, Rabbi Loew of Prague is said to have invented the “Golem”, a clay man brought to life, around the 1580s” [AIT10].

Going into the 17th century things began to become more advanced and more elaborate. Early in this century, “Descartes proposed that bodies of animals are nothing more than complex machines” [AIT10], and many people laughed at this idea. “Many other 17th century thinkers offered variations and elaborations of Cartesian mechanism. Pascal created the first mechanical digital calculating machine in 1642. Thomas Hobbes published “The Leviathan” in 1651, containing a mechanistic and combinatorial theory of thinking. Leibniz improved Pascal's machine to do multiplication & division with a machine called the Step Reckoner and envisioned a universal calculus of reasoning by which arguments could be decided mechanically in 1671” [AIT10]!

Following the 17th century’s concepts and inventions came the 18th century. “This century saw a profusion of mechanical toys, including the celebrated mechanical duck of Vaucanson and von Kempelen's phony mechanical chess player, The Turk, which was designed in 1769” [AIT10]. This century was mostly devoted to the design of toys and the amusement of people.

The 19th century was full of inventions and wonderful inspirations that would help set up the future of the computers and electronic thinking devices we use today! “Luddites invaded and destroyed most of the machinery in England. Mary Shelley published the story of Frankenstein's monster in 1818! And George Boole developed a binary algebra representing some "laws of thought," published in “The Laws of Thought”” [AIT10]. The whole idea of artificial intelligence was starting to take shape during this time with the technology that was beginning to appear. Leading the ground-breaking development toward artificial intelligence were “Charles Babbage and Ada Byron who designed a programmable mechanical calculating machine. A working model of this invention was built in 2002” [AIT10], just to show how ingenious these inventors were in the 19th century.

Now we have the very interesting and entertaining 20th century. This is where most of the inventions we see today were drawn up, designed, and even put to practical use! “Bertrand Russell and Alfred North Whitehead published “Principia Mathematica”, which revolutionized formal logic. Russell, Ludwig Wittgenstein, and Rudolf Carnap led philosophy into logical analysis of knowledge” [AIT10].

Also leading this revolutionary development of logical thinking in the 20th century were two men whose names people know today as Hewlett-Packard. Believe it or not, Hewlett-Packard was founded in 1939. “David Packard and Bill Hewlett found Hewlett-Packard in a Palo Alto, California garage. Their first product was the HP 200A Audio Oscillator, which rapidly becomes a popular piece of test equipment for engineers. Walt Disney Pictures ordered eight of the 200B model to use as sound effects generators for the 1940 movie “Fantasia.”” [Com08]. The HP 200A Audio Oscillator was ground-breaking; Hewlett and Packard had no idea that their company would one day become a major producer of what is known today as the computer.

Upon the arrival of the 1940s, the computer industry had been improving its technological abilities, and the Complex Number Calculator (CNC) was created. “In 1939, Bell Telephone Laboratories completed this calculator, designed by researcher George Stibitz. In 1940, Stibitz demonstrated the CNC at an American Mathematical Society conference held at Dartmouth College. Stibitz stunned the group by performing calculations remotely on the CNC (located in New York City) using a Teletype connected via special telephone lines. This is considered to be the first demonstration of remote access computing” [Com08]. Computers and technology greatly advanced during the 1940s because of World War II; some of these machines were built to intercept and decode Nazi communications.

Another great technological invention occurred in 1943. “The U.S. Navy approached the Massachusetts Institute of Technology (MIT) about building a flight simulator to train bomber crews” [Com08]. World War II had sparked interest in technological advancement, and our military was at the forefront of this movement. “The Massachusetts Institute of Technology (MIT) first built a large analog computer, but found it inaccurate and inflexible. After designers saw a demonstration of the ENIAC computer, they decided on building a digital computer. By the time the [Project] Whirlwind was completed in 1951, the Navy had lost interest in the project, though the U.S. Air Force would eventually support the project which would influence the design of the SAGE program” [Com08].

Most people forget that the 1940s, a decade darkened by war, also included excellent technological advancements that would inspire people to create bigger and better uses for intelligent machines. “In February 1946, the public got its first glimpse of the ENIAC, a machine built by John Mauchly and J. Presper Eckert that improved by 1,000 times on the speed of its contemporaries. The floor space that the computer took up was 1,000 square feet” [Com08]. That’s enough floor space to fill up an entire room!

The year 1948 offered a preview of the ideas that would come to be called artificial intelligence. People began to realize what a computer or an intellectual machine was capable of accomplishing. “Norbert Wiener published "Cybernetics," a major influence on later research into artificial intelligence. He drew on his World War II experiments with anti-aircraft systems that anticipated the course of enemy planes by interpreting radar images. Wiener coined the term "cybernetics" from the Greek word for "steersman." In addition to "cybernetics," historians note Wiener for his analysis of brain waves and for his exploration of the similarities between the human brain and the modern computing machine capable of memory association, choice, and decision making” [Com08]. This was an early step toward the idea of artificial intelligence that would shape the next several decades.

When the 1950s approached, the world was just starting to recover from a second world war and technology was still improving. “Engineering Research Associates of Minneapolis built the ERA 1101, the first commercially produced computer; the company’s first customer was the U.S. Navy. It held 1 million bits on its magnetic drum, the earliest magnetic storage devices. Drums registered information as magnetic pulses in tracks around a metal cylinder. Read/write heads both recorded and recovered the data. Drums eventually stored as many as 4,000 words and retrieved any one of them in as little as five-thousandths of a second!” [Com08]. In the 50s this was considered modern, state-of-the-art equipment; today we have machines that run many orders of magnitude faster! Imagine using this computer for the first time; it must have been a true experience for the U.S. Navy.

Another turning point in the 1950s was the construction of computer standards, making computers faster and more reliable. This was possible because “The National Bureau of Standards constructed the SEAC (Standards Eastern Automatic Computer) in Washington as a laboratory for testing components and systems for setting computer standards. The SEAC was the first computer to use all-diode logic, a technology more reliable than vacuum tubes, and the first stored-program computer completed in the United States” [Com08]. Today, our computers are smaller and more high-tech; instead of computers filled with vacuum tubes, our cell phones now act like computers. Also, “magnetic tape in the external storage units stored programming information, coded subroutines, numerical data, and output” [Com08]. By 1959, “MIT’s Servomechanisms Laboratory demonstrated computer-assisted manufacturing. The school’s Automatically Programmed Tools project created a language, APT, used to instruct milling machine operations” [Com08].

By the beginning of the 1960s we turned our attention from computers to robots. The “UNIMATE was the first industrial robot that began work at General Motors. By obeying step-by-step commands stored on a magnetic drum, the 4,000-pound arm sequenced and stacked hot pieces of die-cast metal” [Com08]. Little did General Motors employees know that these robots would soon be taking over some of their jobs. These high-tech robots were working like humans! By 1963, “researchers designed the Rancho Arm at Rancho Los Amigos Hospital in Downey, California as a tool for the handicapped. The Rancho Arm’s six joints gave it the flexibility of a human arm. Acquired by Stanford University in 1963, it holds a place among the first artificial robotic arms to be controlled by a computer” [Com08]. It was nice to see that computers and technology were being put to good use. Designed to benefit people with special needs, this invention proved to be incredible.

By 1968, “Marvin Minsky developed the Tentacle Arm, which moved like an octopus. It had twelve joints designed to reach around obstacles. A PDP-6 computer controlled the arm, powered by hydraulic fluids. Mounted on a wall, it could lift the weight of a person” [Com08]. This was astonishing! In just five years, robotic arms went from having six joints to having twelve. This improved the movement of the arm, allowing it to reach around obstacles and lift a person!

The following year, in 1969, “Victor Scheinman’s Stanford Arm made a breakthrough as the first successful electrically powered, computer-controlled robot arm! By 1974, the Stanford Arm could assemble a Ford Model T water pump, guiding itself with optical and contact sensors. The Stanford Arm led directly to commercial production. Scheinman went on to design the PUMA series of industrial robots for Unimation, robots used for automobile assembly and other industrial tasks” [Com08].

By 1970, artificial intelligence had become better known! This year was a phenomenal breakthrough for artificial intelligence. “SRI International’s Shakey became the first mobile robot controlled by artificial intelligence”. The robot was “equipped with sensing devices and driven by a problem-solving program called STRIPS, and the robot found its way around the halls of SRI by applying information about its environment to a route. Shakey used a TV camera, laser range finder, and bump sensors to collect data, which it then transmitted to a DEC PDP-10 and PDP-15. The computer radioed back commands to Shakey — who then moved at a speed of 2 meters per hour” [Com08]. A picture of this robot looks like shelves on top of a cart with wheels. The very top of it looks like a bunch of wires and a lens for the video data.
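
To give a rough sense of how a STRIPS-style problem-solving program works, here is a minimal sketch (my own illustration, not taken from [Com08]; the room and box facts and operator names are hypothetical): each operator lists the facts it requires, adds, and deletes, and the planner searches for a sequence of operators that reaches the goal.

```python
# A minimal, hypothetical sketch of STRIPS-style planning: operators have
# preconditions, an add list, and a delete list; the planner searches for a
# sequence of operators that turns the start state into the goal state.
from collections import deque

# Each operator: (name, preconditions, add effects, delete effects)
OPERATORS = [
    ("go(hall, room_a)", {"at(hall)"}, {"at(room_a)"}, {"at(hall)"}),
    ("go(room_a, room_b)", {"at(room_a)"}, {"at(room_b)"}, {"at(room_a)"}),
    ("push(box, room_b, hall)", {"at(room_b)", "box(room_b)"},
     {"box(hall)", "at(hall)"}, {"box(room_b)", "at(room_b)"}),
]

def plan(start, goal):
    """Breadth-first search over world states for a plan reaching the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:               # every goal fact holds in this state
            return steps
        for name, pre, add, delete in OPERATORS:
            if pre <= state:            # operator is applicable here
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at(hall)", "box(room_b)"}, {"box(hall)"}))
# -> ['go(hall, room_a)', 'go(room_a, room_b)', 'push(box, room_b, hall)']
```

Shakey’s real planner was far more elaborate, but even this toy version shows how a route emerges from applying information about the environment one step at a time.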

The first personal computer appeared as early as 1971: “The Kenbak-1, advertised for $750”. People were unsure of this new device and thought that $750 was rather expensive for something considered to be so new. “Designed by John V. Blankenbaker using standard medium-scale and small-scale integrated circuits, the Kenbak-1 relied on switches for input and lights for output from its 256-byte memory. Unfortunately, in 1973, after selling only 40 machines, Kenbak Corp. closed its doors” [Com08].

Remarkably, in 1974, “David Silver at MIT designed the Silver Arm, a robotic arm to do small-parts assembly using feedback from delicate touch and pressure sensors. The arm’s fine movements corresponded to those of human fingers” [Com08]. This was a technological breakthrough for artificial intelligence during the 1970s. After the failure of the Kenbak-1, the new technology of this robot gave people hope to keep producing more technologically advanced computers and robots. A robot that could actually feel sensations, like a human hand, through pressure sensors was an extraordinary use of technology!

“The January 1975 edition of Popular Electronics featured the Altair 8800 computer kit, based on Intel’s 8080 microprocessor, on its cover. Within weeks of the computer’s debut, customers inundated the manufacturing company, MITS, with orders. Bill Gates and Paul Allen licensed BASIC as the software language for the Altair. Ed Roberts invented the 8800 — which sold for $297 or $395 with a case — and coined the term "personal computer." The machine came with 256 bytes of memory (expandable to 64K) and an open 100-line bus structure that evolved into the S-100 standard. In 1977, MITS sold out to Pertec, which continued producing Altairs through 1978” [Com08].

Leaving behind the 70s, we enter the new decade of the 1980s. America’s economy started to see change and prosperity that once again sparked technological change around the globe. In 1981, “Adam Osborne completed the first portable computer, the Osborne I, which weighed 24 pounds and cost $1,795. The price made the machine especially attractive, as it included software worth about $1,500. The machine featured a 5-inch display, 64 kilobytes of memory, a modem, and two 5 1/4-inch floppy disk drives” [Com08]. Imagine carrying around a box that weighed twenty-four pounds; how advanced we thought that was back in the 80s compared to what we have now. Incredibly, about $2,000, depending on the kind of computer, is still just about the average price of a good computer today.

In 1984, “Apple Computer launched the Macintosh, the first successful mouse-driven computer with a graphic user interface. Based on the Motorola 68000 microprocessor, the Macintosh was set at an affordable price of $2,500” [Com08]. People could now successfully start using computers for personal reasons, with the new graphical user interface.

In 1986, “Daniel Hillis of Thinking Machines Corp. moved artificial intelligence a step forward when he developed the controversial concept of massive parallelism in the Connection Machine” [Com08]. This was another progressive step for artificial intelligence. People were taking technology to another level. With the use of modern robots and computers, the goal was to make everything better and faster than before. “The machine used up to 65,536 processors and could complete several billion operations per second. Each processor had its own small memory linked with others through a flexible network that users could alter by reprogramming rather than rewiring” [Com08]. This was considered a fast machine compared to previous computers. Being able to complete so many operations per second allowed the user to apply a computer to far more problems. “The machine’s system of connections and switches also let processors broadcast information and requests for help to other processors in a simulation of brain-like associative recall. Using this system, the machine could work faster than any other at the time on a problem that could be parceled out among the many processors” [Com08].
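
As a loose illustration of what “parceled out among the many processors” means, here is a minimal sketch (my own example, not from [Com08]; the job, chunk sizes, and worker count are made up): a big job is split into pieces, each worker computes its piece at the same time, and the partial results are combined at the end.

```python
# A minimal, hypothetical sketch of the idea behind massive parallelism:
# a large sum is parceled out among a few worker processes, each computing
# its own share, and the partial results are combined at the end.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one worker's share of the job."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(processes=4) as pool:           # four workers instead of 65,536
        total = sum(pool.map(partial_sum, chunks))
    print(total)                              # same answer as sum(range(1_000_000))
```

A problem that splits cleanly like this is exactly the kind that a massively parallel machine could finish faster than any machine working on it one step at a time.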

Enter the decade of the 90s! The computer was rapidly changing along with technology and other modern-day devices that would change the way people think, act, and work throughout the twentieth century and into the twenty-first. “At the very beginning of the decade, the Microsoft Company released its Microsoft Windows 3 software program. The Windows 3 program afforded consumers a more attractive appearance than previous operating systems and it also had 640 Kb of RAM. Its faster performance allowed users to have multiple programs running at the same time so users could multitask and save time” [The09]. For the first time, people were able to write reports on a computer without having to use a typewriter. Individuals were able to do multiple tasks at once, which made it easier for people who used their computer for work purposes. “Windows 3 also introduced the Microsoft Word program for word processing and the Microsoft Excel program that is used for creating and editing spreadsheets” [The09]. Again, by using Microsoft Excel, people were able to take their office work home with them. Students were able to complete spreadsheets of data in a neatly designed template. “The popularity of Windows 3 caused millions of copies to be sold to personal computer owners and businesses within the first year alone. This began the domination that the Microsoft Company would have over the computer software industry throughout the upcoming decade and beyond” [The09].

“Possibly the most technological and widely-used advancement of computers in the 1990s was the internet. The World Wide Web (known as the “www” that we type prior to a web address) made the internet popular and simple to use for any personal computer owner. When it first came out, many companies charged for online access by the minute. Its cool graphics and “point-and-click” capabilities created millions of “web surfers” all over the world. With the internet, sales of modems skyrocketed because this was the main piece of equipment needed to connect to the web. Not long after its creation, everybody who owned a computer wanted to own a modem so they could get connected to the world” [The09].

With the 90s out of the way and the twenty-first century prevailing, artificial intelligence has a whole new meaning. Today, we look at technology and still expect more from it. “Development leaders from Microsoft, Netscape, General Electric, and Disney discussed numerous examples of artificial intelligence enabled products and product enhancements. Today's artificial intelligence is about new ways of connecting people to computers, people to knowledge, people to the physical world, and people to people” [Ret]. We have completely changed the way that we interact with other individuals. With the use of cell phones, Facebook, Twitter, and other sources, people are in constant contact with one another. Individuals of any age, supervised or not, have the whole world at their fingertips. We use modern technology for simple pleasures. Our cell phones now have applications for anything imaginable; we use computers and include technology in our everyday lives.

“Today's artificial intelligence is enabled in part by technical advances and in part by hardware and infrastructure advances. The world-wide-web is here and truly enormous high-resolution displays are coming. Today's artificial intelligence invites investment in systems that:


    • Save mountains of money, through applications such as resource allocation, fraud detection, database mining, and training.

    • Increase competitiveness, through applications that provide, for example, on-screen help and low-cost rationale capture.

    • Create new capabilities and new revenue streams, in areas such as medicine and information access.

Artificial intelligence is no longer a “one-horse field”. Newer technical ideas, with labels such as agents, Bayes nets, neural nets, and genetic algorithms, combine with older ideas, such as rule chaining, to form a powerful armamentarium. All are important; none, by itself, is the answer” [Ret].
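
To make one of these older ideas concrete, here is a minimal sketch of rule chaining (my own illustration, not from [Ret]; the facts and rules are hypothetical): a forward-chaining engine keeps firing any rule whose conditions are already known until no new conclusions appear.

```python
# A minimal, hypothetical sketch of forward rule chaining: each rule maps a
# set of condition facts to a conclusion, and the engine keeps firing rules
# whose conditions are already known until no new facts can be derived.
RULES = [
    ({"robot_is_servo_controlled", "sensors_report_position"}, "feedback_available"),
    ({"feedback_available"}, "path_can_be_corrected"),
    ({"path_can_be_corrected"}, "task_completes_accurately"),
]

def forward_chain(facts):
    """Derive every fact reachable from the starting facts via the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"robot_is_servo_controlled", "sensors_report_position"}))
```

Newer techniques such as Bayes nets or neural nets replace the hand-written rules with probabilities or learned weights, but the goal of deriving useful conclusions from available information is the same.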

As for the future of artificial intelligence, “it began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society” [Wal96]. People are afraid that robots are going to take over the world, but based on research that seems highly doubtful and unlikely. Robots were created to help us out, to do things that humans could not possibly do on their own. People have a perception that with the advancement of computers and technology, robots will take over everything. People need to look on the bright side and realize that we use robots for any number of good reasons.

We use modern robots in times of war, for example: would we rather send in a human to disassemble a homemade explosive, or send in a robot with a human controlling it? Of course, everybody should choose the robot! Modern advancements allow us to control these types of situations from a safe distance. The robot can successfully act in place of a human, without risking a human casualty.

People are worried that robots are going to take over the work industry and cause employees to be out of a job because these robots can work for less pay. This is somewhat true. Take the auto industry, for example: if we have x number of people running an assembly line, we could add robots and cut that number in half. However, there still need to be individuals who know how to operate the robotic arm that is assembling the automobile.

On the other hand, we need to remember that “robots are machines. Robots in the work place help load and unload stock, assemble parts, transfer objects, or perform other tasks. Robots are used for replacing humans who were performing unsafe, hazardous, highly repetitive, and unpleasant tasks. They are utilized to accomplish many different types of application functions such as material handling, assembly, arc welding, resistance welding, machine tool load/unload functions, painting/spraying, etc.” [Uni10]. We could continue to put people’s lives in danger when they go to work, or simply have a robot do the hazardous job for us.

“All industrial robots are either servo or non-servo controlled. Servo robots are controlled through the use of sensors which are employed to continually monitor the robot's axes for positional and velocity feedback information. This feedback information is compared on an on-going basis to pre-taught information which has been programmed and stored in the robot's memory. Non-servo robots do not have the feedback capability of monitoring the robot's axes and velocity and comparing with a pre-taught program. Their axes are controlled through a system of mechanical stops and limit switches to control the robot's movement” [Uni10].
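
To picture the difference, here is a minimal sketch of the servo idea (my own illustration, not from [Uni10]; the joint, gain, and target values are made up): each cycle the controller compares the measured position of one axis against the pre-taught target and commands a correction.

```python
# A minimal, hypothetical sketch of servo-style feedback control for one robot
# joint: each cycle, compare the measured position to the pre-taught target and
# command a correction proportional to the error (a simple P controller).
def servo_step(target_deg, measured_deg, gain=0.5):
    """Return a velocity command that drives the joint toward the target."""
    error = target_deg - measured_deg
    return gain * error

# Simulate a joint that starts at 0 degrees and must reach a taught 90 degrees.
position = 0.0
for cycle in range(20):
    command = servo_step(90.0, position)
    position += command          # pretend the joint moves by the commanded amount
print(round(position, 2))        # converges close to 90.0

# A non-servo robot, by contrast, would simply drive until a mechanical stop or
# limit switch is hit, with no comparison against a stored target.
```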

Back to the auto industry example; there are pros and cons to this type of situation. The pros are that using robots speeds up the time it takes to produce a vehicle, and the assembly of the car is going to be more precise than if a human builds it. With robots the work is a constant routine, and nothing about it will change unless the robot is reprogrammed. The con is that some individuals are going to be without a job. But the likelihood that everybody is going to be out of a job is slim to none; it is the unfortunate few who get laid off.

“Even as artificial intelligence technology becomes integrated into the fabric of everyday life, artificial intelligence researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise” [Wal96]. I truly believe that we are using artificial intelligence to our advantage and that we are not misusing our technological advancement in ways that harm ourselves or our surroundings.

“The pursuit of the ultimate goals of artificial intelligence (the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence) continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life” [Wal96]. These are improvements that essentially benefit humans more than we realize. Our intelligence is used to our advantage and we put it to good use. “But the ultimate promises of artificial intelligence are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort” [Wal96].

The future, in part, is still unknown to everybody. Even with the most high-tech computers or robots available, we will probably never be able to predict the future. Unbelievably, people all the way back to the Greeks and Romans were trying to use machines to their advantage; they just did not have the resources to do so. The courage of scholars and inventors of the early twentieth century paved the way to where we are now. We just need to focus more on the positives and benefits of computers and of the uses of robots, instead of focusing on the negatives. An ending quote by Barry Long: “Nothing is true in self-discovery unless it is true in your own experience. This is the only protection against the robot levels of the mind.”



Citations




AIT10: AITopics/History.

Com08: Computer History Museum, Timeline of Computer History.

The09: The People History: Computers from the 1990's.

Ret: Rethinking Artificial Intelligence.

Wal96: Waltz.

Uni10: United States Department of Labor.

