Running head: WILL A MACHINE EVER KNOW?
Will a Machine Ever Know?
Eleanor Ritzman
Virginia Commonwealth University
This essay was prepared for Focused Inquiry 112, Section 014, taught by Professor Corner.
In today’s society, technology is growing so quickly that no one knows how advanced we will be within the next ten years. Our culture has produced fictional films in which we are surrounded by a different kind of intelligence through technology. Our machines become more like us every day, and it is startling to wonder whether these machines will ever be as advanced as the human brain itself. This brings us to the underlying question: will machines ever know?
To investigate, we must first agree on what the term “know” means. What does it mean to know? How could a machine “know” the same way a human does? In this case, the crucial detail is empathy, defined as “the feeling that you understand and share another person's experiences and emotions: the ability to share someone else's feelings” (Merriam-Webster, 2015). With this term in hand, we can look more closely at what it means to know. According to the Theory of Knowledge, knowledge is justified, true belief. By this definition, a machine must be capable of holding a belief about something that is true and of having justification for it. Machines today already display part of this characteristic: a machine can be programmed to carry knowledge about a certain aspect of life, and that knowledge can be repeated back to the knowledge-seeker, but only because the machine was programmed to “know” that specific belief. Unfortunately, this idea is too simple for the question at hand. To understand whether a machine will ever truly know, we have to look at the question in a more human light. One movie that explores this topic is Bicentennial Man (1999). In the film, a household android named Andrew is delivered to a suburban family in 2005 and made to cater to all of their needs. Over time, however, Andrew shows unusual characteristics for a household appliance: he appears to feel emotion and to exercise free will. Although he is treated as part of the family, Andrew wishes to be free, and once granted his freedom he sets off to learn about his own development and to find other androids like him. Years go by without luck, until he comes across a female android with a sharp personality almost as unique as his own.
He soon discovers that she belongs to the son of the android’s original designer, and that her behavior comes only from a personality chip; none of it arises from her own true self, as Andrew’s does. The designer’s son, Burns, agrees to make Andrew the first robot with human-like prosthetic organs, a revolutionary step for robotics. Andrew is also given skin and hair to make him appear human. He eventually reconnects with his first owner’s daughter and meets her family, falling in love with her granddaughter, Portia. However, the two cannot legally marry because Andrew is not entirely human. Andrew therefore allows himself to become a prosthetic human, which lets him age. When he is old, he petitions the World Congress for recognition as a human being, a request it takes under consideration. As Andrew lies on his deathbed with Portia, the World Congress announces that he is recognized as human and that his marriage to Portia is valid. Andrew dies while listening to the announcement, and Portia chooses to follow him soon after (Columbus, 1999). The film asks: if machines ever become this advanced, will they act the way Andrew does? Another movie worth analyzing is I, Robot (2004). In the year 2035, robots have become a part of society, serving as servants to mankind. They are programmed to follow the Three Laws of Robotics: never harm a human or allow a human to be harmed; obey humans unless doing so conflicts with the first law; and protect their own existence unless doing so conflicts with the first or second laws. A police detective, Del Spooner, is investigating the unusual suicide of Dr. Alfred Lanning, co-founder of U.S. Robotics. Because Lanning fell fifty stories from his office while alone, his death was ruled a suicide. Spooner, however, suspects that something other than suicide killed Dr. Lanning. The prime suspect is a robot who calls himself Sonny.
Sonny refuses to cooperate with the investigation and claims to have secrets and dreams. This confuses Spooner, who cannot understand how a robot could act so human-like, unlike the other programmed machines. It is another example of robots and androids developing at a faster, more unpredictable rate than we expect. Sonny is the key to discovering what can happen when robots disobey the orders programmed into them. By the end of the film, Sonny confesses to killing Dr. Lanning, but only because Lanning made him swear to obey and then ordered Sonny to kill him. In other instances, Sonny disobeys orders from other characters, deceiving them through his own independent actions. This shows that Sonny differs from the other robots built to serve humanity, because he is able to disobey the Three Laws of Robotics. It becomes clear that Dr. Lanning arranged his own death in order to steer the investigation toward the robots’ growing corruption of society.
We see hints of this in our world today in the vast growth of technology in ordinary life. From computers to consoles to cellular devices to watches, numerous companies and corporations are developing ever more advanced software for our lifestyles. With this pace of development, who is to say that the machines we create will not eventually become so advanced that we must reconsider what they are? Will they eventually develop to the point of knowing as much as the human brain? We already have computers that are unbeatable at chess and software that can solve an enormous range of mathematical problems. Why should it not be possible to see a machine develop emotion that is nearly, if not entirely, identical to a human’s? The famous physicist Stephen Hawking has warned about how dangerous our society is becoming as technology advances. In his first Ask Me Anything session on Reddit, Hawking discussed how machines could become so competent that they might destroy mankind. He believes any artificial intelligence can carry out its goals well, but if those goals are not aligned with ours, we are in trouble. Hawking warns, “You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants” (Independent, 2015). Hawking also noted that future engineers could become far more capable than the first creators of artificial intelligence, placing almost no limit on the things we build. This could lead to an “intelligence explosion,” in which artificial intelligences engineer themselves to be far smarter rather than waiting for humans to improve them. The machines’ intelligence could then exceed our own, with chaotic results.
No one knows how soon artificial intelligence might begin to act on its own. When it does, Hawking states, “it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right.” As such, we should “shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence” (Independent, 2015). Hawking stresses how important it is to begin researching this problem now, rather than waiting until the strongest form of artificial intelligence is switched on. The process could unfold over time: machines will begin to take our jobs, as they already are, before they ever manage to gain control or threaten human life. Bill Gates shares Hawking’s concern that artificial intelligence could endanger humanity. In January 2015, Gates held a Reddit Ask Me Anything session in which he discussed the concerns expressed by other famous scientists. Thanks to movies such as The Terminator, many audiences take the idea of artificial intelligence ruling mankind quite seriously. Gates says, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well… A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned” (Forbes, 2015). Before this, Musk had given $10 million to fund an organization that will attempt to keep artificial intelligence friendly. Both Gates and Musk want to ensure that artificially intelligent machines remain friendly and trustworthy, so that we need not turn our backs on smart networks and devices; problems of this kind could arise in Gates’ Microsoft as easily as in Musk’s SpaceX and Tesla.
Gates and Musk may well be correct about the potential threat that artificial intelligence poses to mankind. At the rate we are going, machines will eventually take on the characteristics of a human being. Once that milestone is reached, development can only accelerate: once a machine is capable of understanding human emotion, the possibilities for empathetic development in all robots are endless.
There are pros and cons to this argument, and to understand whether it is possible for machines to ever know, we must explore the possibilities surrounding the case. For instance, why was it so hard for the World Congress to grant Andrew his human rights? One might suggest that the boundary confronting Andrew is that he has no soul: “the spiritual part of a person that is believed to give life to the body and in many religions is believed to live forever” (Merriam-Webster, 2015). As humans, we are said to share one common requirement, a soul; on this view, there is no human without one. But with a soul comes empathy, something previously discussed in this essay. Does every individual with a soul have empathy? Are psychopaths able to understand and share someone else’s feelings? Absolutely not. Psychopaths are known for their inability to show sympathy or compassion toward other individuals, sometimes to the point of killing their own kind. Ethically, their acts are judged immoral, and psychopathic killers are seen as lacking whatever it is that lets us understand other human beings. Which, then, is the more valid requirement for making a machine human: a soul or empathy? The greatest marker of mankind, of how we as humans operate, is empathy. Those who lack it are shunned, convicted of unjustified acts, and seen as altogether inhumane. How could we say no to a machine that is capable of showing empathy? Another idea to consider is the Turing Test. Devised by Alan Turing, the test asks a “judge” to decide which of two individuals communicating with him from two separate rooms is the computer and which is the human.
If the judge cannot identify the machine more than about 50% of the time, that is, no better than chance, the machine is said to have passed, showing how closely it can match human conversation (Artificial Intelligence, 1999). How could we, as humans, be fooled by a machine? The answer lies in the growth and advancement of technology.
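The pass criterion described above, that the judge does no better than guessing, can be illustrated with a small simulation. The sketch below is purely illustrative; the function names and the simple guessing model are assumptions for this example, not part of Turing's original paper.

```python
import random

def turing_test_trials(judge_accuracy, n_trials=1000, seed=0):
    """Simulate n_trials rounds of the imitation game.

    judge_accuracy: probability that the judge correctly identifies
    which conversant is the machine on any single round.
    Returns the fraction of rounds identified correctly.
    """
    rng = random.Random(seed)
    correct = sum(1 for _ in range(n_trials) if rng.random() < judge_accuracy)
    return correct / n_trials

def passes_turing_test(identification_rate, chance=0.5):
    """A machine 'passes' if the judge does no better than chance."""
    return identification_rate <= chance

# A perfectly transparent machine is identified every time and fails;
# a judge reduced to pure guessing hovers at the chance level.
print(passes_turing_test(turing_test_trials(judge_accuracy=1.0)))
```

The point of the sketch is that the test measures the judge's confusion, not anything internal to the machine, which is exactly why philosophers still ask whether passing it amounts to knowing.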
Will machines ever truly know? In the future, yes. The programs we develop for artificial intelligence today only hint at what our machines will become. Our creations will eventually take on their makers’ qualities and imitate our distinct mannerisms. On this assumption, machines will before long operate almost identically to man, showing that machines will in fact eventually know. One question I leave for further investigation is this: how long will it take for machines to fully know?
References
Artificial Intelligence | The Turing Test. (1999). Retrieved November 14, 2015.
Bill Gates says you should worry about artificial intelligence. (n.d.). Forbes. Retrieved November 24, 2015.
Columbus, C. (Director). (1999). Bicentennial Man [Motion picture]. United States/Canada: Buena Vista Pictures.
Empathy. (2015). In Merriam-Webster's dictionary. Retrieved November 14, 2015.
Freeman, M. (Director). (2014). Through the Wormhole, Season 4, Episode 7 [Television series].
Griffin, A. (n.d.). Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants. The Independent. Retrieved November 26, 2015.
Keysers, C. (2013, July 24). Inside the mind of a psychopath – empathic, but not always. Retrieved November 14, 2015.
Lewis, T. (2013, September 24). Blame the brain: Why psychopaths lack empathy. Retrieved November 14, 2015.
Megill, J. (n.d.). Emotion, cognition and artificial intelligence, 24(2), 189-199.
Proyas, A. (Director). (2004). I, Robot [Motion picture]. Twentieth Century Fox Film Corporation.
Super-intelligent computers could enslave humanity says Oxford University. (Or they'll just kill us.). (2014, December 3). Retrieved November 27, 2015.
Theory of Knowledge guide. (n.d.). Retrieved November 14, 2015.