Why Artificial Intelligence Is Not Official: Making a Case for Consciousness

Respected Readers,
This paper is one of my philosophy assignments, and a project many seemed to take an interest in. I have pasted it here for all to read and enjoy, as I believe it raises several debates and interesting points about where AI systems are today. Discussion on this is welcome and encouraged!

Why Artificial Intelligence Is Not Official: Making a Case for Consciousness

The understanding of human beings has been both a question explored throughout history and a limitation humanity has yet to overcome. For millennia, the concepts of self-awareness, the intricate nature of decision making, and the processes which govern our thoughts have been explored at great length by philosophers, primarily with the intent of separating human beings from animals. To them, as God’s creations, we had to have a drive, a force which made us different from the rest. As science began taking over, this perception changed to encompass a wider array of theories; chief among them was the desire to replicate much of the human self. While most developments here have taken place only in the past century, science fundamentally changed the way in which we attempt to gauge our presence. We are no longer concerned with being different due to God, but rather investigate our ability to survive with greater consciousness than most species we perceive.

As a result of this change, there has been a gradual rise of the concept known as artificial intelligence (AI), whereby our human characteristics are replicated through mechanical means. Even during the 16th century, philosophers posited the idea that animals are just like machines, lacking the fundamental abilities of reason and mind. As technology progressed, science has attempted to decode not only the human genome but also the function of the neurons which create a network of thought. This took the form of a purely materialistic framework, in which the mind and human personality can be turned into material and tangible creations. The term “soul” no longer had a religious undertone, but rather came to be defined through the intuitive processes with which we reason. This change, while perhaps one of the greatest shifts in human self-understanding, could also be considered a downfall of science.

Ever since the 1950s, when the field of AI was born, people have dreamt of talking and reasoning spacecraft, machines which integrate themselves into human society, and a world in which wonders are created by robotic entities. Yet there is a reason why, in 2013, such systems possess only the intelligence of a four-year-old child (Thomson, TheRegister.co.uk). Science has attempted to explain human nature primarily from three key areas: the science of the brain, genetics, and symbolic logic together with language. There is far less understanding of the science of emotion, intuition, and the precise reasons why humans make their decisions. Until science begins to think in a non-linear fashion, these problems may never be solved. Where do we draw the boundary between our desire to re-create ourselves materially and our stubborn refusal to accept the human condition from a more metaphysical standpoint?

Perhaps the first to argue that machines and humans are alike was the English philosopher Thomas Hobbes, in his book Leviathan. “For what is the ‘heart’ but a ‘spring’; and the ‘nerves’ but so many ‘strings’; and the ‘joints’ but so many ‘wheels,’ giving motion to the whole body, such as was intended by the artificer” (Hobbes, 81). Unlike philosophers before him such as Descartes, Hobbes did not believe that the human mind was separate from physical experience. To him, there did not need to exist an immaterial mind to explain the role of emotion and human decision making. Rather, these lie in what the material parts of the body can accomplish when working together. This gave rise to the branch of study known as “the philosophy of mind,” within which philosophers such as Jerry Fodor helped create a better framework for understanding the human brain. This became known as the computational theory of mind, and as computers become more powerful, it is resurfacing as a valid way of understanding our ability to reason (Aydede and Güzeldere, 265). Here, both syntax and semantics are considered. Through syntax, we use language to create the rules by which the mind operates, and these rules are given meaning by the semantics the person possesses. Thus, the two are dependent on each other, since we use language to define concepts, which in turn create reactions within people. This is where a linear view of the mind comes into action, as reasoning tends to proceed from a prior state of mind.

The first true “pioneer” of AI was Alan Turing, who devised one way in which a machine could be considered intelligent regardless of how it “thinks” (Turing, 449). If it is able to produce answers similar to a human’s on its own, in text form, it can be seen as passing the test. These answers do not have to be correct, but rather must resemble the typing and understanding of a real human’s conversational abilities. In this way, intelligence is simplified to interpretation, and a machine only needs to provide relevant and understandable answers to pass.

Ray Kurzweil is well known for his theories on bringing computer intelligence into society, and he looks at this issue on a deeper level in a book titled “How to Create a Mind.” Kurzweil believes that evolution is not always there to improve intelligence, but rather can occur for several reasons (Kurzweil, 78). Despite this, random evolutionary changes are what created the neocortex, the part of the brain responsible for pattern recognition and what we know as ideas. Most of what the brain does can, in fact, be translated into computerized inferences. For example, the visual information received by the eyes consists of only about twelve pictures, which are reconstructed by the mind in conjunction with the neocortex. Humans are the only species in which this translates into what we know as conscious thought, which includes our ability to write down and store the information we have gathered. We have created language for this, a system which allows us to store hundreds of megabytes of collected data as nothing more than words.

Noam Chomsky is a linguist who has attempted to understand how language works, and how we as children seem able to acquire the rules of the system so quickly. Clearly, genetic coding aids us in picking up the ability for language.
This led him to understand intelligence through the paradigm of language as a system in and of itself, much like the visual and auditory systems we have for processing information. There are three levels on which this takes place. The computational level consists of input and output, where the input is the words we know and the output comes from what we understand those words to mean. The second level is the algorithmic level, defined as the way in which this language is processed by the brain, and the structures that allow those words to be recognized. The third and final level could be described as an “implementation level,” the way in which our cells act on what the brain has interpreted. Every instruction, such as “take out the trash,” goes through these three phases of cognition (Katz, TheAtlantic.com).
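To make these three levels concrete, here is a minimal, purely illustrative Python sketch (the function names and the “take out the trash” lexicon are my own invented placeholders, not Chomsky’s): the computational level specifies what mapping is computed, the algorithmic level gives one concrete procedure for computing it, and the implementation level stands in for the machinery that physically carries out the result.

# Toy illustration of the three levels described above. All names and data here
# are hypothetical placeholders, invented only to make the distinction concrete.

# Computational level: WHAT is computed - a mapping from an utterance to an intended action.
LEXICON = {"take out the trash": "dispose_of_trash"}

def understand(utterance: str) -> str:
    """Algorithmic level: HOW the mapping is computed - here a trivial
    normalize-and-look-up procedure standing in for real parsing."""
    normalized = utterance.strip().lower().rstrip(".!?")
    return LEXICON.get(normalized, "unknown_action")

def execute(action: str) -> None:
    """Implementation level: the machinery (here just a print statement)
    that physically carries out what was interpreted."""
    print("carrying out:", action)

execute(understand("Take out the trash!"))  # -> carrying out: dispose_of_trash

The point of the separation is that the same computational-level description could be realized by very different algorithms and very different physical machinery.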

Yet as Chomsky points out, the fundamental problem of artificial intelligence is that we have externalized the data which we are trying to interpret. Although Google might be aware of who Barack Obama is (namely, the President of the United States), its grasp of what the concept of “president” means is driven only by statistical data, rather than by information it generates on its own. This also undermines the Turing test, and one of the first people to speak out against it was John Searle, who created his “Chinese room” thought experiment to examine how intelligent digital systems are constructed. If a computer is able to take Chinese characters as input and process them by giving convincing output, to the point of passing the Turing test, is it doing so on the premise of thinking? Should this program provide answers suitable for regular conversation, would it not be considered conscious by what we call “artificial intelligence” (Searle, 432)? Suppose that Searle himself goes into the room carrying a manual of the program and passes along the same information, following the syntax the coder gave to the computer. Since Searle does not know how to speak Chinese, there would be no judgment he could make on what to say or how to say it. Rather, he would be doing exactly what the computer does: interpreting code and outputting answers based on instructions (a minimal sketch of this rule-book setup appears after this passage). This is a very linear mode of thinking, one which the brain does not biologically follow. Since the conception of working AI systems, people have believed that the brain works much as computers do, firing signals of zeros and ones to create activity.

In the late 1960s and for decades to follow, Hubert Dreyfus heavily criticized the models of artificial intelligence, during a time in which the general public believed that computers would come up with new mathematical theories by the 1970s. In actuality, a computer did not defeat a world chess champion until 1997, far beyond the time anyone expected this to occur. Dreyfus also split human intuition into two models of reasoning, which he called “knowing that” and “knowing how” (Dreyfus, 62). The first of these refers to a human’s ability to use problem solving to come to conclusions, such as deductive reasoning. The latter concerns a less conscious approach to the way we think, such as detecting faces, recognizing when to say specific statements (and how these are formulated), and in general our seemingly random yet systematic reactions to situations. Humans have a remarkable ability to act quickly in situations where we do not consciously process and weigh hundreds of environmental factors. This process also relies heavily on cultural attitudes, something which can differ even among members of the same family. If a computer were created and trained with remarkable AI, the systems and models it would use for reasoning would not differ from copy to copy unless millions of variations were created of that one product. This goes beyond personality, into the realm of understanding how conscious experiences shape subconscious attitudes about life and goals.

On top of this, deciding whether an agent installed on multiple computers counts as one entity or as many raises serious social implications (Deutsch, TheGuardian.com). This includes democratic rights, such as how votes are cast in elections. Deutsch points out a fundamental issue with the models we use to create our scientific theories of programming.
The mind learns more from trial and error than it ever could from simple educational premises. This is also why it takes so much repetition to jam knowledge into the heads of students; in the education system we try to teach by means of programming, not by encouraging individual experiences. This flaw carries over to any computer code, and for truly artificially intelligent systems to exist, we must abandon notions of “downloading” knowledge and thought into people and objects. Does the brain have quantum properties which physics has yet to grasp? This is quite possible, since the more we learn about how the brain works, the more puzzled we become by the concepts we still have not grasped.
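As noted above, here is a minimal, purely illustrative Python sketch of the rule-book setup Searle describes (the phrases in the rule book and the translations in the comments are my own placeholders, not Searle’s): the program matches input shapes against stored rules and copies out the paired output, and nothing in the loop grasps what either side means.

# A toy "Chinese room": match incoming symbols against a rule book and copy out
# a pre-scripted reply. The rule book is an invented placeholder; nothing here
# understands the symbols it shuffles around.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is fine today."
}

def room(symbols: str) -> str:
    """Follow the manual exactly as Searle would in the room:
    look up the input shape and copy out the matching output shape."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I do not understand."

print(room("你好吗？"))  # Convincing output, produced with zero comprehension.

However convincing the replies, the lookup is pure syntax; the semantics live entirely with whoever wrote the rule book.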

Perhaps the only form of AI which could succeed somewhat in society is what we today call wearable computers. Kurzweil’s arguments for machines merging with humans could be considered most valid here, if we accept that the resulting entity is a mix of human and machine. In this sense, the mind and “soul” could remain contained within the human, and machines would enhance experiences of the self which it could not otherwise gain from nature. We must consider the implications of this in the context of both present-day social norms and what the future might hold should such a world succeed. Today, projects such as Google Glass carry much weight regarding the terrifying experiences we could create for ourselves should the trend continue. This includes privacy issues and the way in which augmented realities could distance humans from their true selves. The former we see in the nervousness surrounding Glass and its unobtrusive ability to record content or analyze the environment whenever the user wishes. The latter comes from video games, which many argue are psychologically damaging to those who play them often. These people also tend to distance themselves from their reality.

Should Kurzweil’s theories on the singularity hold true, the arguments made by opposing philosophers still apply. Where does the limit exist with respect to thinking? When are the computers the ones doing the thinking for us, and when are we still in control of our own steering wheels of life? Here we can still apply Chomsky’s theories, because having computers replace natural human elements could create unnatural “hiccups” in how our bodies and cells interpret the information they would be gaining. I believe that his arguments help explain the problems faced by even augmented AI.

The information overload experienced by people today is a quantum leap beyond what we had even twenty years ago. For this reason we have machines such as Watson, IBM’s 2011 take on a computer system which could answer questions from Jeopardy!. Although Watson got most of the questions right, it relied on heavy computational models to figure out proper responses. The questions were fed to it in text form, and it simply searched terabytes of stored data for the correct answers. In the context in which it was working, it could probably have passed the Turing test, and many were amazed at how quickly and naturally it provided responses. With the rise of personal assistants on mobile phones around the time of Watson, it is clear that the direction in which machine intelligence is heading has taken a different turn. We are using computers not to replace our experiences of reality, but rather to enhance our ability to sort through the increasing wealth of information at our disposal. This is not much different from having a search engine, except that the queries are made through voice and the push of a button rather than text input in a specific format. Searle shows that anyone could do this kind of simple programming on their own, and for this reason what we might see as “complex AI” is technically “simple AI” packaged differently. This is not to say that it is not useful, but rather that our notion of AI is defined more through a sense of utility than through complicated algorithms.
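As a rough illustration of how far “utility over complexity” can go, here is a hedged Python sketch of retrieval-style question answering (the tiny corpus and the word-overlap score are invented for illustration only; IBM’s actual pipeline was far more elaborate): it simply ranks stored passages by how many words they share with the question and returns the best match.

# A crude retrieval-style question answerer: rank stored passages by word overlap
# with the question and return the best match. The corpus is an invented
# placeholder; systems like Watson combine many far richer techniques.

CORPUS = [
    "Barack Obama served as the 44th President of the United States.",
    "The neocortex is the part of the brain associated with pattern recognition.",
    "Alan Turing proposed the imitation game in 1950.",
]

def answer(question: str) -> str:
    """Return the stored passage sharing the most words with the question."""
    q_words = set(question.lower().replace("?", "").split())
    return max(CORPUS, key=lambda p: len(q_words & set(p.lower().rstrip(".").split())))

print(answer("Who was the 44th President of the United States?"))

The point is not that this is how Watson works, but that a great deal of perceived “intelligence” in such assistants amounts to search and ranking over stored text.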
Should the prospect of creating a system such as those described in science fiction novels become a reality someday, the human race must recognize the flaws and problems this could create. It could even cause the entire species major harm, and we would have many battles over the rights of any such agent. Looking beyond our own imperfections can be difficult; however, it is what could be considered the beauty of human thinking. We are not limited to any specific symbolic logic, nor are we tied to notions of linear and binary thinking. Millions of cells interact with each other every second, creating a complex “universe” within just one person’s body, one which is hard even to grasp scientifically. Although the concept of the soul is debatable to some, the way in which the mind makes irrational decisions is a fact with which even psychology is still grappling. That our dreams of what computers should be have not come to fruition might not even be a terrible consequence, if we look at it from a more abstract view and attempt to redefine what we desire. Perhaps if we use technological enhancements to improve experiences for everyone, the human race could thrive with a technical and a consciousness-based framework cooperating. There is a difference between sorting information through a tool and using that tool to directly alter our own perceptions of reality. One of these involves a complete shift in the natural order of humans; the other only aids communication and leaves us greater free will over its usage. As more people become conscious of their privacy and of their right to share what they wish rather than what big corporations want, so too does our ability to understand these risks grow. One day, science will look back at the early days of computers and think, “we’re so glad we changed our paradigm of what intelligence should be.”

Works Cited:

Deutsch, David. “Philosophy Will Be the Key That Unlocks Artificial Intelligence.” The Guardian. N.p., 2 Oct. 2012. Web. 2 Aug. 2013.
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992. Print.
Hobbes, Thomas. Leviathan: Or the Matter, Form and Power of a Commonwealth Ecclesiastical and Civil. Lexington, KY: Seven Treasures Publications, 2009. Print.
Katz, Yarden. “Noam Chomsky on Where Artificial Intelligence Went Wrong.” TheAtlantic.com. N.p., 1 Nov. 2012. Web. 2 Aug. 2013.
Kurzweil, Ray. How to Create a Mind: The Secret of Human Thought Revealed. New York: Viking, 2012. Print.
Aydede, Murat, and Güven Güzeldere. “Consciousness, Intentionality and Intelligence: Some Foundational Issues for Artificial Intelligence.” Journal of Experimental & Theoretical Artificial Intelligence 12.3 (2000): 263-277. Print.
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3.3 (1980): 417-457. Print.
Thomson, Iain. “IQ Test: ‘Artificial Intelligence System as Smart as a Four-Year-Old.’” The Register. N.p., 16 July 2013. Web. 2 Aug. 2013.
Turing, A. M. “Computing Machinery and Intelligence.” Mind LIX.236 (1950): 433-460. Print.
