
Evan Rose
ENGL100 Composition
Dr. Jenkins
28 September 2010

Real Artificial Intelligence

My primary interest in computer science is artificial intelligence. This interest grew out of my curiosity about philosophical questions concerning computers (e.g., can machines think?). It came to my attention that the problem with answering such questions is that we have yet to draw the line between real intelligence and artificial intelligence. Later, after entrenching myself in books and articles on computer science and AI, I came to see the parallels between the way a human brain works and the way a machine works, and to regard the distinction as moot from a physical perspective. The truth, it appeared, is that we are merely very complicated programs running on very complicated hardware. This conclusion is often regarded as cold or even insulting; however, that says nothing about its legitimacy.

Debates in the field of computer science are typically settled in one of two ways: either mathematics proves something true, or someone demonstrates it through application rather than theory, usually because a demonstration is simpler. Either way, for the more interesting problems, the theory sometimes trails the application by decades, or vice versa.

Alan Turing, a mathematician, logician, and computer scientist, is renowned for his theories on artificial intelligence. He invented what is now commonly called the Turing test (originally named the imitation game by Turing). A very basic delineation of the Turing test is as follows: if a computer can frequently fool a human, through textual communication, into thinking that he or she is speaking to another human, we may deem that computer intelligent (Crevier 23).
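Where exactly "fooling" ends and "thinking" begins is the philosophical question, but the protocol itself is mechanical. Below is a minimal sketch of the imitation game in Python; the function names, the canned respondents, and the coin-flip judge are my own illustrative assumptions, not anything Turing specified:

    import random

    def run_imitation_game(ask_judge, judge_guess, respond_human, respond_machine, rounds=5):
        # One round of the imitation game: the judge converses with a hidden
        # respondent, chosen at random, then guesses which kind it was.
        label, respond = random.choice(
            [("human", respond_human), ("machine", respond_machine)]
        )
        transcript = []
        for i in range(rounds):
            question = ask_judge(i, transcript)
            transcript.append((question, respond(question)))
        guess = judge_guess(transcript)
        fooled = label == "machine" and guess == "human"
        return label, guess, fooled

    # Toy stand-ins; a real test needs a human judge and unrestricted dialogue.
    ask_judge = lambda i, transcript: f"Question {i + 1}: describe a childhood memory."
    respond_human = lambda q: "Mostly summers at my grandmother's lake house."
    respond_machine = lambda q: "Mostly summers at my grandmother's lake house."
    judge_guess = lambda transcript: random.choice(["human", "machine"])

    label, guess, fooled = run_imitation_game(ask_judge, judge_guess, respond_human, respond_machine)
    print(f"respondent was the {label}; judge guessed {guess}; fooled: {fooled}")

The test itself is statistical: what matters is how often, over many such rounds, the machine is mistaken for the human.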

Turing estimated that by the year 2000, computers would fool an average human 70 percent of the time after five minutes of questioning. From projection analysis, however, we can see that the prediction is likely to be off by about twenty-five years (Crevier 24).

In 1959, John McCarthy wrote "Programs With Common Sense," a breakthrough paper on artificial intelligence that formalized an idea for a machine capable of deduction. McCarthy named this theoretical program the Advice Taker and described it as a program for solving problems by manipulating sentences in formal languages (Programs With Common Sense 1). This was likely the first paper to suggest reasoning ability as the key feature of artificial intelligence ("About John McCarthy").
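McCarthy's paper illustrated the idea with everyday sentences such as "at(I, desk)", from which the Advice Taker was to deduce a plan for getting to the airport. Below is a minimal sketch of that style of deduction, forward-chaining over sentences until no rule yields anything new; the particular facts and rules are toy stand-ins of mine, not McCarthy's formal language:

    # Facts and rules are "sentences"; deduction is mechanical manipulation of them.
    facts = {"at(I, desk)", "want(I, airport)"}
    rules = [
        (("at(I, desk)", "want(I, airport)"), "should(I, walk_to_car)"),
        (("should(I, walk_to_car)",), "should(I, drive_to_airport)"),
    ]

    changed = True
    while changed:  # forward-chain: fire every rule whose premises all hold
        changed = False
        for premises, conclusion in rules:
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # now includes both derived "should(...)" sentences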

Artificial intelligence research continued making leaps through the late 1950s and early 1960s. In 1957, scientists at the Carnegie Institute of Technology developed a program that proved 38 of 52 theorems from Bertrand Russell's Principia Mathematica (Dreyfus 5). Herbert Simon, one of the scientists at CIT, concluded that the mystery of such vaguely understood processes as "intuition" and "judgment" begins to dissolve (Dreyfus 6). Simon believed that heuristics, or "rules of thumb," were the only way humans made profound discoveries such as Newton's integral calculus, as well as brilliant chess moves.

Human intelligence is not magical, but it is profound and quite complex. A famous competition occurred in 2011, when Ken Jennings faced IBM's Watson on the television game show Jeopardy!. Watson defeated Jennings in the televised match, just as it had in trial rounds. Watson was the most significant recent breakthrough in the study of AI: it was able to parse a question and then scour its database of information for an answer. This is certainly not an easy feat, and it may suggest that we are not too far from a truly intelligent machine.

Henry Markram is a neuroscientist and the founder of the Blue Brain Project, a brain-simulation effort run on IBM supercomputing hardware. Using 16,000 processors, Markram has simulated about 1,000 virtual neurons, which together emulate a small neocortical region of a rat's brain. He hopes one day to create a simulation of a mammalian brain whose potential for higher thinking can contest humanity's. Markram's incentive for such a bold project is the prospect of helping humanity understand the brain's feats as well as its disorders. He states that if humanity could understand how trivial and deterministic the workings of the brain are, we would have no reason to handle confrontations with anything but civility (Palmer).

Skeptics argue that a machine could never think the way a human can because humans possess creativity, something computers supposedly lack. Markram, however, takes a more objective point of view. "The brain took billions of years to evolve and has many, many rules. If you can describe them faithfully, using mathematics, you can also build realistic models of the brain," asserts Markram (Palmer).
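Blue Brain's neuron models are biophysically detailed, but even a toy model shows what "simulating a neuron" means in practice: numerically stepping a membrane-voltage equation through time. Here is a leaky integrate-and-fire neuron, a standard textbook simplification; every constant is an illustrative assumption of mine, not a Blue Brain parameter:

    def simulate_lif_neuron(input_current=1.5, dt=0.1, t_max=100.0,
                            tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        # Integrate tau * dV/dt = -(V - V_rest) + I in steps of dt (milliseconds);
        # whenever V crosses threshold, record a spike and reset the voltage.
        v = v_rest
        spike_times = []
        for step in range(int(t_max / dt)):
            v += (-(v - v_rest) + input_current) * dt / tau
            if v >= v_threshold:
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    spikes = simulate_lif_neuron()
    print(f"{len(spikes)} spikes in 100 ms of simulated time")

Scaling from one such equation to a thousand interconnected, far more detailed neurons is what consumes Markram's 16,000 processors.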

Rose 4 AI don't study real artificial intelligence, but rather, they study specific fields of AI that are useful in industries. For example, some people study Optical Character Recognition, which is the study of making algorithms for computers to use in detecting handwriting. OCR is used in applications such as tablets and palm pilots. This technology is more immediately attainable and improvable and thus garners more research funding than creating something that can think well enough to do simple tasks that any two year old human can do, such as catching a ball being thrown to a mechanical arm, which is actually a popular research topic. We observe the same trend for other areas such as data mining and statistical learning. They are more presently useful and thus are funded. Science is often held back because the government and the people can't see any immediate value in it. However, when practical applications of a science are brought to attention by the media, suddenly a spike of interests in that science is found. A similar situation occurred with space exploration when the United States was competing with Russia to get to put a man on the moon. Perhaps when the United States and Russia (or any other superpower nations) see the war potential for artificial intelligence, we will have a huge spike in funding for artificial intelligence again. If this happens and the result is real artificial intelligence, then the debate as to whether or not real artificial intelligence could ever exist will be over. The arguments against real artificial intelligence is sometimes a result of religiousness or a belief in intrinsic self worth or specialness. Some of us like to believe that we are just too special to ever be recreated by another human being. It sounds much more comforting to simply agree that human minds are just so complex and wonderous that any attempt to emulate them is silly and impossible. However, if you look at it the other way, you can view humans as being so complex and wonderous that our minds are not limited to just creating things of lesser

Science is often held back because the government and the public cannot see any immediate value in it. However, when the media bring a science's practical applications to public attention, interest in that science suddenly spikes. A similar situation occurred with space exploration, when the United States was competing with the Soviet Union to put a man on the moon. Perhaps when the United States and Russia (or any other superpower nations) see the war potential of artificial intelligence, we will again have a huge spike in funding for it. If this happens and the result is real artificial intelligence, then the debate as to whether real artificial intelligence could ever exist will be over.

The arguments against real artificial intelligence are sometimes a result of religiousness or a belief in intrinsic self-worth or specialness. Some of us like to believe that we are simply too special ever to be recreated by another human being. It sounds much more comforting to agree that human minds are so complex and wondrous that any attempt to emulate them is silly and impossible. However, looked at the other way, humans are so complex and wondrous that our minds are not limited to creating things of lesser intelligence; we can also create things of equal, and perhaps even greater, intelligence than ourselves.

But the question remains: if real artificial intelligence can exist, why doesn't it exist today? The answer is sometimes hardware related. The simplest answer is that our computers cannot yet match the processing power of a brain. Scientists estimate that the human brain is capable of making 20 million billion calculations per second (Westbury), a rate far beyond even the fastest supercomputers that exist today. However, Jeff Hawkins points out that this is an inconclusive answer to why we still do not have artificial intelligence:

    Neurons are slow, so in that half of a second, the information entering your brain can only traverse a chain one hundred neurons long. That is, the brain "computes" solutions to problems like this in one hundred steps or fewer, regardless of how many total neurons might be involved. From the time the light enters your eye to the time you press the button, a chain no longer than one hundred neurons could be involved. ... One hundred computer instructions are barely enough to move a single character on the computer's display, let alone do something interesting. (Hawkins)

This is a typical argument made by connectionists, people who study the connections among neurons in brains. It is called the hundred-step rule: if the human brain can compute solutions in only one hundred steps, why can't our computers? The answer, Hawkins believes, lies in what he calls the neocortical algorithm (Hawkins).

Somehow, our brains perform amazing feats of computation in only one hundred steps. Thus, we should not be looking to improve hardware; rather, we should be looking to uncover the neocortical algorithm. This is where interest in studying neuroscience comes into play for artificial intelligence. Some scientists disagree with studying the brain to create artificial intelligence, noting that we did not study how cheetahs run in order to create vehicles; we simply invented the wheel, and it worked better. However, plenty of attempts have already been made to create artificial intelligence without observing nature, and none has succeeded thus far. We need to rely on the fantastic ability of billions of years of evolution to weed out bad designs for intelligence. Nature has done so much of the work already, and if we can only understand the basic principles behind what it has blindly done, we will be able to create our own intelligent machines. When this finally occurs, that is, when we create artificial intelligence and simulate a human brain, whether through war pressure, excitement in the research, or new applications in industry, the concept alone of having simulated intelligence will give us a fascinating and humbling perspective, one that may be more valuable than the practical applications themselves.

Works Cited

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: Basic, 1993. Print.

Dreyfus, Hubert L. What Computers Can't Do. New York: Harper & Row, 1972. Print.

Hawkins, Jeff, and Sandra Blakeslee. On Intelligence. New York: Times, 2004. Print.

McCarthy, John. "About John McCarthy." About John McCarthy. Web. 27 Oct. 2011. <http://www-formal.stanford.edu/jmc/index.html>.

McCarthy, John. Programs With Common Sense. Thesis. Stanford University, 1959. Print.

Palmer, Katie. "The Man Who Builds Brains." Discover Magazine Jan. 2011: 2. Print.

Reingold, Eyal. "Artificial Intelligence | Can Computers Be Creative?" Department of Psychology, University of Toronto. Web. 17 Nov. 2011. <http://www.psych.utoronto.ca/users/reingold/courses/ai/creative.html>.

Westbury, Chris. "How Fast Is the Brain?" University of Alberta. Web. 01 Dec. 2011. <http://www.ualberta.ca/~chrisw/howfast.html>.
