Ben Yarbrough
Mr. Jones
Introduction to Academic Writing
3 November 2015
Why Robots Will Not Kill Us All
The future of technology, specifically artificial intelligence, has been forecast since the rise of the computer. AI describes an autonomous computer system capable of visual perception, speech recognition, decision making, and translation between languages. Theoretical physicist Michio Kaku describes AI as robots with pattern recognition and common sense, two traits humans exercise naturally. Rapid recent advances in UAVs and other military technologies have caused the media and general public to revisit how close we are to self-aware, human-constructed intelligence. Experts across the scientific community are only now realizing that raw computing power is not the key to self-operating machines. The dawn of thinking inorganic beings will not be seen for decades to come, and when that time does arrive, they will not enslave the human race as the media would have many believe.
In 2009, the world’s top leaders in artificial intelligence came together at the Asilomar conference in California. At the conference, several developments were held up as evidence of a coming breakthrough, such as Predator drones and Honda’s ASIMO robot. Elon Musk, the entrepreneur behind Tesla and SpaceX, has speculated that humans “could be reduced to pets” by coming super-intelligent machines (Larson). According to Moore’s Law, computer power doubles roughly every 18 months as ever-smaller circuits are etched into silicon chips using UV light. Some of the world’s most powerful supercomputers, such as IBM’s Blue Gene, have even been used to simulate brain activity.
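To illustrate what that doubling rate implies, the short Python sketch below projects relative computing power over time; the baseline value and time spans are arbitrary figures chosen only for illustration, not measurements.

    # Illustrative projection of Moore's Law: power doubling every 18 months.
    # The baseline of 1.0 and the chosen horizons are arbitrary example values.

    def projected_power(years, doubling_period_years=1.5, baseline=1.0):
        """Relative computing power after `years`, assuming steady doubling."""
        return baseline * 2 ** (years / doubling_period_years)

    for years in (3, 9, 15, 30):
        print(f"After {years:2d} years: {projected_power(years):,.0f}x today's power")

Under these assumptions, thirty years of uninterrupted doubling would multiply computing power by roughly a million, which is why forecasts of machine intelligence lean so heavily on Moore’s Law continuing.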
For all of the hype surrounding advancing computers, “there is less than meets the eye” in each of these examples (Kaku 76). Predator drones may be unmanned and able to kill terrorists with impunity, but they do not operate without a certified human controller making the calls with a joystick. ASIMO may be able to run, climb stairs, and converse in several languages, but all of these feats are cleverly scripted beforehand by a team continuously writing code into ASIMO’s computer. Even our most powerful supercomputers, such as Blue Gene, can only simulate the neural activity of a rodent, roughly 2 million neurons. Compare that with the roughly 100 billion neurons of a human brain, and remember that simulating neural activity is not the same as recreating the animal’s behavior. To put that into perspective, Blue Gene computes at “the blinding speed of 500 trillion operations per second,” costs hundreds of millions of dollars to build, and occupies a quarter of an acre of land (Kaku 104). Simulating a human brain may seem an impossible feat, but if Moore’s Law continues its trend of exponential growth in computing power, it could become attainable in the coming decades. One problem exists with that plan: Moore’s Law is coming to an end very soon.
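A back-of-the-envelope calculation shows how far away brain-scale simulation remains even if Moore’s Law held; it rests on the simplifying assumption that the number of neurons a machine can simulate scales directly with raw computing power.

    import math

    # Rough estimate: how many 18-month doublings separate a ~2 million neuron
    # simulation from the ~100 billion neurons of a human brain?
    # Assumes simulated neuron count scales linearly with computing power.

    current_neurons = 2e6
    human_neurons = 1e11

    doublings = math.log2(human_neurons / current_neurons)  # about 15.6
    years = doublings * 1.5                                  # about 23 years

    print(f"Doublings needed: {doublings:.1f}")
    print(f"Years at Moore's Law pace: {years:.0f}")

Even by this generous estimate, more than two decades of uninterrupted exponential growth would be required, and that assumes the growth never stops.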
The secret behind the computer power revolution is failing in several ways, each of which slows our advance. The smallest usable wavelength of UV light is about 10 nanometers, which can etch a transistor roughly 30 atoms across; as a result, every computer made today packs millions upon millions of transistors onto a single chip. Even if a higher-frequency wave such as an X-ray is used, it becomes “physically impossible” to etch transistors the size of individual atoms because of the uncertainty principle (Kaku 46). This principle states that one cannot pinpoint the exact position of an electron; it can effectively be in several places at once. In a transistor that small, electrons leak where they should not, short-circuiting the system and rendering it useless. To put this into perspective, 14-nanometer chips are already in commercial production, with 10- and 7-nanometer nodes in development and expected to reach the mass market within two or three years. Intel claims to have staved off the end of Moore’s Law with parallel computing, but heat becomes the other side of the problem. While parallel designs may deliver two, four, or even eight times the computing power, packing that many transistors into the same area leaves less surface per transistor to shed heat, so temperatures climb far faster and would melt the delicate components of the silicon chip. The long-term answer to this problem lies with quantum, DNA, and other developing forms of computing. Our advance in computing power will continue, but it will be far slower, nowhere near the exponential rate of Moore’s Law.
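The scale limits described above can be sanity-checked with simple division; the effective atomic spacing of about 0.3 nanometers used below is an approximation for silicon, chosen only to illustrate the orders of magnitude involved.

    # Rough check of how many atoms fit across a given transistor feature size.
    # Assumes an effective atomic spacing of ~0.3 nm for silicon (approximation).

    ATOM_SPACING_NM = 0.3

    for feature_nm in (14, 10, 7):
        atoms_across = feature_nm / ATOM_SPACING_NM
        print(f"A {feature_nm} nm feature is roughly {atoms_across:.0f} atoms across")

A 10-nanometer feature works out to roughly thirty atoms across, matching Kaku’s figure, and a 7-nanometer feature leaves barely twenty, which is why quantum effects begin to dominate at these sizes.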
Once we have the computing power, after several decades or possibly even centuries, what will be the means of creating this self-awareness? Two approaches are being taken to create learning robots: top-down and bottom-up. The top-down approach works by manually encoding such things as common sense and pattern recognition into a computer. An example is a pattern-recognition system built by Tomaso Poggio’s team at MIT, which recognizes patterns in pictures and can name objects faster than humans. To achieve this, the team had to model the machine’s recognition on the way we do it: instead of splitting an object into shapes, as computers have done in the past, the image is “split into many layers” and recognized immediately (Kaku 85). The limits of this approach are the number of objects that must be painstakingly encoded onto the computer’s hard drive and the fact that recognition stops at 2-D images. It seems promising until you consider that the average person knows approximately 100 million things about the world. The bottom-up approach, led by Yann LeCun at New York University, gives a robot named LAGR the ability to learn on its own. LAGR has “hardly any images in its memory,” but that hardly matters (Kaku 87). The robot learns by running into objects on a course: every pass refines a mental map that the computer commits to memory. Both approaches have seen considerable progress in recent years.
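Stripped to its essentials, the bottom-up idea is trial-and-error map building. The toy grid world below is a hypothetical sketch of that idea only; it is not a description of LAGR’s actual software, and every name and value in it is invented for illustration.

    import random

    # Toy bottom-up learner: a robot wanders a grid, bumps into hidden obstacles,
    # and records each collision so later passes steer around those cells.

    GRID = 5
    obstacles = {(1, 2), (3, 1), (2, 4)}   # unknown to the robot at the start
    learned_map = set()                    # cells the robot learns to avoid

    def wander(passes=20, steps=15):
        for _ in range(passes):
            pos = (0, 0)
            for _ in range(steps):
                dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                nxt = (pos[0] + dx, pos[1] + dy)
                if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID):
                    continue                 # off the course; try another move
                if nxt in learned_map:
                    continue                 # remembered obstacle; steer around it
                if nxt in obstacles:
                    learned_map.add(nxt)     # collision: commit the cell to memory
                    continue
                pos = nxt

    wander()
    print(f"Obstacles discovered by wandering: {sorted(learned_map)}")

Each pass leaves the robot with a slightly better map, the same pattern of learning by collision rather than by pre-loaded knowledge that Kaku attributes to LAGR.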
In the big picture, even a cockroach has pattern recognition and can avoid objects; for now, “Mother Nature’s lowliest creatures can outsmart our most intelligent robots” (Kaku 87). When the average person thinks of the movie Terminator, they think of robots taking over as soon as they achieve consciousness. If we imagine robots running our lives as in I, Robot, we imagine them somehow figuring out that they can rule us despite the safety measures we put in place. With our current understanding of artificial intelligence, we cannot even begin to create the machines dreamed up by the movie industry, and we are certainly not already run by computers, as in The Matrix, being harvested for our thermal energy. Friendly AI is a far more likely scenario than alternatives such as a singularity ravaging our planet in a thirst for greater intelligence. It could be as simple as a chip in the robot’s brain that “automatically shuts them off” if they begin thinking murderous thoughts or consider removing the chip (Kaku 119). Other options for avoiding such disasters include merging with our robotic creations and modifying our own bodies to be more intelligent and live forever.
Every argument around such topics ultimately rests on a single definition of consciousness, yet no official definition of the word exists that all scientists agree upon. Most can agree that humans are conscious, but beyond that views vary wildly, from humans being the only conscious beings to inanimate objects having qualities of this elusive word. A thermostat “can sense the temperature of the environment” and act accordingly, but does that make it conscious (Kaku 111)? There may instead be a scale of consciousness, with robots slowly ascending the rankings until they reach humanity’s current level.
Robots are likely to start impacting our lives in the near future, and intelligent, conscious robots, however they are defined, are likely to be created eventually as well. There are many obstacles: the end of Moore’s Law, reverse-engineering the human brain, discovering the true meaning of awareness, and so on. The change will be gradual in the eyes of our generation as computers slowly climb the evolutionary scale in processing power and understanding of their environment. Movies that whip the public and media into a frenzy will continue to make many wary of this progress, but that wariness may work to our advantage, ensuring that the brilliant minds behind Blue Gene, ASIMO, Poggio’s system, and LAGR take the proper precautions to make these robots friendly and benevolent. In the midst of such advancements, it is hard to judge when, or even whether, such questions about AI will ever be answered.
Works Cited
Bilton, Nick. “Artificial Intelligence as a Threat.” The New York Times. The New York Times, 5
Nov. 2014. Web. 3 Nov. 2015.
Kaku, Michio. Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives
by the Year 2100. New York: Doubleday, 2011. Print.
Larson, Erik. “Questioning the Hype About Artificial Intelligence.” The Atlantic. Atlantic Media
Company, 14 May 2015. Web. 3 Nov. 2015.
“End of Moore’s Law: It’s Not Just about Physics.” CNET. Web. 3 Nov. 2015.