AI Research

I wrote this essay in November 2001. An awful lot of digital water has passed under the bridge since then. One day it’ll be worthwhile to revisit the predictions quoted here and analyse how they’ve stood up over the years.


Jaron Lanier, one of the main developers of VRML, once stated that artificial intelligence just dumbs down humans; it does not make computers smarter. Is AI research a misguided direction for computer technology?

Artificial intelligence (AI) could be described as a technology of complex information-processing problems that have as their basis some aspects of biological information processing (Marr, 1981). The notion of AI was conceived in 1950 when the English mathematician Alan Turing published the paper “Computing Machinery and Intelligence”, which proposed what is now known as the Turing Test. Turing proposed that if you cannot perceive the difference between the responses of a computer and a human being, then you have no basis for believing that they are different. From this thesis, Turing says, you must conclude that computers and humans are the same.

Jaron Lanier (1996) pointed out that the Turing Test, in his opinion, has one basic flaw. If a computer and a person become indistinguishable, that may mean that the computer has become smart and human-like. However, another interpretation may be that if people believe too much in the ideas and abstractions of computers, they may have a tendency to reduce themselves to support that illusion. The flaw could be that people may become stupid and computer-like. Lanier is a strong promoter of primary humanism, and he questions whether we should think of computers as things that exist in their own right or simply as conduits between us. Lanier believes that computers should exist only subject to human interpretation.

The scenario of artificially intelligent machines that will be able to do our work for us could be just thirty years away if we base our predictions on the past rates at which computer technology has advanced (Moravec, 2000). But will the human species be able to survive an encounter with a superior, perhaps robotic, species? The first dream of robotics is that machines can do our work for us, allowing us to live utopian lives of leisure. Dyson (1997) warns us: “In the game of life there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines”. The purpose of this paper is to examine and discuss artificial intelligence research and whether it could be a misguided direction for computer technology.

Moravec bases his prediction of the growth of computer technology partly on the fact that for forty years the power of transistor-based computing has been growing exponentially in accordance with Moore’s Law, which states that the number of transistors that can be packed on a chip doubles every 18 months. Scientists say that by the year 2020, with transistor features just a few atoms thick, Moore’s Law will have run its course and silicon chips will have reached their physical limits (Kurzweil, 1999). However, new technology in the form of ‘carbon nanotube transistors’, which can be used to create transistors 500 times smaller than their silicon equivalents, could well mean that Moravec’s predicted timing eventuates sooner than expected [1].
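As a rough illustration of the compounding Moore’s Law describes, the doubling rule can be written as a one-line projection. This is a sketch only: the starting figure for Intel’s 4004 chip is an assumption drawn from general chip history, not from the sources cited here, and real chips have not tracked the naive 18-month curve exactly.

```python
def transistors(start_count: float, years: float, doubling_period: float = 1.5) -> float:
    """Project a transistor count forward under Moore's Law:
    the count doubles once every `doubling_period` years (18 months)."""
    return start_count * 2 ** (years / doubling_period)

# Illustrative figures only: Intel's 4004 (1971) packed roughly 2,300 transistors.
# Thirty years of 18-month doublings is 20 doublings, a factor of about a million:
print(round(transistors(2300, 30)))  # about 2.4 billion
```

The point is only the shape of the curve: a fixed doubling period compounds to roughly eight orders of magnitude over forty years, which is why both Moravec and Kurzweil feel able to extrapolate it so far.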

Kurzweil (1999, p. 2) makes the bold statement that before the current century is over, humans will no longer be the most intelligent life form on the planet. However, Kurzweil qualifies his statement: it will depend on how “human” is defined, and the primary political and philosophical issue of the next century could well be the definition of who we are. Kurzweil warns that the progression of computer intelligence will sneak up on humans, and cites the example of Garry Kasparov’s confidence in 1990 that a computer would never defeat him at chess; by 1997 Kasparov had been defeated by a computer.

Technology holds untold promises: near immortality, and treatments and possibly cures for most diseases, all of which could improve the quality of life. Yet each of these small technologies leads to an accumulation of great power and, concurrently, great danger. That great danger could be foreshadowed in the out-of-control replication of computer viruses being seen today. At worst a virus can take down a network, but uncontrolled self-replication in newer technologies runs the risk of creating substantial damage in the physical world (Joy, 2000).

Joy (2000) agrees with Moravec and Kurzweil’s predictions of advances in computing power that will enable the building of a robot with enough intelligence to create its own species by the year 2030. Joy feels that humans are in danger because they have become so accustomed to living with almost routine scientific breakthroughs that they are now biased towards instant familiarity and unquestioning acceptance. Joy argues that society needs to come to terms with the fact that robotics, nanotechnology and genetic engineering pose threats different from those that have come before. Robots, engineered organisms and nanobots share a dangerously amplifying factor: they can self-replicate and quickly get out of control. These 21st-century technologies can spawn whole new classes of accidents and abuses, and for the first time knowledge alone will bring these accidents and abuses within the reach of the individual and the small group. Joy believes that knowledge-enabled weapons of mass destruction, hugely amplified by the power of self-replication, are a real possibility, and that society is on the cusp of the further perfection of extreme evil. Accordingly he calls on scientists to halt potentially dangerous research.

Because society is increasingly dependent on computers and other machines, many critics of technology fear that, if research is not curtailed, computers will become so complex that humans will become slavishly dependent upon them for making decisions. This is part of the argument made by Ted Kaczynski in what is often referred to as the “Unabomber Manifesto”. Kaczynski postulated that if computer scientists succeeded in developing intelligent machines that can do things better than humans, then either the machines might make all of their own decisions or human control over the machines might be retained. Kaczynski argued that if humans were foolish enough to hand over control to the machines, then the fate of the human race would be at their mercy. However, Kaczynski saw a greater threat: the human race might easily permit itself to drift into a position of such dependence on machines that it would have no practical alternative but to accept all of the machines’ decisions. As society becomes more complex and the machines more intelligent, a stage may be reached where people will let machines make the decisions because they bring better results than human decisions. At that stage, says Kaczynski, the machines will effectively be in control and people will be in no position to turn them off (Kaczynski, 1995).

Kaczynski also postulated another scenario: if people retain control over the machines, then control of the large systems will be in the hands of a tiny elite. The elite will have greater control over the masses, and because human work will no longer be necessary, the masses will become superfluous. If the elite were ruthless, they might simply decide to eliminate the mass of humanity. If they were humane, they might instead introduce techniques to reduce the birth rate until the masses are extinct. If they act as ‘good shepherds’, then the masses will have no point to their lives and will be reduced to the level of domestic animals (Kaczynski, 1995).

Are Joy’s and Kaczynski’s fears and predictions ill-founded? Moravec believes that, due to the advances in AI, machines will become as intelligent as humans and replace them as the dominant life form on Earth. But according to Moravec, the machines will love humans. In an interview with Platt (1995, pp. 1-2) Moravec stated that the human form needs to undergo unnatural training to get the brain even half suited to scientific work, and then you live only long enough to start figuring things out before the brain deteriorates and you die. “But wouldn’t it be great”, Moravec says, “if you could enhance your abilities via artificial intelligence, and extend your lifespan, and improve on the human condition?” According to Platt, Moravec has suggested severing a volunteer’s corpus callosum and interposing a computer to monitor thought traffic. Moravec feels that after the computer has had time to interpret the code, it can start inserting its own input. Moravec has also suggested that futuristic robot surgeons could be used to peel away the brains of conscious patients and analyze each neuron until the entire mind has been removed from the body and transferred to a machine. Moravec is chief scientist at Carnegie Mellon University’s Robotics Institute, the largest robot research laboratory in the United States, and he firmly believes that machines will begin their own process of evolution and render us extinct in our present form. Platt says that Moravec is laying the groundwork in his laboratory that may well facilitate the quantum leap from our current form to the ultimate form of human transcendence.

Moravec’s predictions sound chilling with regard to human survival as a species, and Joy has warned that society has become far too accepting of the position and relevance of machines. Joy’s argument seems aligned with Lanier’s statement of the danger that humans may dumb themselves down to the level of the computers. Lanier also believes that people have an enormous amount of anxiety about what a person actually is, which aligns with Kurzweil’s earlier statement. In an article in Harper’s Magazine (1997, p. 45) Lanier suggests that the better computers become at performing tasks people find hard to do, the more the definition of what a person is becomes threatened. As technologies become able to simulate or take on human identity, there is a profound fear of losing one’s own identity. If technology is capable of replicating an individual or constructing a mind, then it follows that it will also be capable of making something superior to that individual, a scenario that threatens humanity as it is currently defined. Joy (2000) is concerned that humans may well be on the path to losing their “humanness” and worries that the software he is creating could well be the very tool that will facilitate the construction of technology to replace the human species. But what is the basis of Joy’s assumption that humans will simply allow technology to overcome the species?

Joy (2000) cites the development of the atomic bomb as an example. Initially, research undertaken by the United States to develop the bomb was spurred on by the omnipresent threat of Hitler’s Germany developing the technology first. After this threat was nullified, research still continued at a great pace despite scientists being aware of a number of possible dangers, not the least of which was the calculation that an atomic explosion might set fire to the atmosphere. Within a month of the first successful atomic test, bombs were dropped on Hiroshima and Nagasaki. It would have been possible to demonstrate to the Japanese what this new weapon was capable of without any loss of life, but it is considered that the bombs were dropped because nobody had the courage or foresight to say no. The physicist J. Robert Oppenheimer stood firmly behind the scientific attitude, saying, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences” [2]. The scientists, however, brought society to the nuclear precipice. It is very possible that the scientists of today are moving humans closer towards the technological precipice, and that once again there will be nobody with the foresight or courage to say no.

Are the predictions of Joy, Lanier, Kaczynski, Kurzweil and Moravec perched precipitously on the brink of reality? Will it truly be possible to replicate the human mind and create machines that think for themselves? Marvin Minsky published this definition of AI in 1968: “Artificial Intelligence is the science of making machines do things that would require intelligence if done by man” [3]. Some researchers have faulted this definition on two grounds. First, they claim that AI is not a science but rather a combination of two disciplines: the branch of psychology that studies how people think, and the branch of engineering that builds machines that can compute. Second, they query whether these two disciplines share the broad goal declared in Minsky’s definition: one group is interested in understanding the intelligence of humans, while the other is interested in improving the intelligence of machines (Lindsay, 1988).

Lindsay (1988, p. 2) states that a representation of intelligence would place AI systems somewhere in between conventional computing and human intelligence. Lindsay uses Masoud Yazdani’s analogy of flight to help explain the paradox that computers can think but that they cannot really “think”. Yazdani asks us to consider the analogy between objects that think and objects that fly. For centuries birds were the ultimate flying objects, just as humans were the ultimate thinking objects. The possibility of a machine made from beer cans flying was as improbable then as the notion of a machine made from beer cans being able to think is to most people today. Yazdani likens the belief that only the biochemical properties of the brain can create beliefs and thoughts (thereby preventing any machine from ever thinking) to the belief that only the biochemical properties of birds allow them to fly. However, research has revealed that the aerodynamic properties that allow birds to fly can be simulated: that research identified a complex body of laws governing flight that transcends the biochemical attributes of birds. Yazdani concludes that “artificial flight has progressed, not in direct imitation of natural flight, nor instead of natural flight. It seems reasonable to assume that AI will eventually come to have a similar sort of relationship with natural intelligence. That is to say, AI should expect neither to imitate nor displace human intelligence” (Lindsay, 1988).

The work of Moravec and his cohorts would seem to be taking humanity far beyond the moderate approach espoused by Yazdani, but is the research too far down the road to alter course? Joy (2000) seems to think not, although he warns that the last chance for humans to assert control, the fail-safe point, is rapidly approaching. We already have the first pet robots, says Joy, and the breakthrough to wild self-replication in robots could come as suddenly as the surprising breakthrough in the cloning of a mammal.

Joy (2000) quotes Thoreau as having said that humans will be “rich in proportion to the number of things which they can afford to let alone”, and suggests that it is worth questioning whether it is necessary to entertain such a high risk of destruction to gain yet more knowledge and yet more things in the pursuit of happiness. Common sense dictates that there is a limit to our material needs, and that certain knowledge is too dangerous and is best forgone (Joy, 2000).

Some scientists disagree with Joy’s gloomy forecasts but agree that society must err on the side of caution. Brown & Duguid (2000) respond to Joy’s article by reminding the reader that pessimists and Luddites have, in the past, confidently envisioned a nuclear apocalypse that never came to pass. While Brown & Duguid do not suggest that Joy is a Luddite, they do maintain that Joy is being remarkably tunnel-visioned in focusing solely on technology and leaving people out of the picture. They cite the case of nuclear development and the forces other than technology that have been at work, such as antinuclear protests, the environmental movement, government agencies and anti-proliferation treaties. Similar social forces are at work today on the very technologies that the techno-enthusiasts have trouble bringing into view. Brown & Duguid argue that robots may seem intelligent, but their intelligence is profoundly hampered by their inability to learn in a significant way. Indeed, the thing that handicaps robots the most is their lack of a social existence, the very thing that shapes people as humans. Science and society are constantly forming new dynamic equilibriums with far-reaching implications, and it is suggested that society should be grateful to Joy for his warnings, as they may help to prevent the very future that is the basis of his concern.

There appears to be no doubt that the very near future holds massive changes in store for technology and the way in which we are able to utilise it. By the year 2009 it is possible that a $1,000 computer may be able to perform about one trillion calculations per second. Supercomputers could match the capacity of the human brain, performing twenty million billion calculations per second. Unused computers on the Internet may be harvested to create virtual parallel supercomputers with the hardware capacity of the human brain. Research into parallel neural nets, genetic algorithms and other forms of chaos-theory computing may have begun to replace conventional sequential programming (Kurzweil, 1999).

By the year 2029 it is possible that a $1,000 computer could have the computing capacity of one thousand human brains. Many of the specialized regions of the human brain may have been decoded and their massively parallel algorithms deciphered. Displays could be mounted directly into the eyes as permanent or removable implants, with images projected directly onto the retina as high-resolution 3D overlays. Direct neural pathways may have been developed and perfected for high-bandwidth connection to the human brain, and a range of neural implants developed to enhance visual and auditory perception and interpretation, memory and reasoning (Kurzweil, 1999).
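Kurzweil’s two price-performance figures imply a growth rate even faster than Moore’s 18-month doubling. A back-of-envelope check, using only the numbers quoted above (one trillion calculations per second for $1,000 in 2009; one thousand brains, at twenty million billion calculations per second each, for $1,000 in 2029):

```python
import math

CALCS_2009 = 1e12          # $1,000 machine in 2009: one trillion calc/s
BRAIN = 2e16               # one human brain: twenty million billion calc/s
CALCS_2029 = 1000 * BRAIN  # $1,000 machine in 2029: one thousand brains

doublings = math.log2(CALCS_2029 / CALCS_2009)  # doublings needed over 20 years
period = 20 / doublings                          # implied years per doubling
print(f"{doublings:.1f} doublings, one every {period:.2f} years")
```

The implied doubling period comes out at well under a year, roughly twice the pace of the 18-month Moore’s Law cycle cited earlier, which shows that Kurzweil’s timetable rests on his assumption that the rate of progress itself accelerates.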

The possible benefits of AI research by the year 2029 could mean vast improvements in education and learning. Highly intelligent devices could provide improved lifestyles for the physically and mentally impaired in our society. Huge enhancements could be possible in communication, business and economics, politics and society. Bionic organs and other devices could improve and extend the human lifespan (Kurzweil, 1999).

It seems likely also that by 2029 a sharp division will no longer exist between the human world and the machine world. Computers will routinely pass the Turing Test, and controversy will exist about whether or not machine intelligence equals human intelligence. The distinction between the two intelligences may blur as machine intelligence is increasingly derived from the design of human intelligence, and human intelligence is increasingly enhanced by machine intelligence (Kurzweil, 1999).

The possible benefits for society that may evolve from AI research are too great to insist that we cease all advancement. However, it does appear that some of the major research currently being undertaken in this field can be regarded as misguided. In the pursuit of new technologies, the material presented suggests that it would be prudent to retain the sense of human identity in the face of overwhelming technology. Humans are the source of their own values, creativity and sense of reality. If this can be remembered, then all of the work with computers will be worthwhile and beautiful (Lanier, 1996).


(1). Information about Carbon Nanotube Transistor Technology available on-line here [2001, Dec.3]

(2). As quoted in: Joy, B. 2000, Why the Future Doesn’t Need Us. Available:
http://www.wired.com/wired/archive/8.04/joy_pr.html [2001, Nov.30]

(3). As quoted in: Lindsay, S. 1988, Practical Applications of Expert Systems, QED Information Sciences, Inc., Wellesley, Massachusetts.

References:
Brown, J. & Duguid, P. 2000, A Response to Bill Joy and the Doom and Gloom Technofuturists. Available:
http://www.aaas.org/spp/dspp/rd/ch4.pdf [2001, Dec.2]
Dyson, G. 1997, Darwin Among the Machines: the Evolution of Global Intelligence, Addison-Wesley Publishing Co., Reading, Massachusetts.
Hitt, J., Bailey, J., Gelernter, D., Lanier, J. & Siebert, C. 1997, ‘Our Machines, Ourselves’ (panel discussion), Harper’s Magazine, May 1997, vol. 294, no. 1764, p. 45(10).
IBM Scientists Develop Breakthrough Carbon Nanotube Transistor Technology.
Available: here [2001, Dec.3]

Joy, B. 2000, Why the Future Doesn’t Need Us. Available:
http://www.wired.com/wired/archive/8.04/joy_pr.html [2001, Nov.30]
Kaczynski, T. 1995, The Unabomber’s Manifesto: Industrial Society and Its Future. Available:
http://hotwired.lycos.com/special/unabom/list.html [2001, Dec.2]
Kurzweil, R. 1999, The Age of Spiritual Machines, Viking Penguin, New York.
Lanier, J. 1996, The Prodigy, in Digerati: Encounters with the Cyber Elite, John Brockman, HardWired Books, USA.
Lindsay, S. 1988, Practical Applications of Expert Systems, QED Information Sciences, Inc., Wellesley, Massachusetts.
Marr, D. 1981, Artificial Intelligence – A Personal View, in Mind Design, ed. Haugeland, J., MIT Press, Cambridge, Massachusetts.

Moravec, H. 2000, Ripples and Puddles. Available: here [2001, Nov. 30]
Platt, C. 1995, Superhumanism. Available:
http://www.wired.com/wired/archive/3.10/moravec_pr.html [2001, Nov 30]