
Friday, 1 July 2011

The Hard Takeoff Hypothesis (extended abstract)

Ben Goertzel, Novamente LLC

The Hard Takeoff Hypothesis

Vernor Vinge, Ray Kurzweil and others have hypothesized the future occurrence of a “technological Singularity” -- meaning, roughly speaking, an interval of time during which pragmatically-important, broad-based technological change occurs so fast that the individual human mind can no longer follow what’s happening even generally and qualitatively. Plotting curves of technological progress in various areas suggests that, if current trends continue, we will reach some sort of technological Singularity around 2040-2060.
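As a purely illustrative sketch of what such an extrapolation involves (the starting level, doubling time and threshold below are made-up placeholders, not figures from Kurzweil or anyone else), one is essentially solving for the year at which an exponentially growing capability curve crosses some chosen threshold:

import math

def crossing_year(start_year=2010, start_level=1.0, threshold=1e9,
                  doubling_time_years=2.0):
    """Year at which an exponentially growing capability metric
    (doubling every `doubling_time_years`) first exceeds `threshold`.
    All parameter values here are illustrative placeholders."""
    doublings_needed = math.log2(threshold / start_level)
    return start_year + doublings_needed * doubling_time_years

print(crossing_year())  # ~2069.8 with these made-up numbers

The interesting debates are, of course, about whether the curves really are exponential and what the relevant thresholds are, not about this trivial arithmetic.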

Of course, this sort of extrapolation is by no means certain. Among many counterarguments, one might argue that the inertia of human systems will cause the rate of technological progress to flatten out at a certain point. No matter how fast new ideas are conceived, human socioeconomic systems may take a certain amount of time to incorporate them, because humans intrinsically operate on a certain time-scale. For this reason Max More has suggested that we might experience something more like a Surge than a Singularity – a more gradual, though still amazing and ultimately humanity-transcending, advent of advanced technologies.

On the other hand, if a point is reached at which most humanly-relevant tasks (practical as well as scientific and technological) are carried out by advanced AI systems, then from that point on the “human inertia factor” would seem not to apply anymore. There are many uncertainties, but at the very least, I believe the notion of a technological Singularity driven by Artificial General Intelligences (AGIs) discovering and then deploying new technology and science is a plausible and feasible one.

Within this vision of the Singularity, an important question arises regarding the capability for self-improvement on the part of the AGI systems driving technological development. It’s possible that human beings could architect a specific, stable AGI system with moderately greater-than-human intelligence, which would then develop technologies at an extremely rapid rate, so fast as to appear like “essentially infinitely fast technological progress” to the human mind. However, another alternative is that humans begin by architecting roughly human-level AGI systems that are capable but not astoundingly so – and then these AGI systems improve themselves, or create new and improved AGI systems, and so on and so forth through many iterations. In this case, one has the question of how rapidly this self-improvement proceeds.

In this context, some futurist thinkers have found it useful to introduce the heuristic distinction between a “hard takeoff” and a “soft takeoff.” A hard takeoff scenario is one where an AGI system increases its own intelligence so rapidly that, within a brief period of months, weeks or maybe even hours, a system with roughly human-level intelligence suddenly becomes one with radically superhuman general intelligence. A soft takeoff scenario is one where an AGI system gradually increases its own intelligence step-by-step over years or decades, i.e. slowly enough that humans have the chance to monitor each step along the way and adjust the AGI system as they deem necessary. Either a hard or a soft takeoff fits I.J. Good’s notion of an “intelligence explosion” as a path to Singularity.

What I call the “Hard Takeoff Hypothesis” is the hypothesis that a hard takeoff will occur, and will be a major driving force behind a technological Singularity. Thus the Hard Takeoff Hypothesis is a special case of the Singularity Hypothesis.

It’s important to note that the distinction between a hard and soft takeoff is a human distinction rather than a purely technological distinction. The distinction has to do with how the rate of intelligence increase of self-improving AGI systems compares to the rate of processing of human minds and societies. However, this sort of human distinction may be very important where the Singularity is concerned, because after all the Singularity, if it occurs, will be a phenomenon of human society, not one of technology alone.

The main contribution of this paper will be to outline some fairly specific sufficient conditions for an AGI system to undertake a hard takeoff. The first condition explored is that the AGI system must lie in a connected region of “AGI system space” (which we may more informally call “mindspace”) that, roughly speaking,
  1. includes AGI systems with general intelligence vastly greater than that of humans
  2. has the “smoothness” property that similarly architected systems tend to have similar general intelligence levels. 
If this condition holds, then it follows that one can initiate a takeoff by choosing a single AGI system in the given mindspace region, and letting it spend part of its time figuring out how to vary itself slightly to improve its general intelligence. A series of these incremental improvements will then lead to greater and greater general intelligence.
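A minimal sketch of the kind of process this condition licenses, treating “mindspace” abstractly as a vector of parameters and using a stand-in intelligence-evaluation function (the names, the random-perturbation search and the toy “landscape” are my illustrative assumptions, not a claim about how a real AGI would modify itself):

import random

def self_improve(params, evaluate, step_size=0.05, iterations=1000):
    """Toy hill-climb over a 'mindspace' represented as a list of numbers.
    `evaluate(params)` stands in for measuring a system's general
    intelligence; the smoothness condition is what makes small random
    variations a workable search strategy."""
    best, best_score = params, evaluate(params)
    for _ in range(iterations):
        candidate = [p + random.gauss(0, step_size) for p in best]
        score = evaluate(candidate)
        if score > best_score:          # keep only strict improvements
            best, best_score = candidate, score
    return best, best_score

# Illustrative smooth "intelligence landscape" (pure placeholder):
landscape = lambda ps: -sum((p - 1.0) ** 2 for p in ps)
print(self_improve([0.0] * 5, landscape))

The point of the sketch is only that, given smoothness, incremental local variation suffices; nothing hinges on the particular search method shown.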

The hardness versus softness of the takeoff then has to do with the amount of time needed to carry out this process of “exploring slight variations.” This leads to the introduction of a second condition. If one’s region of mindspace obeys the first condition laid out above, and also consists of AGI systems for which adding more hardware tends to accelerate system speed significantly, without impairing intelligence, then it follows that one can make the takeoff hard by simply adding more hardware. In this case, the hard vs. soft nature of a takeoff depends largely on the cost of adding new computer hardware, at the time when an appropriately architected AGI system is created.

Roughly speaking, if AGI architecture advances fast enough relative to computer hardware, we are more likely to have a soft takeoff, because the learning involved in progressive self-improvement may take a long while. But if computer hardware advances quickly enough relative to AGI architecture, then we are more likely to have a hard takeoff, via deploying AGI architectures on hardware sufficiently powerful to enable self-improvement that is extremely rapid on the human time-scale.
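A back-of-the-envelope way to see this trade-off (all numbers below are invented for illustration) is to model the wall-clock duration of a takeoff as the per-step self-improvement time divided by however much the available hardware speeds the system up relative to human-equivalent speed:

def takeoff_duration_years(steps=100, human_time_per_step_years=1.0,
                           hardware_speedup=1.0):
    """Total wall-clock time for `steps` rounds of self-improvement,
    assuming each round would take a human-speed system
    `human_time_per_step_years`, and that extra hardware divides that
    time by `hardware_speedup`. All values are illustrative."""
    return steps * human_time_per_step_years / hardware_speedup

# Architecture-limited regime: modest speedup -> soft takeoff
print(takeoff_duration_years(hardware_speedup=2))      # 50.0 years
# Hardware-rich regime: massive speedup -> hard takeoff
print(takeoff_duration_years(hardware_speedup=50000))  # 0.002 years (< 1 day)

The same number of self-improvement steps can thus register as a decades-long soft takeoff or an overnight hard takeoff, depending on how much cheap hardware is available when the right architecture arrives.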

Of course, we must consider the possibility that the AGI itself develops new varieties of computing hardware. But this possibility doesn’t really alter the discussion much – even so, we have to ask whether the new hardware it creates in its “youth” will be sufficiently powerful to enable a hard takeoff, or whether there will be a slower “virtuous cycle” of feedback between its intelligence improvements and its hardware improvements.
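One hedged way to picture that feedback loop (the update rules and gain constants below are arbitrary illustrations, not a model of any actual system) is as a pair of coupled recurrences, where intelligence gains depend on current hardware and hardware gains depend on current intelligence:

def virtuous_cycle(intelligence=1.0, hardware=1.0, rounds=20,
                   intel_gain=0.1, hw_gain=0.05):
    """Toy coupled feedback: each round, intelligence grows in proportion
    to available hardware, and hardware grows in proportion to current
    intelligence. Whether the result looks 'hard' or 'soft' on a human
    time-scale depends entirely on the (made-up) gain constants."""
    history = []
    for _ in range(rounds):
        intelligence *= 1 + intel_gain * hardware
        hardware *= 1 + hw_gain * intelligence
        history.append((round(intelligence, 2), round(hardware, 2)))
    return history

print(virtuous_cycle()[-1])  # final (intelligence, hardware) after 20 rounds

Depending on the gains, such a loop can crawl along for many rounds before blowing up, which is exactly the distinction between the slower virtuous cycle and a hard takeoff.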

To make these considerations more concrete, the final section of the paper will give some qualitative arguments that the mindspace consisting of instances of the OpenCog AGI architecture (which my colleagues and I have been developing, aiming toward the ultimate goal of AGI at the human level and beyond) very likely possesses the properties needed to enable a hard takeoff. If so, this is theoretically important, as an “existence argument” that hard-takeoff-capable AGI architectures do exist – i.e., as an argument that the Hard Takeoff Hypothesis is a plausible one.
