
Tuesday, 12 July 2011

Models of a Singularity (extended abstract)

Models of different categories of technological singularity

Anders Sandberg, Future of Humanity Institute

The set of concepts today commonly referred to as “technological singularity” has a long history in the computer science community. Concerns about automated reasoning outpacing human reasoning can be found as early as Samuel Butler's Erewhon (1872); John von Neumann and Stanislaw Ulam conversed in the 1950s about how ever-accelerating progress would lead to “some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”; and I.J. Good in 1965 delineated the possibility of an “intelligence explosion”, in which sufficiently advanced artificial intelligence would rapidly improve itself.

The concept is used in a variety of contexts and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence; others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of them: even Vernor Vinge, whose seminal 1993 essay popularised the term, uses several meanings in it. Some of these meanings may imply each other, but often different elements are conflated that likely (but not necessarily) occur in parallel. This causes enough confusion and misunderstanding that some critics argue the term should be avoided altogether.

This chapter attempts a simple taxonomy of models of technological singularity, in the hope of disambiguating the different meanings of the term. It also briefly reviews formal quantitative models of singularity-like phenomena, aiming to promote a more rigorous discussion of these possibilities.

A brief list of meanings of the term “technological singularity” found in the literature, with some of their proponents:

A. Accelerating change. Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil, John Smart)

B. Self-improving technology. Better technology allows faster development of new and better technology. (Flake)

C. Intelligence explosion. Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good, Eliezer Yudkowsky)

D. Emergence of superintelligence.

E. Prediction horizon. Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vernor Vinge)

F. Phase transition. The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated-equilibrium transition, or the emergence of a new metasystem level. (Teilhard de Chardin, Valentin Turchin, Francis Heylighen)

G. Complexity disaster. Increasing complexity and interconnectedness bring increasing payoffs but also increasing instability. Eventually this produces a crisis, beyond which the dynamics must be different. (Didier Sornette, Geoffrey West)

H. Inflexion point. Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point, where change shifts from acceleration to deceleration. (Theodore Modis)

I. Infinite progress. The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible)

In addition to these general meanings, a singularity might be local or global (a capability take-off of a single entity or small group, versus a broad evolution of the whole economy), and fast or slow (occurring on computer timescales, hardware development timescales, human timescales, or historical timescales).
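
A standard observation helps separate meaning I from the rest (a textbook calculation, not specific to any author above): simple exponential growth dx/dt = kx gives x(t) = x0 e^(kt), which grows without bound but remains finite at every time, whereas increasing returns such as dx/dt = kx^2 give x(t) = x0/(1 - k x0 t), which diverges at the finite time t* = 1/(k x0). Meanings A-H can all be stated without any such mathematical singularity; only meaning I requires one.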

Various formal models have been proposed to describe several of these kinds of singularity:
  • Linear takeover (types D, F): Singular events can occur when one form of growth outpaces another. This form of “linear singularity” does not necessarily involve any acceleration of progress.

  • Logistic growth (type H): Growth is self-limiting and will eventually slow down from exponential. In this case the singularity denotes the inflexion point, the (historically very brief) transition from a pre-technological state to a maximally advanced state (see the sketch after this list).

  • Metasystem transition (type F): The evolutionary emergence of a higher level of organisation or control in a system, in which a number of systems become integrated into a higher-order system, producing a multi-level hierarchy of control. Metasystem transitions are typically modelled qualitatively rather than quantitatively, but at least one quantitative model with a finite-time singularity exists.

  • Economic input-output models (type A): These models depict economic growth as the result of different sectors interacting, building up a stock of goods that typically grows exponentially.

  • Endogenous growth models (types A, B, I): These model the growth of an economy with improving technology, where technology is assumed to grow as a function of the economy and of the resources allocated to it. They were developed in response to exogenous growth models, in which diminishing returns predict that growth will stop rather than continue; ironically, a major issue with endogenous models is that they tend to show superexponential growth, which is usually regarded as economically unrealistic. But the observation that finite-time (mathematical) singularities are generic when knowledge production has economies of scale can also be read as a prediction that singularity-like growth might occur in the real world (a minimal numerical illustration follows this list).

  • Law of accelerating returns (types A, B): Evolutionary processes favour more capable solutions to problems, making the returns of an evolutionary process (e.g. speed, cost-effectiveness) increase over time. As a particular evolutionary process becomes more efficient, more resources also become available to it, leading to doubly exponential growth.

  • Vinge/Moravec/Solomonoff models (types A, B, I): A set of models by Vernor Vinge, Hans Moravec and Ray Solomonoff that essentially correspond to endogenous growth models of technology and knowledge. Like the economic models, these generically show finite-time singularities (the numerical illustration below applies here too).

  • City economics (types A, G): A model due to Bettencourt et al. (2007) analysing the growth of cities, in which resources are used to maintain the existing structure and to extend it. Because of economies of scale there is a singularity, which is, however, limited by resource constraints. If these constraints can be overcome through innovation, the result is an accelerating series of crises/singularities leading to a final singularity (sketched below). While the model describes cities, the general structure seems applicable to any economic system with increasing returns.

  • Microeconomic models (type A): These models analyse the lead-up to a singularity by examining the constraints of capital requirements, available mental capital, and the transition from a human-dominated to a machine-dominated economy.
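
To make the logistic case concrete, here is a minimal Python sketch (a toy illustration with arbitrary parameter values, not fitted to any data) showing that growth is fastest at the inflexion point, where the variable reaches half its carrying capacity K:

    import numpy as np

    # Closed-form logistic solution of dx/dt = r*x*(1 - x/K).
    # Growth looks exponential early on but decelerates after the
    # inflexion point at x = K/2 (the "singularity" in Modis' sense).
    r, K, x0 = 0.5, 100.0, 1.0
    t = np.linspace(0.0, 30.0, 3001)
    x = K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))
    t_inflexion = t[np.argmin(np.abs(x - K / 2.0))]
    print(f"inflexion point near t = {t_inflexion:.2f}")  # analytically ln(99)/r, about 9.19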
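
The finite-time singularities of the endogenous growth and Vinge/Moravec/Solomonoff models can be checked numerically in the same way. In this toy version (again with invented parameters, not any author's calibration), knowledge A grows with increasing returns, dA/dt = a*A^phi with phi > 1, which diverges at the closed-form time t* = A0^(1-phi)/(a*(phi-1)):

    # Increasing returns (phi > 1) make dA/dt = a*A**phi blow up in finite
    # time; crude forward-Euler integration reproduces the closed-form t*.
    a, phi, A0 = 1.0, 1.5, 1.0
    t_star = A0 ** (1.0 - phi) / (a * (phi - 1.0))  # = 2.0 for these values

    A, t, dt = A0, 0.0, 1e-6
    while A < 1e12:  # "infinite" for practical purposes
        A += a * A ** phi * dt
        t += dt
    print(f"closed-form t* = {t_star:.3f}, numerical blow-up near t = {t:.3f}")

The contrast with the law of accelerating returns is worth noting: doubly exponential growth exp(exp(t)) is astonishingly fast but, like a simple exponential, remains finite at all times; only increasing returns of the phi > 1 kind yield a true mathematical singularity.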
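
The accelerating crisis sequence of the city-economics model can be sketched similarly. The code below uses the growth equation of Bettencourt et al. (2007), dN/dt = (Y0/E)*N^beta - (R/E)*N with beta > 1, but the crisis thresholds (each a factor of ten above the last) and all parameter values are invented for illustration; in the original model the crises arise from resource limits and are deferred by innovation:

    # Superexponential growth (beta > 1) reaches successive "crisis"
    # thresholds at shorter and shorter intervals.
    Y0, R, E, beta = 1.0, 0.5, 1.0, 1.2
    N, t, dt = 10.0, 0.0, 1e-4
    threshold, t_last = 100.0, 0.0
    for _ in range(5):
        while N < threshold:
            N += ((Y0 / E) * N ** beta - (R / E) * N) * dt
            t += dt
        print(f"N = {threshold:9.0f} reached at t = {t:5.2f} (interval {t - t_last:.2f})")
        t_last, threshold = t, threshold * 10.0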

Various attempts have been made to fit real data to these models, leading to predictions of when different singularity scenarios could occur. Their utility is debatable.

The most solid finding from the above models and fits is that even small increasing returns in a growth model (whether a model of economics, information or system size) can produce radical growth. Hence identifying feedback loops with increasing returns may be a way of detecting emerging type A singularities. The models also strongly support the conclusion that if mental capital (embodied in humans, artificial intelligences or posthumans) becomes relatively cheap to copy, extremely rapid growth is likely to follow. Hence observing progress towards artificial intelligence, brain emulation or other ways of increasing mental capital might provide evidence for or against technological singularities.
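
As a crude illustration of the last point (a toy calculation, not one of the chapter's models): if output Y = A*L depends on a stock of mental capital L, and a fraction s of output is reinvested in copying that capital at a cost c per copy, then dL/dt = s*A*L/c, so the economy grows exponentially at rate s*A/c, a rate that rises without limit as copying gets cheaper. If the copies also improve the technology factor A, the dynamics enter the increasing-returns regime sketched above, with its finite-time singularity.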
