Anna Salamon, Singularity Institute for Artificial Intelligence
Nick Bostrom, Future of Humanity Institute, Oxford University
The Intelligence Explosion: Evidence and Import
In this century, broadly human-level artificial intelligence may be created [1][2][4][7][12]. Shortly thereafter, we may see what I.J. Good termed an “intelligence explosion” -- a chain of events by which human-comparable artificial intelligence leads, fairly rapidly, to artificially intelligent systems whose capabilities far surpass those of biological humanity as a whole. [5]
We aim to sketch, as briefly as possible: (1) what assumptions might make an “intelligence explosion” plausible; (2) what empirical evidence supports these assumptions; and (3) why it matters.
I. Precedents for radical speed-ups
The case for an “intelligence explosion” does not rely on any particular model of how technological change is progressing, what specific technologies will be available at specific dates, or whether Moore’s law will continue. It also does not rely on analogies between artificial intelligence and evolution or human technological change. Nevertheless, before considering the future, it is worth taking a quick look at past rates of change.
On an evolutionary timescale, the rise of Homo sapiens from our last common ancestor with the great apes happened rather swiftly, over the course of a few million years. Some relatively minor changes in brain size and neurological organization led to a great leap in cognitive ability. Humans can think abstractly, communicate complex thoughts, and culturally accumulate information over the generations far better than any other species on Earth.
These capabilities let humans develop increasingly efficient productive technologies, which made it possible for our ancestors to greatly increase population densities and total population. More people meant more ideas; greater densities meant that ideas could spread more readily and that some individuals could devote themselves to developing specialized skills. These developments increased the rate of growth of economic productivity and technological capacity. Later developments, associated with the Industrial Revolution, brought about a comparable step change in the rate of growth.
Such changes in the rate of growth have important consequences. A few hundred thousand years ago, in early human (or hominid) prehistory, growth was so slow that it took on the order of one million years for human productive capacity to grow enough to sustain an additional one million individuals living at subsistence level. By 5000 B.C., after the agricultural revolution, it took just two centuries to add that much output. Today, after the Industrial Revolution, the world economy grows by that amount on average every ninety minutes. [6]
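To make the size of these step changes concrete, the following back-of-envelope sketch (in Python) takes the round figures above at face value -- one million years, two centuries, and ninety minutes per additional million people's worth of output -- and computes the implied rates of growth. The figures are illustrative, not precise historical estimates.

# Back-of-envelope comparison of growth rates across eras, taking the
# round figures in the paragraph above at face value (illustrative only).
MINUTES_PER_YEAR = 365.25 * 24 * 60

time_to_add_a_million = {  # time to add output sustaining one million people, in minutes
    "early prehistory": 1_000_000 * MINUTES_PER_YEAR,         # ~one million years
    "after agriculture (5000 B.C.)": 200 * MINUTES_PER_YEAR,  # ~two centuries
    "today": 90.0,                                             # ~ninety minutes
}

for era, minutes in time_to_add_a_million.items():
    people_per_year = 1_000_000 * MINUTES_PER_YEAR / minutes
    print(f"{era}: ~{people_per_year:,.0f} additional people's worth of output per year")

# Ratio between the slowest and fastest eras: a speed-up of several billion-fold.
print(time_to_add_a_million["early prehistory"] / time_to_add_a_million["today"])

On these figures, the rate of output growth has risen by a factor of several billion since early prehistory, which is the sense in which each transition was a step change in the growth rate itself.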
These previous speed-ups do not show that artificial intelligence will act on a faster time-scale than technology moves today. But they do tell us that, although technological improvement today takes years or decades, there is no guarantee that innovation will proceed similarly slowly after the invention of digital intelligence.
II. Digital intelligence may improve rapidly, once it gets started
Humans are the first intelligences sophisticated enough to produce technological civilization; it seems far more likely that we are simply the first such intelligence that happened to evolve than that we are near the ceiling of possible intelligences.
How rapidly might AI design proceed after the first human-level AIs are invented? While the unknowns are large, four major factors suggest that the transition may be fairly rapid:
1. Software can be run at high serial speeds. While biological minds run at a fixed rate, software minds could be ported to any available hardware, and can therefore think more rapidly when faster hardware becomes available. This fact is particularly important if, as most current AI researchers suspect, the bottleneck in AI is software rather than hardware. If AI is indeed software-limited, then by the time the software is designed there may be hardware sufficient to run AIs at far greater than human speed. [11]
2. Software can be copied. Designing the first AI requires research, but once the software has been built, creating additional AI instances is a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, either through purchase (until the economic product of a new AI is less than the cost of the necessary computation) or through other means, e.g. hacking.
3. AIs can be edited and tested with the speed and precision of editing software. Even within the narrow space of human minds, there is significant variation in scientific research ability; AI variants can be developed and tested far more rapidly, and drawn from a much larger space of possible designs.
4. Recursive self-improvement. Once an AI becomes better at AI design work than the team of programmers that brought it to that point, a positive feedback loop may ensue: when the AI improves itself, it also improves the thing that does the improving. Thus, if mere human efforts suffice to produce human-level AI this century, a large population of sped-up AIs may be able to set off a rapid cascade of self-improvement cycles, enabling a rapid transition (a toy numerical illustration of this feedback follows this list).
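The feedback loop in point 4 can be illustrated with a deliberately simple toy model, sketched below under arbitrary assumptions (an abstract "capability" scale and a fixed 5% gain per cycle). It is meant only to show the qualitative difference between improvement at a constant rate and improvement that compounds because the improver itself improves, not to predict real timescales.

# Toy model: constant-rate improvement vs. compounding self-improvement.
# "capability" is an abstract measure of AI design ability; all parameter
# values are arbitrary illustrative assumptions, not predictions.

def human_driven(capability, cycles, step=0.05):
    """The human team adds a fixed increment each design cycle."""
    for _ in range(cycles):
        capability += step
    return capability

def self_improving(capability, cycles, gain=0.05):
    """Each cycle's gain scales with current capability: improving the improver."""
    for _ in range(cycles):
        capability += gain * capability
    return capability

for cycles in (10, 50, 100):
    print(cycles,
          round(human_driven(1.0, cycles), 2),    # grows linearly
          round(self_improving(1.0, cycles), 2))  # grows exponentially

Under these assumptions the compounding case pulls away from the linear case within a modest number of cycles; the point is only the shape of the curve, not its speed.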
Thus, AI development need not proceed on the time-scale we are used to from human technological innovation. In fact, the range of scenarios in which takeoff isn’t fairly rapid appears comparatively small, although non-negligible.
III. AIs as optimizers
A wide range of initial AI designs, if developed to the point of recursive self-improvement, will act to optimize some ‘utility function’.
There are two basic reasons for this. The first is that optimizing systems are useful; even today, we build many systems that evaluate the predicted outcome of potential actions, and select an action to minimize expected cost, or maximize expected reward/utility. As systems become smarter, the range of actions and considerations that can usefully be handed off to the systems expands. The second reason is that powerful optimizing systems form a stable attractor, in the sense that a system that is selecting actions to increase ‘X’ will, all else equal, choose to build more powerful systems that optimize for ‘X’. [9][10]
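As a concrete illustration of the first reason -- systems that evaluate the predicted outcomes of candidate actions and pick the one with the highest expected utility -- here is a minimal sketch in Python. The actions, outcome probabilities, and utility values are hypothetical and chosen only for illustration.

# Minimal expected-utility action selection: evaluate the predicted outcomes
# of each candidate action and choose the action with the highest expected
# utility. The model below (actions, probabilities, utilities) is made up.

def expected_utility(outcomes, utility):
    """outcomes: a list of (probability, outcome) pairs."""
    return sum(p * utility(o) for p, o in outcomes)

def choose_action(action_models, utility):
    """Pick the action whose predicted outcomes maximize expected utility."""
    return max(action_models, key=lambda a: expected_utility(action_models[a], utility))

# Hypothetical example: a release-scheduling system weighing two actions.
action_models = {
    "ship_now": [(0.7, "on_time"), (0.3, "buggy_release")],
    "delay":    [(0.9, "late_but_stable"), (0.1, "on_time")],
}
utility = {"on_time": 10, "buggy_release": -20, "late_but_stable": 4}.get

print(choose_action(action_models, utility))  # -> "delay" under these made-up numbers

A system built this way steers outcomes toward whatever its utility function ranks highly, which is the sense of "optimizer" used in this section.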
Moreover, since many different goals may serve as useful proxies in initial AI designs, and since AIs with many different goals would actively mimic safe benevolence while still weak, it is plausible that the first powerfully self-improving AI will have goals that are somewhat haphazard from a human point of view -- chosen for engineering convenience rather than for their desirability.
IV. Risks
From plastics to skyscrapers, humans have rearranged much of our surroundings. We paved over forests, not because we hated trees, but because we had other uses for the land.
The more powerful an artificial intelligence is, the better it will be able to imagine rearrangements of matter, and paths from its current abilities to those rearrangements, that better suit its goals. Absent careful AI design to avoid such an outcome, a sufficiently powerful AI optimizer would therefore be likely to rearrange our world so drastically that it could no longer support human life -- not because it hates us, but simply because humans, and the natural resources we require to live, are unlikely to be the arrangement of matter that best suits its goals. AI thus poses significant existential risks. [3][4][12][13]
V. Upside potential
Intelligence is the bottleneck for a huge number of problems that affect human welfare, such as disease, long-term nuclear and other technological risks, education, and the ability to lead rich, fulfilling lives. We should not overlook the upside potential of stable, smarter-than-human intelligence. [13]
VI. The need for research
We close with a brief review of the many avenues by which theoretical and empirical research today could improve our understanding of how to reduce long-term catastrophic risks from artificial intelligence. We note both avenues for technical research into stably human-friendly AI designs and avenues for social-science research into methods for decreasing the odds of a safety-impairing “arms race” during the development of artificial intelligence.
References:
[1] Bainbridge, W. (2005). Managing nano-bio-info-cogno innovations: Converging technologies in society. Washington, D.C: Springer.
[2] Baum, S., Goertzel, B., & Goertzel, T. “How long until human-level AI? Results from an expert assessment”. Technological Forecasting and Social Change (forthcoming). <http://sethbaum.com/ac/fc_AI-Experts.pdf>.
[3] Bostrom, N. “Existential risks: analyzing human extinction scenarios and related hazards” (2002). Journal of Evolution and Technology, 9, <http://www.nickbostrom.com/existential/risks.html>
[4] Chalmers, D. (2010). “The singularity: a philosophical analysis”. <http://consc.net/papers/singularity.pdf>.
[5] Good, I. J., “Speculations concerning the first ultraintelligent machine”, Franz L. Alt and Morris Rubinoff, ed., Advances in computers (Academic Press) 6: 31–88, (1965) <http://www.acceleratingfuture.com/pages/ultraintelligentmachine.html>.
[6] Hanson, R. “Long-term growth as a series of exponential modes” (2007). <http://hanson.gmu.edu/longgrow.pdf>
[7] Legg, S. (2008). Machine super-intelligence. PhD Thesis. Lugano, Switzerland: IDSIA.
[8] International Technology Roadmap for Semiconductors, “International technology roadmap for semiconductors, 2007 edition” 2007. <http://www.itrs.net/Links/2007ITRS/Home2007.htm>
[9] Omohundro, S., “The basic AI drives.” (2008) Proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008. <http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/>
[10] Omohundro, S., “The nature of self-improving artificial intelligence.” <http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/>
[11] Shulman, C. & Sandberg, A. “Implications of a software-limited singularity” (2010) Proceedings of the European Conference of Computing and Philosophy.
[12] Vinge, V. “The coming technological singularity”, Whole Earth Review, New Whole Earth LLC, March 1993. <http://www.accelerating.org/articles/comingtechsingularity.html>
[13] Yudkowsky, E. (2008). “Artificial intelligence as a positive and negative factor in global risk” In Bostrom, Nick and Ćirković, Milan M. (eds.), Global catastrophic risks, pp. 308–345 (Oxford: Oxford University Press). <http://singinst.org/upload/artificial-intelligence-risk.pdf>