Extendible AI: the economic perspective
Many formulations of the singularity argument are contingent on the existence of “extendible” methods for Artificial Intelligence (AI) [Chalmers, 2010]. One of the most promising extendible AI methods is that of multi-agent systems [Weiss, 1999], a field of distributed AI [Minsky, 1988]. Superficially this approach appears directly extendible, since in some cases we can increase the intelligence of the system simply by increasing the number of agents. However, to make these methods truly extendible we will need to scale such systems up to many billions of components, and building multi-agent systems on that scale entails solving many complex economic problems [Clearwater, 1996].
These problems relate to the difficulty of solving coordination problems: that is, achieving global cooperation between rational agents, each of which attempts to solve a local optimisation problem. This appears to have been one of the central themes in the evolution of life on our planet. Complexity in nature emerged from a series of “major transitions” in the units of selection: genes cooperated to form regulatory networks; cells emerged from these networks, multi-cellular organisms from cells, and societies from organisms [Maynard Smith and Szathmary, 1995]. There is an economic aspect to these major transitions from lower-level to higher-level selection:
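The tension between local rationality and global cooperation can be sketched with a toy N-player public-goods game, a standard model from this literature (the parameter values below are purely illustrative). Each agent chooses whether to contribute one unit to a common pot; contributions are multiplied by a factor r (with 1 < r < N) and shared equally, so defection is individually rational even though universal cooperation maximises everyone's payoff:

```python
# Toy N-player public-goods game: each agent contributes 0 or 1 unit.
# The pot is multiplied by r (1 < r < n) and shared equally among all
# n agents, so each agent keeps only r/n of its own contribution.

def payoff(contributes: bool, others_contributing: int, n: int, r: float) -> float:
    """Payoff to one agent given its own choice and the others' contributions."""
    pot = r * (others_contributing + (1 if contributes else 0))
    return pot / n - (1 if contributes else 0)

n, r = 10, 3.0
others = 5  # whatever the others do...
assert payoff(False, others, n, r) > payoff(True, others, n, r)   # ...defection pays more,
assert payoff(True, n - 1, n, r) > payoff(False, 0, n, r)         # yet all-cooperate beats all-defect
```

Because r/n < 1, defection is a dominant strategy for every agent, so rational local optimisation drives the group to the inefficient all-defect outcome; this is exactly the conflict that the higher-level regulatory mechanisms discussed below must resolve.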
“These transitions in the units of selection share two common themes: the emergence of cooperation among the lower-level units in the functioning of the new higher-level unit, and the regulation of conflict among the lower-level units.” [Michod, 1999, p. 7]

Of particular relevance to the acceleration debate is how the likelihood of cooperative outcomes depends on the size of the entity. For example, many social species living in groups have limits on the size of their group. In some cases these optimal group sizes emerge from competing tradeoffs in an ecological niche: that is, there are both costs and benefits to living in a large group, and the exact trade-offs determine an optimal group size [Charnov and Orians, 1979]. Primates are particularly interesting because mean group size is strongly correlated with neocortex ratio [Dunbar, 1998]: the larger the neocortex relative to the rest of the brain, the larger the species’ mean group size. Dunbar [1996] conjectures that this is because living in large groups requires animals to solve complex economic problems, for example negotiation over the spoils of cooperative hunting, and the larger the group the more complex these problems become. Historically, our own species had the largest neocortex ratio of the primates as well as the largest group size (estimated from hunter-gatherer studies). However, our group size was still relatively small by today’s standards: the mean size of a hunter-gatherer tribe was approximately 150 people – a figure which has become known as “Dunbar’s Number” in the popular literature.
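The cost-benefit logic behind an optimal group size can be illustrated with a minimal numerical sketch. The functional forms and parameters below are hypothetical, chosen only to capture a per-individual benefit of cooperation that saturates while the per-capita cost of conflict and competition keeps growing:

```python
import math

# Hypothetical per-individual fitness in a group of size n: benefits of
# cooperation show diminishing returns (logarithmic), while costs of
# within-group conflict grow roughly linearly, so net fitness peaks at
# an intermediate group size.

def net_fitness(n: int, a: float = 10.0, k: float = 1.0) -> float:
    benefit = a * math.log(1 + n)   # saturating returns to cooperation
    cost = k * n                    # per-capita cost of conflict/competition
    return benefit - cost

optimal = max(range(1, 100), key=net_fitness)
```

With these illustrative parameters the net fitness is single-peaked, so an interior optimum exists; shifting the trade-off (a larger benefit coefficient, or a lower cost of regulating conflict) moves the optimum outward, which is the sense in which institutional innovation can relax a group-size limit.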
One of the most interesting aspects of our species is that we have overcome this group-size limit, partly through economic innovation – the invention of trade, markets and speculation – and partly through other social institutions such as law enforcement. In fact the two types of infrastructure are interrelated: you cannot have property rights without law enforcement. De Long [1998] estimates world GDP from one million years B.C. to the present day and finds an explosion of exponential growth coinciding with the industrial revolution. I conjecture that this explosive increase in GDP was driven not just by technical innovation but also by socio-economic innovation: for example, the invention of limited-liability companies and the rise of capital markets. These socio-economic innovations have enabled humans to live in vast interconnected groups (societies) which dwarf the group sizes of other social species on our planet [Seabright, 2010].
If this conjecture is true, and moreover infrastructure for enabling cooperation turns out to be a necessary condition for extendible distributed AI, then the conditions for acceleration of both artificial and natural species are aligned. Artificial higher intelligence will not be able to produce high-tech innovation and world-class research without solving the same fundamental economic problems that we face in our own society: viz., the allocation of scarce resources (*) amongst distributed agents. In other words, explosive growth in intelligence cannot be achieved without an explosive growth in wealth; the two types of acceleration may amount to the same thing.
In this chapter, I will review models of the evolution of cooperation which demonstrate the necessary conditions for extendible group sizes and hence extendible distributed AI.
(*) Note that resources in distributed computing are subject to the same laws of supply and demand that govern tangible commodities in our own society. For example, information-processing capacity is constrained by available energy per unit time (power). Since energy is conserved, processing power is finite and has to be distributed carefully to competing nodes in a distributed computing architecture whose individual requirements for processing cycles may be boundless in the absence of constraints.
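One of the simplest mechanisms studied in the market-based control literature [Clearwater, 1996] is proportional-share allocation, in which a fixed budget of processing power is divided among competing nodes in proportion to their bids. The sketch below is illustrative only (the node names, bids and budget are invented), but it shows how a finite resource can be priced out among agents whose demands would otherwise be unbounded:

```python
# Proportional-share allocation: a fixed power budget (the scarce resource)
# is divided among nodes in proportion to their bids, so each node receives
# budget * bid_i / sum(bids). Demand may be unbounded; supply is not.

def allocate(bids: dict[str, float], budget: float) -> dict[str, float]:
    total = sum(bids.values())
    return {node: budget * bid / total for node, bid in bids.items()}

shares = allocate({"A": 3.0, "B": 1.0, "C": 1.0}, budget=100.0)
# shares == {"A": 60.0, "B": 20.0, "C": 20.0}
```

However greedy each node is, the allocations always sum to exactly the budget; raising one's bid raises one's own share only by diluting everyone else's, which is what makes the bid a meaningful price signal.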
References
D. J. Chalmers. The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17:7–65, 2010.
E. L. Charnov and G. H. Orians. Optimal foraging: some theoretical explorations. 1979.
S. H. Clearwater, editor. Market-Based Control: A Paradigm for Distributed Resource Allocation. World Scientific Publishing Company, March 1996. ISBN 9810222548.
J. B. De Long. Estimating World GDP, One Million B.C. to Present, 1998.
R. Dunbar. Grooming, Gossip and the Evolution of Language. Faber and Faber, 1996.
R. Dunbar. The Social Brain Hypothesis. Evolutionary Anthropology, 6:178–190, 1998.
J. Maynard Smith and E. Szathmáry. The Major Transitions In Evolution. Oxford University Press, 1995.
R. E. Michod. Darwinian Dynamics: Evolutionary Transitions in Fitness and Individuality. Princeton University Press, 1999.
M. Minsky. The Society of Mind. Simon & Schuster, March 1988. ISBN 0671657135.
P. Seabright. The company of strangers. Princeton University Press, 2010.
G. Weiss, editor. Multi-Agent Systems: A Modern Approach to Distributed Artificial Intelligence. MIT Press, 1999.