The only way of discovering the limits of the possible is to venture a little way past them into the impossible (Arthur C. Clarke's 2nd law)

Saturday 2 July 2011

The slowdown hypothesis (extended abstract)

Alessio Plebe and Pietro Perconti, University of Messina

The so-called singularity hypothesis embraces the most ambitious goal of Artificial Intelligence: the possibility of constructing human-like intelligent systems. The intriguing addition is that once this goal is achieved, it would not be too difficult to surpass human intelligence. A system cleverer than humans should also be better at designing new systems, leading to a recursive loop towards ultraintelligent systems (Good, 1965), with an acceleration reminiscent of a mathematical singularity (Vinge, 1993).

Back when AI suffered from a significant lack of results relative to the claims put forth by some of its most fervent enthusiasts, and faced strong philosophical criticism (Searle, 1980; Dreyfus & Dreyfus, 1986), skepticism spread about the possibility of its achieving its main goal, leading to a loss of interest in the singularity hypothesis as well. Our opinion is that, despite the limited success of AI, progress in the understanding of the human mind, coming especially from current neuroscience, leaves open the possibility of designing intelligent machines. We also believe that none of the philosophical objections against strong AI is really compelling.

This, however, is not our main point. What we will address instead is the issue of a singularity scenario associated with the achievement of human-like systems. In this respect, our view is skeptical. Reflection on the recent history of neuroscience and AI suggests to us that trends are going in the opposite direction. We will analyze a number of cases that share a common pattern of discovery: important achievements in simulating aspects of human behavior become, on the one hand, examples of progress and, on the other, points of slowdown, by revealing how complex the overall functions are of which they are just components. There is no knockdown argument for holding that the slowdown effect is intrinsic to the development of intelligent artificial systems, but so far there is good empirical evidence for it. Furthermore, the same pattern seems to characterize recent inquiry concerning the core notion of intelligence.
  • The discovery of the receptive fields of cells in the V1 cortical area (Hubel & Wiesel, 1959) was a major breakthrough in the understanding of the visual system, and these fields have been successfully simulated in mathematical models (Marr & Hildreth, 1980; the first sketch after this list illustrates the computation). There was confidence that this achievement would be a first step towards artificial vision comparable to that of humans. In the forty years that have followed there has been no similar discovery; moreover, the effect of this achievement has been to focus research mainly on V1. Today it is clear that the computation done by V1 is but a small fraction, and the simplest, of that involved in the whole process of vision; beyond V1, almost nothing is known or has been rigorously simulated (Plebe, 2008).
  • A puzzle in the early era of neural computation was the simulation of language, which requires syntactic processing. Elman (1990) made another breakthrough with his recurrent network, which exhibited syntactic and semantic abilities (the second sketch below shows its architecture). It was a toy model with a vocabulary of just a few words; nevertheless, it was presumed at the time that it would open the road to fast progress in simulating language. In the twenty years that have followed, no other model has achieved results comparable to Elman's. Minor improvements were gained at the price of much more complex systems (Miikkulainen, 1993).
  • The biggest success in the mathematical modeling of brain functions has been the Hodgkin-Huxley (HH) model of membrane excitation (Hodgkin & Huxley, 1952; the third sketch below integrates its core equations). Decades later a powerful simulator became available, based on the core equations of the HH model (Wilson & Bower, 1989). Oddly enough, no mathematical model of similar importance for the brain has been developed since, and the most important phenomena at the cellular level, such as synaptic transmission and dendritic growth, still lack a mathematical model of comparable power.
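The first sketch is a minimal rendering of the V1-style computation that the Marr-Hildreth model formalizes: Laplacian-of-Gaussian filtering followed by zero-crossing detection. It is an illustration under our own assumptions, not the authors' code; the function name, parameter values, and toy image are ours.

```python
# Minimal sketch of Marr-Hildreth edge detection: smooth with a
# Gaussian, take the Laplacian (done in one pass by the LoG filter),
# then mark zero crossings of the response.
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth_edges(image, sigma=2.0, eps=1e-6):
    """Boolean map of significant zero crossings of the LoG response."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    sign_change = np.zeros_like(log, dtype=bool)
    # A zero crossing: the LoG response changes sign between
    # vertically or horizontally adjacent pixels.
    sign_change[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    sign_change[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    # Ignore numerically flat regions where tiny noise flips the sign.
    return sign_change & (np.abs(log) > eps * np.abs(log).max())

# Toy example: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(marr_hildreth_edges(img).sum(), "edge pixels found")
```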
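The second sketch shows the architecture of an Elman-style simple recurrent network: the hidden layer at time t is copied back as a "context" input at time t+1, which is what let the network pick up sequential, syntax-like structure. The tiny vocabulary and random weights here are our illustrative assumptions; Elman trained such a network by backpropagation on a word-prediction task.

```python
# Minimal sketch of an Elman-style simple recurrent network (forward
# pass only). Toy lexicon and untrained random weights are ours.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["boy", "girl", "sees", "dog", "cat"]   # toy lexicon
n_in, n_hid = len(vocab), 8

W_in = rng.normal(0, 0.5, (n_hid, n_in))    # input -> hidden
W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))  # context -> hidden (recurrence)
W_out = rng.normal(0, 0.5, (n_in, n_hid))   # hidden -> output

def step(word_idx, context):
    """One time step: consume a word, return new hidden state and
    a probability distribution over the next word."""
    x = np.zeros(n_in); x[word_idx] = 1.0      # one-hot input word
    h = np.tanh(W_in @ x + W_ctx @ context)    # hidden state uses context
    y = np.exp(W_out @ h); y /= y.sum()        # softmax over vocabulary
    return h, y

context = np.zeros(n_hid)                      # context starts empty
for w in ["boy", "sees", "dog"]:
    context, pred = step(vocab.index(w), context)
    print(f"after '{w}': most likely next word =", vocab[int(pred.argmax())])
```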
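The third sketch integrates the core HH membrane equation with forward Euler, using the standard squid-axon parameters commonly associated with the 1952 model (in the modern convention with rest near -65 mV); the time step, stimulus, and printout are our choices.

```python
# Minimal sketch of the Hodgkin-Huxley membrane equation, integrated
# with forward Euler. Classic squid-axon parameters; step sizes ours.
import numpy as np

g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials (mV)
C_m = 1.0                             # membrane capacitance (uF/cm^2)

# Voltage-dependent rate functions for the gating variables m, h, n.
a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                    # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting initial conditions
for t in np.arange(0, T, dt):
    I_ext = 10.0 if 5.0 <= t <= 45.0 else 0.0  # injected current (uA/cm^2)
    I_Na = g_Na * m**3 * h * (V - E_Na)        # sodium current
    I_K = g_K * n**4 * (V - E_K)               # potassium current
    I_L = g_L * (V - E_L)                      # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)  # gating kinetics
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
print(f"membrane potential after {T} ms: {V:.1f} mV")
```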
Another difficulty for the singularity hypothesis comes from the social nature of intelligence and the role played in it by consciousness. The kind of intelligence presupposed by the singularity hypothesis seems inspired by methodological solipsism, a widespread tenet in classical cognitive science. According to this way of thinking, the mind pertains to a given individual and consists of a set of skills that can be gradually amplified so as to exceed those of humans. But if we accept the major claims of current cognitive science, we are also driven to consider the mind a social and ecological creature. Externalists argue that to specify the content of many mental states one must take into consideration their reference rather than the manner in which they are given to the mind (Menary, 2010). According to theorists of embodied cognition, mental contents are determined by the way the body acts in the environment (Shapiro, 2011). Moreover, the success of accounts of social cognition, with their idea that a core ability is interpreting behavior as a consequence of the mental states of its performer, has shown the limits of solipsism (Tomasello, 2009). In sum, intelligence is no longer conceived as a merely individual property.

What impact do these considerations have on the singularity hypothesis? Here we have another instance of the above-mentioned slowdown effect. This is particularly evident in the case of consciousness. As our understanding of how consciousness works deepens, more and more new and difficult problems arise, such as the subjective quality of conscious experience and the first-person perspective of aware psychological states (Chalmers, 2010). On one side, consciousness is something that, intuitively speaking, an ultraintelligent and individual machine should have. In other words, consciousness is necessary for the singularity hypothesis. But, on the other side, it also serves as the basis for social cognition. When we engage in inner speech, silently drifting in our stream of consciousness, we reason in the same way as when we simulate another individual in order to predict his or her actions. High-level simulation is an activity of projection that, in order to take place, must have an inner space upon which to be based and from which to operate. Reflexive reasoning is the inner space from which high-level simulation proceeds in its attribution of intentions and its behavioral predictions. Since social cognition is indispensable for a human-like intelligence, and it requires the inner space of consciousness to take place, a sort of social consciousness is a fundamental characteristic of the intelligence we suppose future machines should develop.

References

Chalmers, D. (2010). The character of consciousness. Oxford (UK): Oxford University Press.
Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: The Free Press.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14, 179–221.
Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In F. L. Alt & M. Rubinoff (Eds.), Advances in computers (Vol. 6, pp. 31–88). New York: Academic Press.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117, 500–544.
Hubel, D., & Wiesel, T. (1959). Receptive fields of single neurones in the cat’s striate cortex. Journal of Physiology, 148, 574–591.
Marr, D., & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society of London, B207, 187–217.
Menary, R. (Ed.). (2010). The extended mind. Cambridge (MA): MIT Press.
Miikkulainen, R. (1993). Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge (MA): MIT Press.
Plebe, A. (2008). The ventral visual path: Moving beyond V1 with computational models. In T. A. Portocello & R. B. Velloti (Eds.), Visual cortex: New research (pp. 97–160). New York: Nova Science Publishers.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–424.
Shapiro, L. (2011). Embodied cognition. London: Routledge.
Tomasello, M. (2009). Why we cooperate. Cambridge (MA): MIT Press.
Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Proc. Vision-21: Interdisciplinary science and engineering in the era of cyberspace (pp. 11–22). Cleveland (OH): NASA Lewis Research Center.
Wilson, M. A., & Bower, J. M. (1989). The simulation of large-scale neural networks. In C. Koch & I. Segev (Eds.), Methods in neuronal modeling (pp. 291–333). Cambridge (MA): MIT Press.

2 comments:

  1. Your observations are interesting, but I disagree with the final one, about consciousness. You wrote,

    "On one side, consciousness is something that, intuitively speaking, an ultraintelligent and individual machine should have. In other words, consciousness is necessary for the singularity hypothesis."

    The logic here is that consciousness is hard, and therefore having to implement it will slow down implementing intelligence. That makes no sense. If consciousness is required for intelligence, then implementing intelligence will necessarily implement consciousness. If it is not, then it is not. Either way, it can't make building intelligent machines any harder.

    (I also doubt that Elman's "Finding structure in time" has the importance you credit it with. It appears to be a connectionist method for constructing a Markov model of language - which is not at all the same as syntax. The development of a semantic similarity metric is interesting, but is probably equivalent to clustering words by alphabet reduction or other methods. But I've only glanced at the paper.)

  2. Nowhere does Vinge define the Singularity as reminiscent of a mathematical singularity.
