The only way of discovering the limits of the possible is to venture a little way past them into the impossible (Arthur C. Clarke's 2nd law)
Showing posts with label intelligence-explosion.

Friday, 1 July 2011

The Hard Takeoff Hypothesis (extended abstract)

Ben Goertzel, Novamente LLC

The Hard Takeoff Hypothesis

Vernor Vinge, Ray Kurzweil and others have hypothesized the future occurrence of a “technological Singularity” -- meaning, roughly speaking, an interval of time during which pragmatically-important, broad-based technological change occurs so fast that the individual human mind can no longer follow what’s happening even generally and qualitatively. Plotting curves of technological progress in various areas suggests that, if current trends continue, we will reach some sort of technological Singularity around 2040-2060.
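A minimal sketch of what this sort of extrapolation amounts to, using invented progress figures and a simple exponential fit (the data points, the growth rate, and the threshold below are assumptions chosen purely for illustration, not measurements from any of the cited trend studies):

# Toy extrapolation of an exponential progress curve (illustrative only).
# The data points, growth rate, and "threshold" are invented for this sketch.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2010])
progress = np.array([1.0, 3.2, 10.5, 33.0, 105.0])   # hypothetical capability index

# Fit log(progress) = a*year + b, i.e. assume an exponential trend.
a, b = np.polyfit(years, np.log(progress), 1)

# Extrapolate forward to the year at which the trend crosses an arbitrary threshold.
threshold = 1.0e6
crossing_year = (np.log(threshold) - b) / a

print(f"doubling time: {np.log(2) / a:.1f} years")
print(f"trend crosses the threshold around {crossing_year:.0f}")

Whether the real curves continue, saturate, or accelerate is exactly what is at issue; the sketch only shows the mechanics of reading a date off an extrapolated trend.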

Monday, 28 March 2011

Can machine learning bring about an intelligence explosion? (Extended abstract)

Itamar Arel, Department of Electrical Engineering and Computer Science, The University of Tennessee

Reward-Driven Learning and the Threat of an Adversarial Artificial General Intelligence Singularity 

A wealth of evidence supports the notion that mammalian learning is driven by rewards. Recent findings from cognitive psychology and neuroscience strongly suggest that much of human behavior is propelled by both positive and negative feedback received from the environments with which we interact. The notion of reward is not limited to indicators originating from a physical environment; it also embraces signaling generated internally in the brain by intrinsic cognitive processes. Artificial General Intelligence (AGI), coarsely viewed as human-level intelligence manifested on non-biological platforms, is commonly perceived as one of the paths that may lead to the singularity. Such a path has the potential to be either beneficially transformative or devastating to the human race, depending to a great extent on the very nature of the AGI.
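As a minimal sketch of the reward-driven learning idea invoked here (this is not Arel's architecture; the environment, the intrinsic novelty bonus, and every parameter below are assumptions made purely for illustration), a simple agent can update its action values from a combination of extrinsic feedback and an internally generated signal:

# Minimal reward-driven learner: an epsilon-greedy bandit whose updates are
# driven by extrinsic reward plus a small intrinsic "novelty" bonus.
# All rewards, bonuses, and parameters are invented for illustration.
import random

n_actions = 3
true_reward = [0.2, 0.5, 0.8]        # hypothetical environment payoffs
values = [0.0] * n_actions            # learned action-value estimates
counts = [0] * n_actions
epsilon, alpha = 0.1, 0.1             # exploration rate, learning rate

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda i: values[i])

    extrinsic = random.gauss(true_reward[action], 0.1)   # feedback from the environment
    intrinsic = 1.0 / (1 + counts[action])               # internally generated novelty signal
    reward = extrinsic + 0.1 * intrinsic

    counts[action] += 1
    values[action] += alpha * (reward - values[action])  # reward-driven update

print([round(v, 2) for v in values])   # estimates should come to favour the highest-payoff action

The point of the sketch is only that a single scalar feedback signal, whether external or internally generated, suffices to drive the value estimates toward the best available action.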

Monday, 31 January 2011

The Intelligence Explosion (extended abstract)

Anna Salamon, Singularity Institute for Artificial Intelligence
Nick Bostrom, Future of Humanity Institute, Oxford University

The Intelligence Explosion: Evidence and Import

In this century, broadly human-level artificial intelligence may be created [1][2][4][7][12]. Shortly thereafter, we may see what I.J. Good termed an “intelligence explosion” -- a chain of events by which human-comparable artificial intelligence leads, fairly rapidly, to artificially intelligent systems whose capabilities far surpass those of biological humanity as a whole [5].

We aim to sketch, as briefly as possible: (1) what assumptions might make an “intelligence explosion” plausible; (2) what empirical evidence supports these assumptions; and (3) why it matters.
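One way to see why such a chain of events could be fast is a toy growth model in which capability improves at a rate that depends on the current capability level; the growth law, feedback coefficient, and stopping threshold below are assumptions made only to illustrate the dynamic, not the authors' model:

# Toy model of recursive self-improvement: capability grows at a rate that
# itself depends on current capability. Functional form and parameters are
# assumptions for illustration, not a prediction.
capability = 1.0          # start at a stylised human-comparable level
feedback = 0.05           # how strongly capability accelerates its own growth
years = 0.0
step = 0.1                # simulation time step, in years

while capability < 1000 and years < 200:
    growth_rate = feedback * capability ** 1.5   # superlinear returns on capability
    capability += growth_rate * step
    years += step

print(f"capability reached {capability:.0f}x after roughly {years:.1f} years")

With superlinear returns the curve runs away in finite time; with sublinear returns it does not, which is one way of phrasing the assumptions the abstract proposes to examine.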

Wednesday, 29 December 2010

Human intelligence, superintelligence, and intelligence explosion (Alan Turing)

Jack Copeland holds (in Alan Turing and the Origins of AI) that the earliest substantial work in artificial intelligence was done by Turing himself. Turing's vision of machine learning, articulated as early as 1951, is a case in point:
If the machine were able in some way to 'learn by experience'... there seems to be no real reason why one should not start from a comparatively simple machine, and, by subjecting it to a suitable range of experience, transform it into one which was more elaborate, and was able to deal with a far greater range of contingencies. ['Intelligent Machinery, A Heretical Theory' BBC 1951]
Can we also find in Turing's work his opinion about the prospects of 'human-level' machine intelligence, superintelligence, and even a process akin to an intelligence explosion?
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. ['Computing Machinery and Intelligence' Mind 1950]