The only way of discovering the limits of the possible is to venture a little way past them into the impossible (Arthur C. Clarke's 2nd law)

Wednesday 29 December 2010

Human intelligence, superintelligence, and intelligence explosion (Alan Turing)

Jack Copeland holds ('Alan Turing and the Origins of AI') that the earliest substantial work in artificial intelligence was done by Turing. Turing's vision of machine learning, articulated as early as 1951, illustrates Copeland's point:
If the machine were able in some way to 'learn by experience'... there seems to be no real reason why one should not start from a comparatively simple machine, and, by subjecting it to a suitable range of experience, transform it into one which was more elaborate, and was able to deal with a far greater range of contingencies. ['Intelligent Machinery, A Heretical Theory' BBC 1951]
Can we also find in Turing's work his views on the prospects of 'human-level' machine intelligence, superintelligence, and even a process akin to an intelligence explosion?
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. ['Computing Machinery and Intelligence' Mind 1950]
My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind. [1951]
Let us now assume ... that [machines which simulate human minds] are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do ... trying to keep one's intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's 'Erewhon'. [1951]
The last remark refers to Samuel Butler's novel Erewhon; the following passage from it may explain Turing's comment:
There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.
