the ability to create new explanations

14:30 | 05-10-2012 | AI, Literature, Philosophy

Or rather, here is what David Deutsch thinks about artificial intelligence:

Remember the significance attributed to the computer system in the Terminator films, Skynet, becoming “self-aware”?

That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI[1]. The fact is that present-day software developers could straightforwardly program a computer to have “self-awareness” in the behavioural sense – for example, to pass the “mirror test” of being able to use a mirror to infer facts about itself – if they wanted to.

<...>

Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

And then the main point:

The lack of progress in AGI is due to a severe log jam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. <...> Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Well, yes.

There is, by the way, much else of interest in the article. For example, on the eternal confrontation:

The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of “good” needs continual improvement.

Or on the brain's workings:

Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain’s unique functionality is beyond the scope of this article.

I doubt this will surprise anyone, but Neal Stephenson writes about exactly the same thing in “Anathem”.


  1. Artificial General Intelligence.
