On Consciousness and the Animal Kingdom

There is a lot that Kurzweil has to say that I find suspect – his rapturous belief in the imminent arrival of meaningful machine intelligence, in particular, is something I find hard to believe, as neat as it might be. But his discussion of consciousness, and in particular his reference to the ‘consciousness’ of whales and elephants, did get me thinking about the way we tend to define a potential machine consciousness.

There are a couple of ways that we tend to define things as being, at least, alive – as Andrew discusses, there’s a will to survive and reproduce, which has been the subject of probably every pulpy thriller written about machine intelligence. But beyond those basic attributes, we tend to couch our conception of an intelligent machine in explicitly human terms. Why would this need to be the case? In this regard, the Turing test has always struck me as a little ridiculous. If we were to make extraterrestrial contact with a race of space-faring slug-people, would the only sign of intelligence we accept be an alien’s ability to convincingly mimic a human in what amounts to a game played at a dinner party? I’d assert probably not.

I understand that the concept of machine vs. alien intelligence is a little different – we designed the machine, so the behavior of machines is (maybe) more obvious in its predetermination than that of anything else we have cause to interact with. But I wonder what evidence it would take for us to conclude that, say, dolphins were conscious. Do signing gorillas count as conscious beings? Do signing machines? Where do we draw the line, and why?

What Kurzweil misses

I like the contrast in this week’s readings. I have read Kurzweil before and have always felt he was too overwhelmingly positive about technological advancement, particularly in his confident belief in the possibilities of AI. I simply don’t buy it. At least in my lifetime, we will never have to question whether we’ve achieved artificial intelligence or not; the goal post is much farther off than he supposes.

There are three aspects of life Kurzweil overlooks in his analysis of consciousness and intelligence. The first is the will to survive. Human beings, in fact all life, share a will to survive and postpone death for as long as possible. This will is more than just a programmable response; it overrides everything else and in many ways defines life.

The second is the will to reproduce. All organisms desire to reproduce themselves, and without this will their kind would cease to be. This could possibly be programmed into a machine, but I would not consider a machine conscious until it desired its own reproduction.

The third is the journey: the training, the learning mechanism. Even if a machine could write this blog article, it would not have become a writer of blog articles in the same way I did. Its journey would have been different. This difference may be immaterial to determining consciousness or intelligence, but it is one that will never be reconciled between humans and machines.

What do folks think of these three aspects? Are they irrelevant since we can program a machine to emulate them? Can they never be reproduced in machines?