On Consciousness and the Animal Kingdom

There is a lot that Kurzweil has to say that I find suspect – in particular, his rapturous belief in the imminent arrival of meaningful machine intelligence is something I find hard to believe, as neat as that might be. But his discussion of consciousness, and in particular his reference to the ‘consciousness’ of whales and elephants, did get me thinking about the way we tend to define a potential machine consciousness.

There are a couple of ways that we tend to define something as being, at the very least, alive – as Andrew discusses, there’s a will to survive and reproduce, which has been the subject of probably every pulpy thriller written about machine intelligence. But beyond those basic attributes, we tend to couch our conception of an intelligent machine in explicitly human terms. Why would this need to be the case? In this regard, the Turing test has always struck me as a little ridiculous. If we were to make extraterrestrial contact with a race of space-faring slug-people, would the only sign of intelligence we accept be an alien’s ability to convincingly mimic a human in what amounts to a game played at a dinner party? I’d assert probably not.
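Just to pin down what ‘the game’ actually involves, here is a minimal sketch of a Turing-test-style exchange. The judge loop and both respond functions are hypothetical stand-ins of my own, not any canonical implementation:

    import random

    def human_respond(prompt):
        # Stand-in for a hidden human typing an answer.
        return input("[hidden human] " + prompt + "\n> ")

    def machine_respond(prompt):
        # Stand-in for whatever program is trying to pass as human.
        return "That's an interesting question. What do you think?"

    def imitation_game(num_questions=5):
        """Return True if the machine fools the judge."""
        # Randomly assign the two respondents to the labels A and B,
        # so the judge cannot rely on a fixed ordering.
        respondents = [human_respond, machine_respond]
        random.shuffle(respondents)
        labels = dict(zip("AB", respondents))

        for _ in range(num_questions):
            question = input("[judge] Ask a question:\n> ")
            for label, respond in labels.items():
                print("[" + label + "] " + respond(question))

        guess = input("[judge] Which one is the machine (A/B)? ").strip().upper()
        return labels.get(guess) is not machine_respond

Note that passing this game measures only skill at imitating one particular species over a text channel – it says nothing about the slug-people.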

I understand that the concept of machine vs. alien intelligence is a little different – we designed the machine, so the behavior of machines is (maybe) more obviously predetermined than that of anything else we have cause to interact with. But I wonder what evidence it would take for us to conclude that, say, dolphins were conscious. Do signing gorillas count as conscious beings? Do signing machines? Where do we draw the line, and why?

4 thoughts on “On Consciousness and the Animal Kingdom”

  1. One (subjective) definition – a conscious being is any being for whom we can reasonably feel empathy.

  2. You made me think of this quote from Dijkstra, who blames von Neumann for introducing the anthropomorphic metaphor into computing.

    “… the anthropomorphic metaphor —for whose introduction we can blame John von Neumann— is an enormous handicap for every computing community that has adopted it. I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions. The analogy that underlies this personification is so shallow that it is not only misleading but also paralyzing.

    It is misleading in the sense that it suggests that we can adequately cope with the unfamiliar discrete in terms of the familiar continuous, i.e. ourselves, quod non. It is paralyzing in the sense that, because persons exist and act in time, its adoption effectively prevents a departure from operational semantics and thus forces people to think about programs in terms of computational behaviours, based on an underlying computational model. This is bad, because operational reasoning is a tremendous waste of mental effort.”

    http://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD1036.html

  3. I also found Kurzweil’s discussion of consciousness very interesting. I think the idea of “consciousness” itself is muddy because it is human-centric, as are many of Kurzweil’s discussions of consciousness. I found myself reading about and reviewing the definition of consciousness, yet I am still not clear on exactly what it is. A being that can, while eating, think to itself “I am eating right now” or “I am thinking about myself eating right now” is perhaps conscious, and this visualization is something I use to anchor myself back into an understanding of consciousness. I wonder, now: can a machine ever truly be self-aware?

    1. Tapan gives a reasonable definition in terms of empathy, but I think the issue is that when people describe consciousness, they are often doing so with the specific goal of describing precisely what it is that sets humans apart from the animal kingdom. This makes for something of a moving goalpost, and often (I think) ends up just off-loading ambiguity onto the terms used to define it. For example, what does self-awareness really involve? I can certainly write code that jots a note in a log file every time it calls Belly.fill(food) – see the sketch below – but does that mean that it’s really aware of eating? When do we cross the line into awareness?
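       To make that concrete, here is a minimal sketch of the kind of self-logging code I mean. Belly, its fill method, and the log format are all hypothetical illustrations; the point is only that the log entry proves nothing about awareness:

           import logging

           # Toy version of "code that jots a note in a log file".
           logging.basicConfig(filename="introspection.log", level=logging.INFO)

           class Belly:
               def __init__(self):
                   self.contents = []

               def fill(self, food):
                   self.contents.append(food)
                   # The "self-aware" part: the program records a fact
                   # about its own state change. Awareness, or bookkeeping?
                   logging.info("I am eating %s right now", food)

           belly = Belly()
           belly.fill("toast")  # logs "I am eating toast right now"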
