There is a lot that Kurzweil has to say that I find suspect – his rapturous belief in the imminent arrival of meaningful machine intelligence, in particular, is something I find hard to believe, as neat as it might be. But his discussion of consciousness, and in particular his reference to the ‘consciousness’ of whales and elephants, did get me thinking about the way we tend to define a potential machine consciousness.
There are a couple of ways that we tend to define things as being, at the very least, alive – as Andrew discusses, there’s a will to survive and reproduce, which has been the subject of probably every pulpy thriller ever written about machine intelligence. But beyond those basic attributes, we tend to couch our conception of an intelligent machine in explicitly human terms. Why would this need to be the case? In this regard, the Turing test has always struck me as a little ridiculous. If we were to make extraterrestrial contact with a race of space-faring slug-people, would the only sign of intelligence we accept be an alien’s ability to convincingly mimic a human in what amounts to a game played at a dinner party? I’d assert probably not.
I understand that the concept of machine vs. alien intelligence is a little different – we designed the machine, so the behavior of machines is (maybe) more obvious in its predetermination than that of any other thing we have cause to interact with. But I wonder what evidence it would take for us to conclude that, say, dolphins were conscious. Do signing gorillas count as conscious beings? Do signing machines? Where do we draw the line, and why?