Category Archives: Kurzweil and Fukuoka

Peas in a Pod

I was actually as struck by the similarities between Kurzweil and Fukuoka as by the differences.

In particular, many of the algorithmic techniques that Kurzweil seemed most optimistic about were emergent in some way – algorithms that learn through many iterations of “experience”.
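To make that concrete, here’s a toy sketch of what I mean by an algorithm that learns through iterations of “experience”. The two-option “farming” environment and its payoff numbers are entirely my invention, just for illustration; nothing about the final behavior is specified up front, and a preference simply emerges from accumulated trial and error:

```python
import random

# Two hypothetical actions with hidden payoff probabilities the
# learner never sees directly - it only experiences outcomes.
hidden_payoffs = {"till the field": 0.3, "leave it alone": 0.7}
estimates = {action: 0.0 for action in hidden_payoffs}
counts = {action: 0 for action in hidden_payoffs}

for trial in range(10_000):
    # Mostly exploit what experience suggests; occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(hidden_payoffs))
    else:
        action = max(estimates, key=estimates.get)

    # "Experience" one outcome of the chosen action.
    reward = 1.0 if random.random() < hidden_payoffs[action] else 0.0

    # Fold that experience into a running average for the action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the better option has "emerged" from trial and error
```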

In some sense, isn’t the ideal farmer also an example of such a learning mechanism – one that can appreciate the delicate and subtle interactions of many insects, weeds and natural species?

Or, is there something missing? Humility perhaps?

I’m also reading Jane Jacobs at the moment, and I was struck by some of the overlaps between her thinking and Fukuoka’s – respecting “natural” diversity, “optimal” “solutions” arising out of many individual micro-interactions, and the best heuristics being those learned over many iterations of trial and error.

Is that really so different from what Kurzweil is proposing? Or does the very idea of databases and computation, and the abstraction they entail, reduce and simplify in a way that loses the essence?

“Ideal” science plants

I was talking with a friend about the readings for this week, and he was defensive about scientific farming, saying that plants evolve in nature to survive their environment, not to be ideal for their environment – something that science is just now making possible. If that’s true, does it change the conversation at all?

I wonder if this issue is less about the health of the plants and more about scale. When I imagine even the family farmers I know reading this work, I think they would find it impractical – not because of their plants’ and soil’s well-being, but because they wouldn’t be able to produce as much without their equipment and chemicals. Do you think we would have more of a reason to be hopeful if it were the other way around?

I think the affinity for nature is common in these alternative visions. What is the place of technology if our goal is living closer to nature?

Kurzweil and Memory

Kurzweil notes that a Jack with neural implants to enhance his memory is still the “same Jack”. I paused after reading this bit to mull over the idea more thoroughly. One’s entire archive of memories, which is deeply personal to the individual, is seemingly what makes that person that person. I am inclined to believe that memory is intricately intertwined with not only decision making but the individual’s personality as well. My own memory of a TV documentary, albeit a very cheesy one, brings forth the story of a person with such powerful recall that her memory tormented her daily routine; she wished she could forget the mindless things her memory forced her to remember.

The hypothetical Jack seems to suffer a similar fate – encountering memories he would have preferred to stay dim. Therefore, the act of “forgetting” seems to be as much a pillar of human behavior as the act of remembering (I’ve never read Proust’s “Remembrance of Things Past”, though its theme of involuntary memory seems to be particularly applicable). Is Jack the “same” person if his memory is significantly enhanced? I’m not so sure, but I would argue that his personality would not be the same.

To complicate things further, the act of mis-remembering seems to be a very human characteristic. I don’t think memories, in and of themselves, are ever truly “objective” experiences. Personal memories may ride the wave of subjectivity so far as to attain the level of “false memories”. Our brains make things up. Perhaps this is evolutionarily driven, a mechanism that arose out of our desire to create meanings and symbols. How (and should we) design technologies to help us “forget”? (I, for one, have too many mobile photos – thousands – and am still trying to figure out how to habitually and efficiently curate them.)
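As a thought experiment on that question, here is roughly what a photo-“forgetting” heuristic might look like. Everything here – the fields, the threshold, the decay rule – is hypothetical, invented just to illustrate the idea of designed forgetting:

```python
import random
from datetime import datetime, timedelta

# Hypothetical decay rule: a photo's score rises with how often we
# return to it and fades with age, so neglected photos dim over time.
def fade_score(taken, times_revisited, now=None):
    now = now or datetime.now()
    age_days = (now - taken).days
    return times_revisited / (1 + age_days / 365)

# A made-up library of photos with random ages and revisit counts.
photos = [
    {
        "name": f"IMG_{i:04d}",
        "taken": datetime.now() - timedelta(days=random.randint(0, 2000)),
        "times_revisited": random.randint(0, 20),
    }
    for i in range(1000)
]

# Keep what we keep coming back to; let the rest quietly fade from view.
keep = [p for p in photos if fade_score(p["taken"], p["times_revisited"]) > 0.5]
print(f"kept {len(keep)} of {len(photos)} photos")
```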

It seems like our flaws and mental blemishes contribute to “what makes us human”. As a result, the idea of human “consciousness”, if uniquely human, should perhaps account for the imperfections and transience of human mental faculties (to borrow from the Japanese aesthetic of “Wabi-Sabi”). To address Kurzweil’s question: if, by 2030, machines claim Descartes’ dictum, “I think, therefore I am”, should we believe them? How do we distinguish human “consciousness” from machine “consciousness”, and how can we escape our human-centric perspectives on consciousness?

What the human goal is

I do not care about qualitatively distinguishing humans from other machinery that thinks. Can someone explain to me a concrete outcome we stand to gain from making this comparison?

When we are overloaded with information, tech provides us filtering algorithms; nature provides us intuition.

I find this comparison problematic. To compare intuition to a filtering algorithm is to imply that intuition does nothing but prune possibilities. Why can’t intuition be a process that generates possibilities? Given what we know about the brain’s general tendency toward feedback & interconnectivity, it seems more likely that the phenomenon of “intuition” is better accounted for by an iterative feedback loop between the generation and assessment of possible interpretations, until a few trickle up to our conscious mind’s consideration.
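To sketch the shape of that loop in code – with the caveat that the numeric “interpretations” and the scoring rule are stand-ins I made up, not a model of any real cognition – generation and assessment alternate, feeding back into each other, until a few high-scoring candidates surface:

```python
import random

def generate(seeds):
    """Propose new candidate interpretations by riffing on the seeds."""
    return seeds + [s + random.gauss(0, 0.5) for s in seeds]

def assess(candidate, target=3.0):
    """Score a candidate; here, simply closeness to a hidden 'good fit'."""
    return -abs(candidate - target)

# Start from a spread of rough guesses.
candidates = [random.uniform(0, 10) for _ in range(8)]

# Iterate the generate-assess feedback loop: each round's survivors
# seed the next round's generation.
for _ in range(50):
    candidates = sorted(generate(candidates), key=assess, reverse=True)[:8]

# Only the few candidates that clear a confidence threshold "trickle up".
conscious = [c for c in candidates if assess(c) > -0.1]
print(conscious)
```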

How do we contrast algorithms and intuition?  Put another way, can all problems be solved?  Is a death a problem to be solved?

What is the relationship between these three questions? Am I to believe that intuition is what makes problems solvable? And what does it matter to contrast algorithms and intuitions? We could say that an intuition can be modeled algorithmically, and we could also say that an algorithm could be conceptualized as an intuition. Both might miss crucial points, and I am missing what we stand to gain from the comparison.

Doctors have a Hippocratic Oath.  What about the twenty-something engineers, who are just trying to do ‘cool things’, and affecting how trillions of people connect?

This is an interesting question. Are there cases in which ethically motivated policies similar to the Hippocratic Oath have been embedded in software code? (DRM comes to mind, though there is an economic side to it as well.)

We tend to view labor as toil.  What if labor wasn’t something to be done away with, but rather an instrument for transformation that sustained a gift ecology?

I reject the jump from “labor as toil” to “labor as something to be done away with.” I’m sure Gandhi would too. Regardless, Huizinga’s idea of Homo ludens has some bearing here, and could be framed in terms of a gift ecology.

Should I accomplish nothing today?

‘To think that the one who is smart and can look out for himself is exceptional, and that it is better to be exceptional, is to follow “adult” values. The one who goes about his own business, eats and sleeps well, the one with nothing to worry about, would seem to me to be living in the most satisfactory manner. There is no one so great as the one who does not try to accomplish anything.’

Would society be better if we stopped trying to accomplish? Fukuoka argues that “progress” and “accomplishment” have not driven much positive change in the modern world. Although GDP has grown, happiness hasn’t necessarily (or so he says).

I think I agree with a carefully qualified version of this argument. I don’t think accomplishments are always bad. The end of Jim Crow is an undeniable “accomplishment” for civil rights advocates. The end of polio in many countries seems to be a pretty undeniable accomplishment for world health. But I do think that people driven solely by a desire to achieve a particular, pre-determined result often wreak unintentional havoc. The problem is not really with accomplishments themselves, but with the act of trying to rack them up. Perhaps a better way to live is to be driven by values, not by things you want to accomplish?

On Consciousness and the Animal Kingdom

There is a lot that Kurzweil has to say that I find suspect – his rapturous belief in the imminent arrival of meaningful machine intelligence in particular is something I find hard to believe, as neat as that might be.  But his discussion of consciousness, and in particular his reference to the ‘consciousness’ of whales and elephants, did get me thinking about the way we tend to define a potential machine consciousness.

There are a couple of ways that we tend to define things as being, at least, alive – as Andrew discusses, there’s a will to survive and reproduce, which has been the subject of probably every pulpy thriller written about machine intelligence.  But beyond those basic attributes, we tend to pretty much couch our conception of an intelligent machine in explicitly human terms.  Why would this need to be the case?  In this regard, the Turing test has always struck me as a little ridiculous.  If we were to make extraterrestrial contact with a race of space-faring slug-people, would the only sign of intelligence we accept be an alien’s ability to convincingly mimic a human in what amounts to a game played at a dinner party?  I’d assert probably not.

I understand that the concept of machine vs. alien intelligence is a little different – we have designed the machine, so the behavior of machines is (maybe) more obvious in its predetermination than that of any other thing we have cause to interact with.  But I wonder: what evidence would it take for us to conclude that, say, dolphins were conscious?  Do signing gorillas count as conscious beings?  Do signing machines?  Where do we draw the line, and why?

What Kurzweil misses

I like the contrast of this week’s readings. I have read Kurzweil before and always felt he was too overwhelmingly positive about technological advancement – particularly his confident belief in the possibilities of AI. I simply don’t buy it. At least in my lifetime, we will never seriously question whether we’ve achieved artificial intelligence or not. The goalpost is much farther off than he supposes.

There are three aspects of life Kurzweil overlooks in his analysis of consciousness and intelligence. The first is the will to survive. Human beings – in fact, all life – share a will to survive and postpone death for as long as possible. This will is more than just a programmable response; it overrides everything else and in many ways defines life.

The second is the will to reproduce. All organisms desire to reproduce themselves, and without this will to reproduce they themselves would cease to be. This could possibly be programmed into a machine, but I would not consider a machine conscious until it desired its own reproduction.

The third is the journey – the training, the learning mechanism. Even if a machine could write this blog article, it would not have become a writer of blog articles in the same way I did. Its journey would have been different. Whether this matters in determining consciousness or intelligence is possibly immaterial, but it is a difference that will never be reconciled between humans and machines.

What do folks think of these three aspects? Are they irrelevant since we can program a machine to emulate them? Can they never be reproduced in machines?