
Kurzweil and Memory

Kurzweil notes that a Jack with neural implants to enhance his memory is still the “same Jack”. I paused after reading this to mull the idea over more thoroughly. One’s entire archive of memories, deeply personal to the individual, is seemingly what makes that person that person. I am inclined to believe that memory is intricately intertwined not only with decision making but with the individual’s personality as well. My own memory of a TV documentary, albeit a very cheesy one, brings forth the story of a person with such powerful recall that her memory tormented her daily routine; she wished she could forget the mindless things her memory forced her to remember.

The hypothetical Jack seems to suffer a similar fate – encountering memories he would have preferred stayed dim. Therefore, the act of “forgetting” seems to be as much a pillar of human behavior as the act of remembering (I’ve never read Proust’s “Remembrance of Things Past”, though its theme of involuntary memory seems particularly applicable). Is Jack the “same” person if his memory is significantly enhanced? I’m not so sure, but I would argue that his personality would not be the same.

To complicate things further, the act of mis-remembering seems to be a very human characteristic. I don’t think memories, in and of themselves, are ever truly “objective” experiences. Personal memories may ride the wave of subjectivity so far as to become “false memories”. Our brains make things up. Perhaps this is evolutionarily driven, a mechanism that arose out of our drive to create meanings and symbols. How should we design technologies to help us “forget”, and should we at all? (I, for one, have thousands of mobile photos, and I am still trying to figure out how to curate them habitually and efficiently.)
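As a thought experiment, here is a minimal sketch of what a designed “forgetting” policy might look like for that photo problem. Everything here is invented for illustration (the `Photo` record, the one-year grace period); no real photo app works this way. The point is only that forgetting can be a policy rather than a deletion:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Photo:
    path: str
    taken: datetime
    last_viewed: datetime
    favorited: bool = False

def fade(photos, now=None, grace=timedelta(days=365)):
    """Toy 'forgetting' policy: photos never revisited within the grace
    period fade out of the main library (archived, not deleted),
    unless explicitly favorited."""
    now = now or datetime.now()
    kept, faded = [], []
    for p in photos:
        if p.favorited or now - p.last_viewed < grace:
            kept.append(p)
        else:
            faded.append(p)  # candidates for the archive, out of sight
    return kept, faded
```

The interesting design question is what “faded” should mean: deletion is irreversible in a way human forgetting is not, so an archive that can resurface things unbidden (involuntary memory, again) seems closer to the biological behavior than a trash can does.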

It seems that our flaws and mental blemishes contribute to “what makes us human”. As a result, the idea of human “consciousness”, if uniquely human, should perhaps account for the imperfections and transience of human mental faculties (to borrow from the Japanese aesthetic of wabi-sabi). To address Kurzweil’s question: if, by 2030, machines claim Descartes’ dictum, “I think, therefore I am”, should we believe them? How do we distinguish human “consciousness” from machine “consciousness”, and how can we escape our human-centric perspectives on consciousness?

what the human goal is

I do not care about qualitatively distinguishing humans from other machinery that thinks. Can someone explain to me a concrete outcome we stand to gain from making this comparison?

When we are overloaded with information, tech provides us filtering algorithms; nature provides us intuition.

I find this comparison problematic. To compare intuition to a filtering algorithm is to imply that intuition does nothing but prune possibilities. Why can’t intuition be a process that generates possibilities? Given what we know about the brain’s general tendency toward feedback & interconnectivity, it seems more likely that the phenomenon of “intuition” is better accounted for by an iterative feedback loop between the generation and assessment of possible interpretations, until a few trickle up to our conscious mind’s consideration.
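To make the contrast concrete, here is a toy sketch, not a model of cognition: `mutate` and `plausibility` are made-up stand-ins, and the point is only the shape of the loop, where each round of assessment feeds its survivors back into generation instead of a filter running once over a fixed pool.

```python
import random

def mutate(candidate, vocabulary):
    """Toy 'generation': perturb a candidate by swapping in a random word."""
    words = candidate.split()
    words[random.randrange(len(words))] = random.choice(vocabulary)
    return " ".join(words)

def plausibility(candidate, context):
    """Toy 'assessment': score a candidate by word overlap with the context."""
    return len(set(candidate.split()) & set(context.split()))

def intuit(context, seeds, vocabulary, rounds=5, branch=10, keep=3):
    """Alternate between generating interpretations and assessing them;
    each round's survivors seed the next round's generation, so the
    loop creates possibilities rather than only pruning them."""
    pool = list(seeds)
    for _ in range(rounds):
        pool += [mutate(random.choice(pool), vocabulary) for _ in range(branch)]
        pool = sorted(pool, key=lambda c: plausibility(c, context),
                      reverse=True)[:keep]
    return pool  # the few interpretations that "trickle up"
```

A pure filtering algorithm would be the `sorted(...)[:keep]` step run once over a fixed candidate list; the loop above keeps manufacturing new candidates from each round’s survivors, which is exactly the generative character the filtering metaphor leaves out.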

How do we contrast algorithms and intuition?  Put another way, can all problems be solved?  Is a death a problem to be solved?

What is the relationship between these three questions? Am I to believe that intuition is what makes problems solvable? And what does it matter to contrast algorithms and intuitions? We could say that an intuition can be modeled algorithmically, and we could also say that an algorithm could be conceptualized as an intuition. Both might miss crucial points, and I am missing what we stand to gain from the comparison.

Doctors have a Hippocratic Oath. What about the twenty-something engineers, who are just trying to do ‘cool things’, and affecting how billions of people connect?

This is an interesting question. Are there cases in which ethically motivated policies similar to the Hippocratic Oath have been embedded in software code? (DRM comes to mind, though there is an economic side to it as well.)

We tend to view labor as toil.  What if labor wasn’t something to be done away with, but rather an instrument for transformation that sustained a gift ecology?

I reject the jump from “labor as toil” to “labor as something to be done away with.” I’m sure Gandhi would too. Regardless, Huizinga’s notion of Homo ludens has some bearing here, and could be framed in terms of a gift ecology.

Should I accomplish nothing today?

‘To think that the one who is smart and can look out for himself is exceptional, and that it is better to be exceptional, is to follow “adult” values. The one who goes about his own business, eats and sleeps well, the one with nothing to worry about, would seem to me to be living in the most satisfactory manner. There is no one so great as the one who does not try to accomplish anything.’

Would society be better if we stopped trying to accomplish? Fukuoka argues that “progress” and “accomplishment” have not driven much positive change in the modern world. Although GDP has grown, happiness hasn’t necessarily (or so he says).

I think I agree with a carefully qualified version of this argument. I don’t think accomplishments are always bad. The end of Jim Crow is an undeniable “accomplishment” for civil rights advocates. The elimination of polio in many countries seems a similarly undeniable accomplishment for world health. But I do think that people driven solely by a desire to achieve a particular, predetermined result often wreak unintentional havoc. The problem is not really with accomplishments themselves, but with the act of trying to rack them up. Perhaps a better way to live is to be driven by values, not by things you want to accomplish?

On Consciousness and the Animal Kingdom

There is a lot that Kurzweil has to say that I find suspect – his rapturous belief in the imminent arrival of meaningful machine intelligence in particular is something I find hard to believe, as neat as that might be. But his discussion of consciousness, and in particular his reference to the ‘consciousness’ of whales and elephants, did get me thinking about the way we tend to define a potential machine consciousness.

There are a couple of ways that we tend to define things as being, at least, alive – as Andrew discusses, there’s a will to survive and reproduce, which has been the subject of probably every pulpy thriller written about machine intelligence. But beyond those basic attributes, we tend to pretty much couch our conception of an intelligent machine in explicitly human terms. Why would this need to be the case? In this regard, the Turing test has always struck me as a little ridiculous. If we were to make extraterrestrial contact with a race of space-faring slug-people, would the only sign of intelligence we accept be an alien’s ability to convincingly mimic a human in what amounts to a game played at a dinner party? I’d assert probably not.

I understand that the concept of machine vs. alien intelligence is a little different – we have designed the machine, so the behavior of machines is (maybe) more obvious in its predetermination than that of any other thing we have cause to interact with. But I wonder what evidence it would take for us to conclude that, say, dolphins were conscious. Do signing gorillas count as conscious beings? Do signing machines? Where do we draw the line, and why?

What Kurzweil misses

I like the contrast of this week’s readings. I have read Kurzweil before and always felt he was too overwhelmingly positive about technological advancement, particularly his confident belief in the possibilities of AI. I simply don’t buy it. At least in my lifetime, we will never question whether we’ve achieved artificial intelligence or not. The goalpost is much farther than he supposes.

There are three aspects of life Kurzweil overlooks in his analysis of consciousness and intelligence. The first is the will to survive. Human beings, in fact all life, share a will to survive and postpone death for as long as possible. This will is more than just a programmable response; it overrides everything else and in many ways defines life.

The second is the will to reproduce. All organisms desire to reproduce themselves, and without this will to reproduce they themselves would cease to be. This could possibly be programmed into a machine, but I would not consider a machine conscious until it desired its own reproduction.

The third is the journey, the training, or the learning mechanism. Even if a machine could write this blog article, it would not have become a writer of blog articles in the same way I did. Its journey would have been different. Whether this difference bears on consciousness or intelligence is perhaps immaterial, but it is a difference that will never be reconciled between humans and machines.

What do folks think of these three aspects? Are they irrelevant since we can program a machine to emulate them? Can they never be reproduced in machines?
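To make the first question concrete, here is a deliberately hollow sketch (all of it invented) of what “programming a machine to emulate them” could mean: a few lines that satisfy the letter of the first two aspects.

```python
import copy
import random

class Agent:
    """A machine that 'emulates' the first two aspects in a few lines:
    it avoids running out of energy (survival) and copies itself when
    it can afford to (reproduction). The hollowness is the point."""
    def __init__(self, energy=10):
        self.energy = energy

    def step(self, world):
        self.energy -= 1                      # living costs energy
        if self.energy <= 0:
            world.agents.remove(self)         # "death"
        elif self.energy < 5:
            self.energy += world.forage()     # "will to survive"
        elif self.energy > 8:
            self.energy -= 4                  # reproduction has a cost
            world.agents.append(copy.deepcopy(self))  # "will to reproduce"

class World:
    def __init__(self, n=3):
        self.agents = [Agent() for _ in range(n)]
    def forage(self):
        return random.randint(0, 3)
    def tick(self):
        for agent in list(self.agents):       # copy: agents mutate the list
            agent.step(self)
```

Run a `World()` through a few hundred `tick()`s and it will “survive” and “reproduce” indefinitely, yet it is obviously not alive. The real question is whether that obviousness would hold up under arbitrary increases in sophistication, or whether, as I suspect, the journey (the third aspect) is what no amount of emulation recovers.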

Tools for changing institutions?

“Seeking salvation through tools alone is no more viable as a political strategy than addressing the ills of capitalism by cultivating a public appreciation of arts and crafts.”

Do you agree?

If so, are all tools that focus on individual self-empowerment while ignoring broader, systemic social problems missing the point? Are there tools that address broader “ills” in their design, or does a broader campaign always need to accompany a tool to make it a successful change agent?

A different sort of “human scale” technology

Bookchin’s ideas parallel those of Schumacher in some ways. Both call for ‘local’ technology, suited to needs at smaller scales than the massive factories they mention. Bookchin seems, at times, to take his argument to a different end than Schumacher, though. At once, he proposes adapting large-scale technologies to smaller-scale needs (as opposed to more appropriate technologies) and/or adopting smaller-scale technologies at the local level. He does this in a way I find difficult to imagine Schumacher promoting:

“Some of the most promising technological advances in agriculture made since World War II are as suitable for small-scale, ecological forms of land management as they are for the immense, industrial-type commercial units that have become prevalent over the past few decades. Let us consider an example. The augermatic feeding of livestock illustrates a cardinal principle of rational farm mechanization— the deployment of conventional machines and devices in a way that virtually eliminates arduous farm labor… This type of mechanization is intrinsically neutral: it can be used to feed immense herds or just a few hundred head of cattle… In short, augermatic feeding can be placed in the service of the most abusive kind of commercial exploitation or of the most sensitive applications of ecological principles.”

I also appreciate his strong belief in the ability of communities and ‘communitarians’ to self-regulate and maintain their own ecosystems while adopting new technologies. I’m not sure this has fully come to pass, but perhaps it continues to be an ideal to strive toward.

Finally, I’m reminded of our conversation a couple of weeks ago about the hobby-ification of certain once-tedious tasks. This comes up in both the Bookchin and Morozov pieces. Bookchin puts it interestingly with reference to food cultivation:

“Relieved of toil by agricultural machines, communitarians will approach food cultivation with the same playful and creative attitude that men so often bring to gardening. Agriculture will become a living part of human society, a source of pleasant physical activity and, by virtue of its ecological demands, an intellectual, scientific and artistic challenge.”

Seems somewhat prescient, though again perhaps not at the scale of enterprise Bookchin imagined.

Finally, an optimist… but…

Bookchin’s optimism shines through every page. And indeed, in the half-century that has passed since his essay was written, we have achieved several of the technological advances that he describes. Yet the political and social changes that are also critical to the achievement of Bookchin’s vision (as noted by Morozov in his reference to Bookchin) have arguably not come about. Why is this? What are we missing? What would be your vision for a manner or method by which we could radically shift political and social organization toward one more conducive to using technology for our own liberation? Or do you believe that such a shift is an impossible dream?

“Human Wholeness”

“Human wholeness” seems to be a central theme in Bookchin’s “Towards a Liberatory Technology”. He deems the true issue with new technology to be whether it can help “humanize” society. As I discussed in an earlier blog post, it seems as if our readings can be divided into two general lines of thinking: those of “inventors” and those of “humanists”, and Bookchin seems to fall into the latter category. Tagore, Schumacher, and Gandhi all touch upon this idea of human “wholeness” and what it means to be “human”.

In addition to the theme of “humanism”, Bookchin often discusses “creativity”. He writes that technology can play a significant role in the formation of personality, and that every art has its “technical side”. Further, he mentions that “Art would assimilate technology by becoming social art, the art of the community as a whole”. Tagore might view such an idea as productive toward conquering “limited reality” (which I found similar to Bookchin’s idea of the formation of “personality”) and achieving the “Creative Ideal”. Schumacher notes (from a Buddhist point of view) that it is erroneous to consider consumption more important than creative activity, and that “the less toil there is, the more time and strength is left for artistic creativity.” In a similar vein, Bookchin notes that, in a liberated society, technology will not be negated, but used to “remove the toil from the production process, leaving its artistic completion to man”. I found his claim that “The machine, in effect, will participate in human creativity” particularly interesting, as it grants notable agency and “human-ness” to technology.

While I do believe that Bookchin has a significantly more optimistic view of new technology than Gandhi, they both share an interest in the “liberation” of man and its connection to technology. I enjoyed how Bookchin symbolized “liberatory technology” as the abolition of mining, as mining is “man’s image of hell”. Perhaps more than the other authors we’ve read, Bookchin places heavy emphasis on the natural world, which he claims we must reintroduce into our human experience to achieve “human wholeness”.

1) Bookchin’s vision is wrong in the sense that, while much toil has been removed from the production process with the introduction of new technology, more work has been required of workers (technology workers). He envisions that artistic creativity is, to the delight of workers, added after the toil of work has been completed. How is this view problematic?

2) Creativity seems to be a distinctly “human” trait and a theme shared amongst many of the authors we’ve read. Can machines really participate in human creativity? Can machines be “creative”, and can they replace human creativity? (In other words, what the hell does “creativity” even mean?)

ooh, shiny!

One of the problems with hackathons is that they place too much focus on speed and novelty, leading to a hacker culture which expects us to build something quickly, then move on to the next new thing. The maker culture isn’t that much different: makers build something based on a new idea, just to try it out. In many cases, this is a good thing: innovation thrives when we push the boundaries of what we know. But when we focus more on “newer” and “sooner” we tend to skip steps, or make sacrifices which result in poor long-term quality.

It would be false to assume, however, that this mentality is restricted to hackers and makers; I think this is systemic across most human activities. We touched on some topics of power and dominance when we read Freire, and I think those themes are especially relevant here, where “newer” is assumed to be “better” and “sooner” really means “before anybody else.”

I think I am much more pessimistic than Bookchin, and I really liked this line: “We are still the offspring of a violent, blood-soaked, ignoble history–the end products of man’s domination of man. We may never end this condition of domination.” While Bookchin felt that it was possible to end that cycle through anarchy, I am not so certain.

Am I wrong to think so pessimistically of the human race? Give me some hope that we as a society can eventually move beyond “Ooh, shiny!” to some state of mind based on a better good.