What’s wrong with stories?

Many of Nelson’s concepts for future information retrieval allow us to traverse text non-linearly and unpredictably. In “Some Premises Relevant to Teaching,” he claims “there is no natural order or sequence” of teaching. Hypertext, hypermaps, hypergrams, and his other IR suggestions give the user a tremendous amount of choice about where to move.

Where do stories, as a tool for teaching, fit into this view? Isn’t it sometimes valuable to force ourselves to listen to facts presented in a certain order? Is it possible to have stories that don’t imply a “natural order or sequence”?

I’m skeptical that our brains can handle as much “on the fly” connecting and arranging as Nelson claims they can. I personally find clear sequences of facts and texts very helpful in learning a subject. I also know that there are fundamental limits on our short-term working memory that make thinking *harder* when we face complex information structures like the ones Nelson describes.


Thirteen years he dedicated to these heterogeneous tasks, but the hand of a stranger murdered him–and his novel was incoherent and no one found the labyrinth.

the dreamfile: the file system that would have every feature…

in many ways, Nelson’s writing reminds me of the 19th and 20th century utopian thinkers who would carefully design (on paper) their brave new cities, occasionally making forays into actually building them. much like Nelson’s experiments with links and filestructures, none of these utopias were lastingly used.

computers, once a branch of mathematics, are now their own field (but the development of fluid logic indicates a possible merger with the art of wind instruments).

as with the pile of discarded utopias, it’s worth wondering why Nelson’s ideas never found lasting traction. his idea of two-way links had very broad critical appreciation (and it’s still championed by thinkers like Jaron Lanier, who also happens to love wind instruments). yet the system we all use is Berners-Lee’s haphazardly-constructed HTTP, built as a side project during his tenure at CERN.
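to make the contrast concrete, here’s a minimal sketch in python of what a two-way link buys you. the names and structure are mine, invented purely for illustration; this is nothing like Xanadu’s actual design.

```python
# a toy contrast between one-way (web-style) and two-way (Nelson-style)
# links. all names are invented for illustration; this is not Xanadu.

class Page:
    def __init__(self, title):
        self.title = title
        self.links = []      # outgoing links: all that HTML records
        self.backlinks = []  # incoming links: what a two-way system adds

def link_one_way(src, dst):
    # HTTP/HTML: only the source knows the link exists. if dst moves,
    # or wants to list its citers, there is no local record to consult.
    src.links.append(dst)

def link_two_way(src, dst):
    # Nelson-style: both endpoints are updated, so dst can answer
    # "who points at me?" without crawling the whole network.
    src.links.append(dst)
    dst.backlinks.append(src)

a, b = Page("enquire"), Page("xanadu")
link_two_way(a, b)
print([p.title for p in b.backlinks])  # ['enquire']
```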

yes, CERN let the web project (a successor to Berners-Lee’s earlier ENQUIRE) proceed and eventually released it royalty-free, yadda yadda, but why did people take up that system?

The author of an atrocious undertaking ought to imagine that he has already accomplished it, ought to impose upon himself a future as irrevocable as the past

perhaps writing about plans is the whole problem. to my knowledge, Berners-Lee never published anything about hypertext systems. he simply built one. perhaps the HTTP/HTML-industrial complex gained traction simply because it existed.

why do you think Nelson’s ideas about two-way links didn’t catch on? why do you think they haven’t been implemented by someone else?

A Garden of Forking Paths & You: What Technology Is, Is Not, and Might (Never) Be

Ed: I kind of just kept writing for longer than I expected.  Apologies for the long post! 

The tech community has taken and run with more than one Borges short story over the years (for another example of an influential story, see The Library of Babel).  And I can’t blame them: Borges writes some pretty cool stuff.  But in a world where the Web is ubiquitous, it’s easy to overlook the sweeping influence that The Garden of Forking Paths and the system of hypertext it inspired have had on the world.  By the same token, it’s easy to miss the distinctions between what hypertext is, what it was intended to be, and what it might (not) become.

The Garden of Forking Paths itself is far from a technical specification for hypertext.  It is a meditation on possible futures, forking timelines, and agency, built around an unfinished novel of infinite possibilities.  But it is precisely that novel, branching into all possible futures rather than flowing toward one or another, that is often credited as the inspiration for hypertext.  Indeed, Nelson’s proposed system, underneath the technical description, pretty closely mirrors the book in Borges’s story: an arbitrarily large collection of ordered lists of text (paths), arbitrarily linking to one another (branching), allowing lists to link or converge, and allowing a user to trace a relatively ordered path through a series of interconnected pieces of text.  Nelson’s vision is optimistic – writers will be able to drastically cut their workload, annotations built upon annotations will let us disambiguate our work to any desired level of precision, and changes made in one list will propagate effortlessly into anything related.
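To make the structural analogy concrete, here is a minimal sketch of lists of text that fork, converge, and get traced in order.  It is my own toy model with invented names, simplified far past Nelson’s actual proposal, but it captures the shape described above:

```python
# A toy model of text that forks and converges, loosely after Nelson's
# linked lists and Borges's garden. Invented names; far simpler than
# the actual proposal.

class Passage:
    def __init__(self, text):
        self.text = text
        self.next = []  # zero or more possible continuations (forks)

    def fork_to(self, *passages):
        self.next.extend(passages)

def trace(passage, choose):
    """Yield one path's text, using `choose` to pick among forks."""
    while passage is not None:
        yield passage.text
        passage = choose(passage.next) if passage.next else None

start = Passage("Ts'ui Pen begins his novel.")
war = Passage("The army marches to battle.")
peace = Passage("The envoy sues for peace.")
finale = Passage("Every future arrives at the labyrinth.")
start.fork_to(war, peace)   # the narrative branches...
war.fork_to(finale)
peace.fork_to(finale)       # ...and the branches converge

# One reader's relatively ordered path through the interconnected text:
print(" ".join(trace(start, choose=lambda forks: forks[0])))
```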

The reality of HTML and the world wide web is an echo of what Nelson envisioned.  Many of the pieces are there: the web is an enormously complex series of linked pages, and the freedom to interconnect disparate pages has laid the foundation for the unprecedented flow of information the internet has allowed around the world.  Yet for all the convenience the web has afforded us, it is difficult to say that much has significantly shifted about the way we compose or consume information.  As far as I’m aware, (long-form) authors have mostly moved from writing linear paper manuscripts to writing linear digital manuscripts.  Wikipedia, perhaps the poster child for a source of robust, interconnected information, feels more like a traditional encyclopedia with conveniently locatable citations than anything thoroughly different from its ink-and-paper forebears.  Though I might be fooled by my dearth of pre-internet experience, I cannot help but feel like I’m waiting for the other shoe to drop.

Nelson’s Dream Machines, as tangled and confusing and immature as it can seem, provides a good counterpoint to the very same optimism Nelson expresses in his earlier paper.  Much of the text is spent simply describing what we do and don’t know about computing, and it largely leaves the reader to draw conclusions.  But the overarching slant of the piece pushes back on the sort of technological futurism that we saw with Engelbart and Licklider.  Many ‘prophets’ of technology have forecast drastic shifts in humanity and technology as time passes – from Licklider’s human-computer symbiosis to Kurzweil’s singularity.  Technology and its supposedly relentless advance have at times been charted as some sort of linearly or exponentially accelerating force of nature (see this ridiculous graph, which equates the development of eukaryotic life and personal computers).  Nelson, by contrast, points to what remains a chorus among many computer scientists: “We can’t do that yet.”  Early in the reading, Nelson makes a simple and obvious observation: perhaps the potential of technology is not so limitless as it is so often portrayed.

In many of the readings we have done to date, industrialization and the rise of technology have been portrayed as an inevitable and fundamentally transformative force.  Reactions to this vary – Gandhi argues for a return to the Charkha, Tagore points toward a loss of fundamental wholeness, Licklider and Engelbart excitedly imagine unlimited possibilities for human evolution.  Always there is this implication of fundamental change – a basic shift from some (previous) model of human experience to some new and {improved/different/worse} model. Yet it is this inevitable change itself which I find questionable.  I accept that technology is a magnifier of certain parts of the human experience – a greedy man can exploit more people through the mechanisms of a modern corporation than he may have been able to as the head of a guild;  the ubiquity of the internet allows people to spread thoughts or ideas faster than ever before; the mass production of goods has vastly widened the potential proliferation of certain products.  But I fail to see what has been fundamentally altered.  We are born, we live, we die.  Somewhere in between we probably love, and laugh, and hurt, and do all of those other things that people tend to do.

It’s undeniable that the advent of technology has been concomitant with sweeping changes in the structure of our societies and our interactions with our environment.  But I fear that we are missing the tree for its leaves.  It isn’t the factories that are polluting the Ganges – it’s the people who built them.  As far as I can tell, technology hasn’t much changed us – it has simply changed the scale at which we operate.   I remain unconvinced that a shift in the use of technology can redress the fundamental issues of human nature.  I don’t mean to imply that we should just give up and let what happens happen – there is something to be said for preventing the proliferation of nuclear weapons and other dangerous technology.  Exerting human agency over the use of the tools we build is not just important, it is essential.  But it seems disingenuous to locate the ills of human nature in the tools we use to act on it.  At one point in Dream Machines, Nelson asserts that we might not want computers that talk back to us.  I’d ask: “Does it really matter?”

Perhaps I’m missing something important.  Over the course of the semester, we have read a variety of views on what implications technology holds for the human race.  But what does it take to really, actually, change the human experience?  What does that mean? Are we fundamentally different than we were in centuries past?  Are we counting down the days to the technological singularity, or are mass production, industrialization, and personal computing simply a few more incremental changes to the environment in which we continue to be human?

The Inevitability of Robot Teachers

I really want to engage when writers talk about computers in education. I really want to believe that their opinions about computers in the classroom, and in education generally, are relevant and matter. However, I just can’t get past my predilection to view the subject through the lens of Marxist critique. Every time I read someone talking about education and computers, especially MOOCs, I know the only reason the subject is being explored is that we don’t like paying teachers.

I’m reminded of a quote from Schumacher’s “Buddhist Economics”:
“From the point of view of the employer, [Labour] is in any case simply an item of cost, to be reduced to a minimum if it cannot be eliminated altogether, say, by automation.”

There would be amazing possibilities if we could discover the perfect mixture of computers and education. But as a society we never will. The motivation behind deploying computers in education is always about money, not quality. As long as that motivation prevails, the best intentions of all are pointless.

The ‘why’ of deploying computers in education outranks the ‘how’ in matters of public policy. It doesn’t matter how we go about deploying technology in classrooms; we’re going to deploy it because teachers and professors cost too much. We require the ‘how’ only to placate our misgivings about the ‘why’.

If the MIDS program at the I School is unprofitable, it will be cancelled in a year. If it turns out to produce incompetent graduates (not that we’ll ever really know this), it will not be cancelled. This is all we need to know to understand the underlying motivation of programs like MIDS, and this motivation underlies most attempts at justifying the deployment of computing technology in classrooms. It’s the motivation that matters. Once we accept the ‘why’ as inevitable, we just start inventing and concocting narratives and plans to address the ‘how’ until something sticks and we’re placated.

So when I read Ted Nelson’s critique of CAI, it seems pointless. Arguing about ‘how’ we deploy technology in classrooms is just splitting hairs. The real question we should ask as a society is: if we consider education so important, why are we constantly trying to do it on the cheap? Or, to again go back to Schumacher: if we value our education so much, why are we constantly trying to rid ourselves of educators?

Deep Thoughts

Some really great reflections here! I think we are finally getting to the heart of it, if that’s even possible.

Zack asks whether there is any absolute measure of “value”, of “goodness”. Have we become less human, or has the experience of being human simply changed? What does it even mean for something to be human – and isn’t the definition of that constantly in flux, especially given our ability to use tools and to change our environment to suit us? Isn’t that the very basis of civilization?

Jordan asks: why all of the focus on the poor? Given that it is the wealthy who mainly use (and abuse) technology, aren’t they the ones who really need this wisdom?

Andrew asks whether it is possible to decouple economic growth from ecological devastation. Is increased efficiency the answer? Or do we need to change what we measure when we consider growth in the first place?

Nick connects Schumacher’s design values to the augmentation vs. automation discourse of Engelbart, and asks which of these modern HCI and computing technologies have achieved. Andrew and Mike echo similar thoughts regarding the distinction between tools and machines. Mike also wonders whether Schumacher’s views can be decoupled from religious and spiritual thought, or whether even an atheist could agree.

Ajay considers the tension between technocratic, utilitarian goals at scale and local empowerment. Is it possible to achieve both at the same time? And if not, which should we prioritize? I would add to this discussion a reflection on why the Appropriate Technology movement, with all of its philosophical appeal, has not really taken off in the way Schumacher would have hoped. Is this a failure of economics, or of human values, or both?

Excited for tomorrow’s discussion!

Use that Greed

Schumacher references Gandhi as saying “[e]arth provides enough to satisfy every man’s need, but not for every man’s greed.” Greed is looked at through a materialistic lens (the primary concern here is natural resources), but in doing so Schumacher denounces all greed. Similarly, he espouses the goal of having “enough.” Not only do these values conflict with human nature, but they also fail to recognize how greed and dissatisfaction can contribute positively to existence.


I don’t think greed always translates to material wealth or always has to create envy. There can be a greed for knowledge, truth, and understanding. Our intense curiosity about the world is fueled by our tendency never to be satisfied, which compels us not to settle for our current understanding but to keep gathering knowledge. This nature contributes deeply to the process of finding meaning, as we make deeper and wider progress toward truth.


With this in mind, and with an attitude of acceptance toward the duality of human nature, are there ways of cultivating our greed that result in a happier, more spiritual, and more balanced life?


It makes me think of the not-so-inaccurate stereotype of environmentalists trying to out-green each other in their greed for prestige. Possibly gamification could play a role. There are applications that directly contribute to development, like freerice.com, which tugs at our greed to win as we fill up our rice bowl. Could this work in more standard industries? Other ideas?

Tools v. Machines

1) Schumacher’s economic ideals, and his views on mechanization, are intertwined with principles taken from, or inspired by, Buddhist ideas. He states that “[t]he choice of Buddhism for this purpose is purely incidental; the teachings of Christianity, Islam, or Judaism could have been used just as well as those of any other of the great Eastern traditions.” Is religion just one means of getting to Schumacher’s end or is it necessary? Could atheist principles lead us there too, or does atheism lack something? Certainly Schumacher’s economics relies on a form of mild asceticism that is consistent with certain religious teachings, but what about his views on mechanization and the distinction between a tool and a machine?

2) Some of Schumacher’s arguments appear to rely on a form of technological determinism. See, for example, this paragraph at page 101:
“Strange to say, technology, although of course the product of man, tends to develop by its own laws and principles, and these are very different from those of human nature or of living nature in general. Nature always, so to speak, knows where and when to stop. … [T]he system of nature, of which man is a part, tends to be self-balancing, self-adjusting, self-cleansing. Not so with technology, or perhaps I should say: not so with man dominated by technology and specialisation. Technology recognises no self-limiting principle – in terms, for instance, of size, speed, or violence. It therefore does not possess the virtues of being self-balancing, self-adjusting, and self-cleansing. In the subtle system of nature, technology, and in particular the super-technology of the modern world, acts like a foreign body, and there are now numerous signs of rejection.”

Yet underlying Schumacher’s main thesis and main prescription for the future is an assumption that we do, in fact, have control over the development of technology and a normative conclusion that we should aim to develop it in a certain way. How can this apparent contradiction be resolved?

Polanyi and Schumacher

With its latest .org venture, Google just announced it will try to buy back its brand loyalty from the Bay Area after being targeted as the symbol of gentrification.  The judging criteria are:

  1. Community impact
  2. Innovation
  3. Scalability
  4. Feasibility


How would Schumacher lay out the four criteria?  (Choose your own)

  1. Community obviousness
  2. Cheapness
  3. Un-scalable
  4. Creativity inducing

Why must an idea scale to be valuable?  Isn’t too big a scale just the problem?  Karl Polanyi, a political economist who is in vogue right now, suggests that the problem with capitalism is that our social values have been stripped from economic relations, and, in turn, that they can be re-embedded, thereby preventing the eventual ruin of all natural resources in a self-regulated market.

Can Schumacher’s values provide a roadmap for this re-embedding?


Carpet Looms, Power Looms

I found a common notion between Schumacher and Gandhi regarding their perspectives on technology. In describing Buddhist economics, Schumacher differentiates between the “carpet loom” and the “power loom”. The carpet loom is a “tool” because it supplements the use of our own body for work. The power loom is a “machine” because it replaces our body for work; it is a “destroyer of culture” because it does the “human” part of work. In light of Buddhist economics, we need this “human” part of work because work and leisure are complementary parts of the same living process and must not be separated.

In a similar vein, Gandhi notes that “The machine should not be allowed to cripple the limbs of man”. He frowns upon cars because they do not satisfy “the primary wants of man”. True happiness, instead, arises from the proper use of “hands and feet”. He cautions against becoming a slave to the machine and losing one’s “moral fibre”.

While Schumacher describes how the carpet loom is seen as a helpful “tool”, Gandhi similarly compliments the sewing machine as a fine technology. On the other end of the technological spectrum, Schumacher notes how (under the microscope of Buddhist economics) the power loom is a harmful “machine”, and Gandhi deems cars unnecessary. Both discuss how machines can negatively affect culture and society by replacing the valuable work of people. The proper use of the human body and the value of good work seem to be the common denominators.

I know I discussed it above, but I thought this quote was too good to not post below:

“The carpet loom is a tool, a contrivance for holding warp threads at a stretch for the pile to be woven round them by the craftsmen’s fingers; but the power loom is a machine, and its significance as a destroyer of culture lies in the fact that it does the essentially human part of the work.”

1) When does a positive “tool” become a negative “machine”?
2) What is the “human” part of our work as technology workers? How can we make it more “human”?

scientists never tire of telling us that the fruits of their labors are ‘neutral’

The Buddhist point of view takes the function of work to be at least threefold: to give a man a chance to utilise and develop his faculties; to enable him to overcome his egocentredness by joining with other people in a common task; and to bring forth the goods and services needed for a becoming existence.

this is the closest thing i’ve read to the qualification we developed last class. or, at least, i’m tempted to say that our qualifications apply.

the first qualification – “a chance to utilise and develop his faculties” – reminds me of engelbart and licklider. he continues,

there are therefore two types of mechanisation which must be clearly distinguished: one that enhances a man’s skill and power and one that turns the work of man over to a mechanical slave, leaving man in a position of having to serve the slave.

i’m wondering which of these, if not both, modern human-computer interaction has achieved?