Tag Archives: Nelson

A Garden of Forking Paths & You: What Technology Is, Is Not, and Might (Never) Be

Ed: I kind of just kept writing for longer than I expected.  Apologies for the long post! 

The tech community has taken and run with more than one Borges short story over the years (for another example of an influential story, see The Library of Babel).  And I can’t blame them: Borges writes some pretty cool stuff.  But in a world where the Web is ubiquitous, it’s easy to overlook the sweeping influence that The Garden of Forking Paths, and the system of hypertext it inspired, have had on the world.  By the same token, it’s easy to miss the distinctions between what hypertext is, what it was intended to be, and what it might (not) become.

The Garden of Forking Paths, itself, is far from a technical specification for hypertext.  It is a meditation on possible futures, forking timelines, agency, and an unfinished novel of infinite possibilities.  But it is precisely that novel, branching into all possible futures rather than flowing toward one or another, that is often credited as the inspiration for hypertext.  Indeed, Nelson’s proposed system, underneath the technical description, pretty closely mirrors the book in Borges’s story: an arbitrarily large collection of ordered lists of text (paths), arbitrarily linking to one another (branching), allowing lists to link or converge, and allowing a user to trace a relatively ordered path through a series of interconnected pieces of text.  Nelson’s vision is optimistic – writers will be able to drastically cut their workload, annotations built upon annotations will allow us to disambiguate our work to any desired level of precision, and changes made in one list will propagate effortlessly into anything related.

The reality of HTML and the world wide web is an echo of what Nelson envisioned.  Many of the pieces are there: the web is an enormously complex series of linked pages, the freedom to interconnect disparate pages laying the foundation for the unprecedented flow of information the internet has enabled around the world.  Yet for all the convenience the web has afforded us, it is difficult to say that much has significantly shifted about the way we compose or consume information.  As far as I’m aware, (long-form) authors have mostly moved from writing linear paper manuscripts to writing linear digital manuscripts.  Wikipedia, perhaps the poster child for a source of robust, interconnected information, feels more like a traditional encyclopedia with conveniently locatable citations than anything thoroughly different from its ink-and-paper forebears.  Though I might be fooled by my dearth of pre-internet experience, I cannot help but feel like I’m waiting for the other shoe to drop.

Nelson’s Dream Machines, as tangled, confusing, and immature as it can seem, provides a good counterpoint to the very same optimism which Nelson expresses in his earlier paper.  Much of the text is spent simply describing what we do and don’t know about computing, largely leaving the reader to draw conclusions.  But the overarching slant of the piece pushes back on the sort of technological futurism that we saw with Engelbart and Licklider.  Many ‘prophets’ of technology have forecast drastic shifts in humanity and technology as time passes – from Licklider’s human-computer symbiosis to Kurzweil’s singularity.  Technology and its supposedly relentless advance have at times been charted as some sort of linearly or exponentially accelerating force of nature (see this ridiculous graph, which equates the development of eukaryotic life and personal computers).  Similarly, Nelson points to what remains a chorus among many computer scientists: “We can’t do that yet.”  Early in the reading, Nelson makes a simple and obvious observation: perhaps the potential of technology is not so limitless as it is so often portrayed.

In many of the readings we have done to date, industrialization and the rise of technology have been portrayed as an inevitable and fundamentally transformative force.  Reactions to this vary – Gandhi argues for a return to the Charkha, Tagore points toward a loss of fundamental wholeness, Licklider and Engelbart excitedly imagine unlimited possibilities for human evolution.  Always there is this implication of fundamental change – a basic shift from some (previous) model of human experience to some new and {improved/different/worse} model. Yet it is this inevitable change itself which I find questionable.  I accept that technology is a magnifier of certain parts of the human experience – a greedy man can exploit more people through the mechanisms of a modern corporation than he may have been able to as the head of a guild;  the ubiquity of the internet allows people to spread thoughts or ideas faster than ever before; the mass production of goods has vastly widened the potential proliferation of certain products.  But I fail to see what has been fundamentally altered.  We are born, we live, we die.  Somewhere in between we probably love, and laugh, and hurt, and do all of those other things that people tend to do.

It’s undeniable that the advent of technology has come concomitant with sweeping changes in the structure of our societies and our interactions with our environment.  But I fear that we are missing the tree for its leaves.  It isn’t the factories that are polluting the Ganges – it’s the people who built them.  As far as I can tell, technology hasn’t much changed us – it has simply changed the scale at which we operate.  I remain unconvinced that a shift in the use of technology can redress the fundamental issues of human nature.  I don’t mean to imply that we should just give up and let what happens happen – there is something to be said for preventing the proliferation of nuclear weapons, or other dangerous technology.  Exerting human agency over the use of the tools we build is not just important, it is essential.  But it seems disingenuous to pin the ills of human nature on the tools we use to pursue them.  At one point in Dream Machines, Nelson asserts that we might not want computers that talk back to us.  I’d ask: “Does it really matter?”

Perhaps I’m missing something important.  Over the course of the semester, we have read a variety of views on what implications technology holds for the human race.  But what does it take to really, actually, change the human experience?  What does that mean? Are we fundamentally different than we were in centuries past?  Are we counting down the days to the technological singularity, or are mass production, industrialization, and personal computing simply a few more incremental changes to the environment in which we continue to be human?