Keyboard, Mouse, and “Jaedong’s Hands”

“Besides manual dexterity,” McCullough writes of becoming skilled on a computer, “you may feel some intellectual agility. You will learn to build mental models, and to switch frames of reference when necessary. You alertly monitor feedback from a variety of sources, and recognize and recover from errors before they compound themselves” (McCullough 26). In “Abstracting Craft,” McCullough identifies a problem of definition: to what extent can one consider “working on a computer,” or, better, “making work on a computer,” a kind of craft? The chapter concludes by suggesting that even though computation began as a discipline (seemingly) divorced from the hands, much of modern computer labor demands and fosters skills related to both physical dexterity and mental alacrity. The keyboard-mouse interface, often touted as less “natural” than other potential user interfaces like touchscreens (forgetting that there is no “natural” HCI, but only “naturalized” HCIs), can be mastered in such a way as to elevate the computer user to extraordinary levels of prestige and admiration.

Among the most extraordinary examples of wage-earning computer laborers who depend on mouse-keyboard dexterity are competitive gamers. “Jaedong,” a professional South Korean gamer who makes his living playing the game Starcraft (which conveniently has “craft” in its name), is featured in a YouTube video entitled “Jaedong’s Hands,” which boasts around half a million views and a 96% “like” ratio (demonstrating the affinity its viewers feel for its subject). The camera hovers on the gamer’s hands as he plays, never drifting up to show either the game or the gamer’s face, reveling in his speed and coordination. The meaning of the video is clear: it is an homage to the culmination of years of rigorous training, which manifests itself in a series of clicks and taps that register as entirely illegible when separated from the context of the game. Most of his clicks are the product of a kind of distal memory, in which the fingers seem to act of their own accord.

But what is being created when Jaedong plays Starcraft? Is it a performance piece? An athletic event? (Are athletic events craft?) Is it merely media content? Do two players play to yield a culminating image of victory and defeat, like a picture of a Go board after a difficult game?

Finally, there seems to be a medium-specific relationship in gaming between “skill-based games” and “casual games,” where each successive move from keyboard-mouse to game controller, from game controller to touch screen, from touch screen to VR, yields a lower and lower skill ceiling and less potential for at least one kind of mastery: the kind demonstrated by professional gamers. How will the most “skilled” VR pioneer behave? What will their movements look like? Is “richness” lost in the move to more “natural” methods of interacting with computers?

https://www.youtube.com/watch?v=glSiSoAounY

Beyond the canvas: Apple iPad

McCullough argues that hands are all about skilled learning, an underrated form of knowledge developed out of habit and practice. It is a physical and experiential knowledge, found in actions like drawing or typing. In these moments, particularly when the hand is the tool of the mind for creating forms of art like drawings, paintings, or written works, it becomes clear that the two are connected. He also discusses the deskilling that has taken place as a result of the advancement of technology (e.g., data entry jobs).

With the advent of TUIs like the iPad, the pendulum has begun to swing in the opposite direction. The connection between mind and hand has been reestablished to allow skilled actions and processes (like drawing with one’s finger or the Apple Pencil), transforming the device’s surface into a digital canvas. The goal of such devices, evidenced in their iconography, naming conventions, and app designs, is to replicate the real world in a digital environment. Given these affordances and new UIs, the hand has once again become a necessary and skillful tool.


Fiber Arts and VR

Creativity can come from different experiences, both tangible and imagined, but I believe that how one physically interacts with the world greatly shapes how one imagines the world in the mind. One needs the experience of touching and interacting with physical objects in order to shape what can be imagined. McCullough states in “Hands” that “learning through the hands shapes creativity itself,” and I feel this to be true.

In my experience working with fiber arts, I know each type of fiber to be different. Even when I feel like I know the material, each type of fiber still brings its own character to the piece and shapes it, something my mind could never predict until I physically interact with the fiber. My hands guide my mind through the experience and shape what is physically possible for my art (for example, thick wool in a tight knit yields more structure than stockinette stitch in a more elastic yarn).

I feel the same is true of the latest advancements in virtual reality, where we try to capture different textures and physical experiences in virtual space. I recently watched a Paper Fashion artist design a virtual reality dress (Video Demo Link, ~1:12:00), and it made me realize how difficult fabric and fibers are to predict in both the physical and the virtual world. As the artist created a dress in virtual reality, one could see her learning with each virtual textile she shaped with her hands. It was a new instantiation of “learning through the hands shapes creativity itself,” rather than of the computer being inherently a tool for the mind. In this VR space she was creating a new physical interaction through the computer, expanding what is imaginable and achievable in her art.


Siri, Voice UI, Mind UI and to Be Continued…

McCullough argued that “the computer is inherently a tool for the mind and not for the hands.” Twenty years later, we can extend his argument by saying that natural interfaces can be as close to our minds as our hands are.

Consider Siri as an example. We don’t necessarily need to use our hands at all in our interaction with Siri, because it is based on another part of our body: the voice, not hand-based interaction. It was created beyond the gesture-based conceptual constraints of the touch screen, by strengthening a long-existing interaction paradigm: voice input based on natural language. We don’t need to translate our mental signals into hand movements to interact with Siri’s UI; we can simply think out loud, read or listen to what Siri outputs, and respond in the same way.

Besides Siri, an even more extreme example along the same lines is the mind UI, where your neural signals can be literally translated into electronic signals that serve as inputs to an interaction system.

Now imagine ten years from now: what if Siri could talk to you as naturally as your classmates in TUI could? Then we could probably push McCullough’s argument even further by arguing that the computer is inherently a mimic of the human mind, built in a way that can logically think and communicate with its users, us human beings. In the end, there is no boundary to the potential of this technology; we simply need to believe in it.


TiltBrush

Hands, in their ability to touch, with fingertips serving as input data points, feed a constant stream of data to the brain, one enriched per the homunculus’s illustration. Not only are they sensitive, but they are the most adaptable limb, quick to contort or reposition themselves as reflexes or deliberate actions call for, and particularly useful in novel situations. In art-making, they study tangible landscapes or tools that can be manipulated and reshaped after feedback from the brain. They are part of a feedback loop that is crucial (arguably necessary) to the creative process. Without hands, it is very difficult to transform the mind’s abstract thought into a concrete reality, regardless of the medium. This point is also why a computer is inherently a tool for the mind, as it deals with abstract concepts and reshaped interactivity that doesn’t necessarily require hands to explore the unknown.

I think a great example of McCullough’s argument is TiltBrush, a 3D drawing environment within virtual reality. The computer, in this situation, is the software that, in a literal warped reality, allows you to execute otherwise traditional tasks (sculpting, drawing) with your hands via the controllers. Feedback from how the “ink” dispenses from the controller to paint or sculpt your work of art is rendered before you in this reality, allowing you to reprocess and alter your actions, either to explore things differently or to repeat them.

Wacom Cintiq

McCullough’s point is well-taken, but I wonder how much of it would apply to tools before the computer.

Many technological inventions have imposed implicit affordances and limitations of their own. These have also had a large impact on the initial design of computers and on how their user interfaces evolved.

Text, for instance, can be reproduced with a finite number of symbols, which lend themselves to technologies from printing blocks to typewriters to modern keyboards. Even the act of putting pen to paper involves, arguably, a curtailment of the full range of motion of the hands. Displays impose natural affordances and limitations of their own as well, being essentially 2D representations; the same might be said for paintings or drawings. There is an argument that tools like the computer simply carry on that longstanding tradition: differentiated symbols on a page and 2D representations, or texts and pictures, have seeped deeply into our social world over centuries and, now ubiquitous and unobserved, play their part in how we construct our social worlds.

That tradition can be argued to limit, but also to expand, human horizons and potentialities. The ambiguity of text, for example, allows interpretation and imagination by each individual reader.

In interface design, there has been a period of humans learning to accommodate tools (speed typists, hotkeys, general efficiency, even creativity), and now, perhaps, more of an adaptation of tools to human metaphors and capabilities. I am curious about approaches that concede that neither might be ideal or preferable, but that, in a neutral sense, both humans adapting to tools and tools adapting to humans might be used in different contexts, or in combination, to fulfill specific objectives.

For instance, the Wacom Cintiq drawing tablet seeks to combine digital affordances while leveraging an artist’s dexterity and subtlety. Programmable buttons on the side panel allow the artist to quickly call common functions, and even adapt them over time to a specific workflow, while a matte, pressure-sensitive surface offers more of the affordances of paper. Given that such tablets are largely used to produce 2D representations, this might be a natural fit (as opposed to, say, gesture or 3D motion tracking). An interface for creating sculpture or 3D environments might benefit from tools that utilize the full capabilities of our hands, as well as spatial positioning, such as in an immersive environment.

The handoff

The first thing that came to my mind when I read McCullough’s assertion that the computer is inherently a tool for the mind and not for the hands was a comment made by Steve Jobs during the early years of Apple: he said that the computer is the equivalent of a bicycle for our minds. While both statements reiterate that using computers helps us accomplish tasks more efficiently, even these personalities, with a constant finger on technology’s pulse, could not have predicted the rate at which the boundaries between tactility and cognition would begin to blur. Our hands continue to move in accord with the mind.

The idea of our mental and tangible faculties converging has existed in theory for a while, but only recently have we begun to experience it with virtual reality apps like TiltBrush. It is easy to get excited about such technology because it clearly represents a radical leap forward. However, I feel that studying what’s on the horizon always helps put the current landscape in perspective. For instance, 3D TVs were all the rage when they came out, but it wasn’t until the Oculus Rift that users really began feeling the sense of ‘presence’ that content developers were aspiring to. Drawing an analogy, I would like to put VR in perspective by bringing in Project Soli.

Project Soli by Google utilizes an advanced radar sensor capable of sensing complex hand gestures and translating them into computer input. For instance, twisting a virtual knob in mid-air can actually increase or decrease the volume of a pair of Soli-enabled speakers. Similarly, rubbing the index finger against the thumb can trigger a grab-and-scroll interaction on a Soli-connected screen.
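The interaction pattern described above can be sketched in miniature: a recognizer emits a gesture label plus a magnitude, and the application maps it to a control action. This is only an illustrative model in Python; Soli's actual SDK is not public here, and every name below (the dispatcher, the "knob_twist" label) is a hypothetical stand-in.

```python
# Hypothetical sketch of a Soli-style gesture pipeline: recognized
# gesture labels are routed to handler callbacks. Not Google's API.
from typing import Callable, Dict

class GestureDispatcher:
    """Routes recognized gesture labels to handler callbacks."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[float], None]] = {}

    def on(self, gesture: str, handler: Callable[[float], None]) -> None:
        self._handlers[gesture] = handler

    def dispatch(self, gesture: str, magnitude: float) -> None:
        # magnitude might encode, e.g., how far the virtual knob turned
        handler = self._handlers.get(gesture)
        if handler:
            handler(magnitude)

# Example: wire the mid-air knob twist to a speaker volume control.
volume = 50

def turn_knob(delta: float) -> None:
    global volume
    volume = max(0, min(100, volume + int(delta)))

dispatcher = GestureDispatcher()
dispatcher.on("knob_twist", turn_knob)

dispatcher.dispatch("knob_twist", +10)   # twist clockwise in mid-air
dispatcher.dispatch("knob_twist", -25)   # twist counter-clockwise
print(volume)  # prints 35
```

The point of the sketch is the decoupling: the radar layer only has to classify a gesture, while the semantics (volume, scrolling) stay in the application.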

Sensors like these open up an unprecedented world of interactions and allow for richer, more authentic user experiences. A technology like Soli could actually untether us from bulky VR headsets and enrich our surroundings with more natural interactions. Going back to McCullough’s assertion, I reckon that the computer is slowly becoming an appendage of the hand. Going forward, we may not have to consciously retrieve steps from memory to accomplish day-to-day tasks: with a seemingly natural gesture of our hands, the computer shall do our bidding.

The Shift Towards Natural Body Movement

McCullough’s argument that computers are made for the mind and not for the hands is accurate enough to apply to most computing devices today. However, increasing awareness and improvement over the past ten years have also made computers more adaptable and friendly to different parts of the human body. Take the iPhone as an example: Steve Jobs’s insistence that the iPhone be easily usable with our fingers was a huge step forward, forcing software designers to develop applications according to how users physically interact with them. Old palmtop computers are considered failures because humans are not used to holding a pen when interacting with computers. So the claim that computers are made just for the mind should be considered rather outdated, since physical interaction with computers is quite different now thanks to touchscreen technology.

On the other hand, there has also been a movement over the past decade to shift the focus away from making computers just for the mind. Take Google Glass, virtual reality, and self-driving cars as examples. These technologies are created so our hands can do less labor; the interaction is shifted away from the hands to the voice or to other bodily movement. Even though these technologies have yet to be widely adopted, the focus of computing is leaning toward being a tool for humans’ most natural movements. For example, if a task can be completed most easily with the mouth, then the designer should strive to design the interaction around the mouth. Hands are definitely subtle and sensitive, but the affordances of hands should also be considered, because not all tasks should rely on them. It’s interesting to note that the iPad Pro has actually brought back the pen, precisely because our fingers aren’t precise enough for certain art-creation processes. So McCullough’s view of the relationship between hands and art is also being constantly challenged as technology develops.

Musical Instrument and Drawing Apps

McCullough’s argument that “the computer is inherently a tool for the mind and not for the hands” resonates deeply with me and I have been reminded of it while encountering several UIs. Musical instrument apps and drawing apps are a few examples that come to mind immediately.


Upon trying to play music on a guitar app vs. a real guitar, and on a keyboard app vs. a real keyboard, I have experienced firsthand that hands are differentiated and very closely connected to the mind. The mind and hand perceive guitar strings very differently from the screen interaction of the guitar app. Similarly, the mind and hand have a very different experience when a key is pressed on a real keyboard vs. on the app. As McCullough says, “The knowledge is not only physical, but also experiential.”


A guitarist will tell you how losing a fingernail makes a difference, and why they obsess so much over adjusting the strings until they are just right; they probably won’t be able to explain very well why one pick is their favourite for outdoor performances and another for indoor performances. This exemplifies McCullough’s argument that “The way of hands is personal, contextual, indescribable.” While the string adjustment is scientifically explainable, the fact that a particular pick feels just right to the hand is personal, contextual, and very hard to explain, or, as McCullough says, ‘indescribable.’


A very similar argument applies to apps used for drawing. While they come very close to emulating strokes from different angles and pressure levels, the experience is very different from drawing on canvas. While the digital canvas can emulate the look of various canvas textures, the hand cannot feel the differences as it rests against the touch screen. Also, the drawing instrument (pen, pencil, brush, etc.) cannot experience the friction of different surfaces; the effect of such differences is created digitally, which is experientially quite different from how things work in the physical realm.


Deconstructing the lines between skills and GUI interaction?

McCullough brings a challenging and thought-provoking discussion of the line (be it defined or rather blurry) between 1.) skills, connected strongly to the hands, sharpened only by practice, and 2.) the simplified, “spoonfed” use of computers and their mice, keyboards and screens. To me, the most interesting complications with McCullough’s efforts to explain fundamental differences between the two lie in the ambiguities. Where does the line between manual skills and mind-driven computer interaction become less relevant, or less obvious?

In the example of the “computer graphics artisan,” the fact that this person’s eye is not on their hand, as it makes small and fast movements with the mouse or the keyboard, but instead on the screen, is the distinguishing factor. Sure, it is clear that the graphical and two dimensional feedback delivered by a computer screen is different than the textural feedback and sensual expertise developed by a sculptor or a painter. But what about the times when the graphical and sensual feedbacks are integrated in a symbiotic fashion?

A skill (that is most definitely a skill of both the hands and the body in the ways that McCullough has defined) that is near and dear to me is that of rowing a scull. Just as much an endeavor in art as in sport, my rowing experience (both coaching and as an athlete) came to mind. The rower, similarly to the piano player, cannot have the luxury of using his or her mind in full to complete the actions of the stroke (or the keystrokes). Over the course of a stroke, there are simply too many fine details in the movements, pressures and feelings in the fingertips and the soles of the feet, to be conscious of every action at once. As a result, much of the stroke must be committed to muscle memory, and based on sensation rather than cognition.

The art and skill of crew begin to creep across the line that McCullough drew when we add some of the newer technologies in the sport. For instance, modern rowing machines give instant feedback via a GUI that graphs the rower’s “power curve” over the course of a stroke. The shape, size, and duration of this curve can be read as a graphical representation of the feel-based skills of the rower. Even more recently, technologies like Rowing in Motion (https://www.rowinginmotion.com/) have begun to bring this sort of instant graphical feedback from the machine onto the water. The RiM app, for Android and iOS, rides in the boat on the rower’s phone and reports not just a power curve but other quantifications, such as a check factor (a measure of how quickly the rower changes direction from moving their body toward the stern, away from the finish line, to beginning to apply force to the bladeface) and an acceleration curve, all delivered to the phone screen in real time.
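To make the check-factor idea concrete, here is a toy calculation in Python. RiM's actual algorithms are proprietary, so the definition below (seconds of boat deceleration before acceleration turns positive at the catch) is a simplifying assumption, and the sample stroke data is invented for illustration.

```python
# Toy illustration of a stroke metric like those Rowing in Motion displays.
# "Check factor" is defined here, as a simplification, as the time the
# boat spends decelerating before the drive begins.

def check_factor(accel_samples, dt):
    """Seconds of deceleration before acceleration turns positive."""
    count = 0
    for a in accel_samples:
        if a < 0:       # boat still slowing: rower moving toward stern
            count += 1
        else:           # acceleration flips positive: force on the blade
            break
    return count * dt

# One simulated stroke, sampled at 50 Hz (dt = 0.02 s):
# four decelerating samples around the catch, then the drive.
stroke = [-0.8, -0.5, -0.2, -0.1, 0.3, 0.9, 1.4, 1.1, 0.6, 0.2]
print(check_factor(stroke, dt=0.02))  # prints 0.08
```

The same per-stroke windowing would feed the power and acceleration curves; the point is simply that a phone's accelerometer stream is enough to quantify what the rower otherwise only feels.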

In this case, the mind-oriented and the skill-based work together to improve each other. The rower can more effectively self-diagnose ways to improve his or her skills, and also use the digital feedback to better distinguish the sensations of habits that add to boat speed from those that would take away from it.
