The first thing that came to my mind when I read McCullough’s assertion that the computer is inherently a tool for the mind and not for the hands was a comment made by Steve Jobs during the early years of Apple – he said that the computer is the equivalent of a bicycle for our minds. While both statements reiterate that using computers helps us accomplish tasks more efficiently, even these personalities, with a constant finger on technology’s pulse, couldn’t have predicted the rate at which the boundaries between tactility and cognition would begin to blur. Our hands continue to move in accord with the mind, but the line between the two grows fainter every day.
The idea of our mental and tactile faculties converging has existed in theory for a while, but only recently have we begun to experience it with virtual reality apps like TiltBrush. It is easy to get excited about such technology because it clearly represents a radical leap forward. However, I feel that studying what’s on the horizon always helps put the current landscape in perspective. For instance, 3D TVs were all the rage when they came out, but it wasn’t until the Oculus Rift that users really began to feel the sense of ‘presence’ that content developers were aspiring to. Drawing an analogy, I would like to put VR in perspective by bringing in Project Soli.
Project Soli by Google uses an advanced radar sensor that can detect complex hand gestures and translate them into computer input. For instance, twisting a virtual knob in mid-air can actually increase or decrease the volume of a pair of Soli-enabled speakers. Similarly, rubbing the index finger against the thumb can trigger the grab-and-scroll interaction on a Soli-connected screen.
Sensors like these open up an unprecedented world of interactions and allow for richer, more authentic user experiences. A technology like Soli could actually untether us from bulky VR headsets and enrich our surroundings with more natural interactions. Going back to McCullough’s assertion, I reckon that the computer is slowly becoming an appendage for the hand. Looking ahead, we may not have to consciously retrieve steps from memory to accomplish day-to-day tasks. With a seemingly natural gesture of our hands, the computer shall do our bidding.
McCullough’s argument that computers are made for the mind and not for the hands is accurate enough to apply to most computing devices today. However, the past ten years have also seen growing awareness and steady improvement in making computers more adaptable and friendly to the rest of the human body. Take the iPhone as an example: Steve Jobs’s insistence that the iPhone be easy to use with our fingers was a huge step forward, forcing software designers to build their applications around how users physically interact with them. Old palmtop computers are considered failures because humans are not used to holding a pen when they interact with computers. So the claim that computers are made just for the mind should be considered rather outdated, since physical interaction with computers is quite different now thanks to touchscreen technology.
On the other hand, the past decade has also seen a countervailing movement that doubles down on making computers tools for the mind alone. Take Google Glass, virtual reality, and self-driving cars as examples. These technologies are created so our hands can do less labor; the interaction shifts away from the hands to the voice or to other bodily movements. Even though these technologies have yet to be widely adopted, the focus of computing is leaning toward serving humans’ most natural movements. For example, if a task can be completed most easily with our mouth, then the designer should strive to design the interaction around the mouth. Hands are definitely subtle and sensitive, but their affordances should also be considered, because not all tasks should rely on hands. It’s interesting to note that the iPad Pro has actually brought back the pen, precisely because our fingers aren’t precise enough for certain parts of the art-creation process. So McCullough’s view of the relationship between hands and art is also being constantly challenged as technology develops.
One UI—or rather, tool—that stands out for me is the Cintiq monitor: a monitor with a large touch-sensitive screen that graphic designers can use with a stylus. I came across it when I was working with motion graphic designers and animators, and they all wanted one. Our company had one, so the sketch artist got to use it. It’s large—I think the screen was about 20 or so inches diagonal—but it wasn’t so large that it couldn’t be nestled into the sketch artist’s lap when he was working carefully. Thus, the stylus was effectively a pen or a pencil, and the screen became the sketch artist’s paper. This tool created a process that aligned much more closely with historical hand-drawn art than anything that had come before. It allowed the sketch artist to draw without having to translate movements from his eyes to his hand. He could sketch directly on the surface and see results immediately, just as when sketching on paper.
This experience is entirely different from using a mouse. Although those of us here at the I School are intimately familiar with using a mouse or a trackpad, such that it is almost second nature, it remains difficult to draw or create digitally to the degree that skilled artists can on paper. However, the Cintiq display offers many of the affordances of paper, and thus differentiates itself from other methods of using the computer for visual creation. In addition, because it’s a digital medium, there are other benefits: the sketch artist could zoom in on his work to correct minor errors, easily erase mistakes, and create vector output of his sketches. This allows for subtlety in one’s work, as McCullough calls for.