Deconstructing the lines between skills and GUI interaction?

McCullough offers a challenging and thought-provoking discussion of the line (be it well defined or rather blurry) between (1) skills, connected strongly to the hands and sharpened only by practice, and (2) the simplified, “spoonfed” use of computers and their mice, keyboards, and screens. To me, the most interesting complications in McCullough’s effort to explain the fundamental differences between the two lie in the ambiguities. Where does the line between manual skills and mind-driven computer interaction become less relevant, or less obvious?

In the example of the “computer graphics artisan,” the distinguishing factor is that this person’s eye is not on the hand as it makes small, fast movements with the mouse or keyboard, but on the screen. It is clear that the graphical, two-dimensional feedback delivered by a computer screen differs from the textural feedback and sensual expertise developed by a sculptor or a painter. But what about the times when graphical and sensual feedback are integrated in a symbiotic fashion?

A skill that is near and dear to me, and most definitely a skill of both the hands and the body in the ways McCullough has defined, is rowing a scull. Just as much an endeavor in art as in sport, my rowing experience (as both a coach and an athlete) came to mind. The rower, like the piano player, does not have the luxury of using his or her mind in full to complete the actions of the stroke (or the keystrokes). Over the course of a stroke there are simply too many fine details in the movements, pressures, and feelings in the fingertips and the soles of the feet to be conscious of every action at once. As a result, much of the stroke must be committed to muscle memory and based on sensation rather than cognition.

The art and skill of crew begins to creep across the line that McCullough drew when we add some of the newer technologies in the sport. For instance, modern rowing machines give instant feedback via a GUI that graphs the rower’s “power curve” over the course of a stroke. The shape, size, and duration of this curve serve as a graphical representation of the feel-based skills of the rower. Even more recently, technologies like Rowing in Motion (https://www.rowinginmotion.com/) have begun to bring this sort of instant graphical feedback from the machine to the water. The RiM app, for Android and iOS, rides along in the boat on the rower’s phone and reports not just a power curve but other quantifications as well, such as a check factor (a measure of how quickly the rower changes direction from moving their body toward the stern, away from the finish line, to beginning to apply force to the blade face) and an acceleration curve, all delivered to the phone screen in real time.
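RiM’s actual algorithms aren’t published in this post, so purely as an illustration, here is a minimal sketch of the kind of computation such an app might run over the phone’s boat-axis accelerometer stream. The function names, the 50 Hz sample rate, and this particular check-factor definition are my own assumptions, not RiM’s.

```python
# Illustrative sketch only: RiM's real metrics are proprietary.
# Assumes boat-axis acceleration samples (m/s^2) at a fixed rate,
# already segmented into a single stroke.

def acceleration_curve(samples):
    """The per-stroke acceleration curve is just the raw samples
    over time; pair each sample with its timestamp for plotting."""
    dt = 1.0 / 50.0  # assume a 50 Hz sensor
    return [(i * dt, a) for i, a in enumerate(samples)]

def check_factor(samples):
    """Crude proxy for 'check': the deepest deceleration spike
    around the catch, where the crew reverses direction."""
    return abs(min(samples))

stroke = [0.4, 0.1, -0.9, -1.6, -0.8, 0.7, 1.9, 2.3, 1.5, 0.6]
print(check_factor(stroke))            # 1.6
print(acceleration_curve(stroke)[:3])  # first three (time, accel) pairs
```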

In this case, the mind-oriented and the skill-based work together to improve each other. The rower can more effectively self-diagnose ways to improve his or her skills, and also use the digital feedback to better distinguish the sensations of habits that add to boat speed from those that would take away from it.


Fingernail UIs

I was recently at the Berkeley Center for New Media Open House, where a student was working on fingernails that can display digital information, such as dots for numbers or ambient light. When asked how the fingernails might be used in the world, she gave the example of holding your hand up to a dishwasher and the fingernails lighting up as ambient information indicating whether or not the dishes are clean: essentially, using fingers to physically touch the world around us and gain additional digital information. The fingernails immediately came to mind when reading McCullough’s piece, because they are such a beautiful extension of the hand.

As McCullough describes, the hand is so closely connected to the mind that placing a digital object on the hand, extending its ability to sense and perceive the world around us, seems like a logical choice. I like that the fingernails require placing one’s hand on physical objects to sense this additional digital information, which is in line with McCullough’s arguments. The ambient light on the fingernails, however, requires a secondary sense: sight. To align more closely with McCullough’s argument, it might be more fitting for the digital fingernails to output another form of information, perhaps a buzz or a change in temperature. This would leverage our body’s innate ability to feel and sense through our fingers (e.g., information conveyed when the left thumbnail changes temperature, which our brains may pick up faster and distinguish more readily than the left thumbnail lighting up).

Being the Machine

While I was reading the chapter, the first thing that came to mind was Laura Devendorf’s work “Being the Machine.” This work is about an alternative way of 3D printing in which a person plays the role of the 3D printer. By doing so, not only is the intimate human-material relationship preserved, but so is the pleasure that goes along with the process of creating a handcrafted object (McCullough, p. 10). In Laura’s own words:

“Being the Machine is an alternative 3D printer that operates in terms of negotiation rather than delegation. It takes the instructions typically provided to 3D printers and presents them to human makers to follow – essentially creating a system for 3D printing by hand with whatever tools and materials one deems necessary. It works like a 3D version of a game of connect the dots: a 3D model is uploaded and sent to the printer, the printer draws a single laser point where the user should lay down their material, and as the laser point moves, the user follows, manually drawing the paths and layers until their model is complete. The system makes no attempt to guide the maker or tell them how to be more precise or accurate; it simply presents the moves the machine would perform and asks the maker to take it from there.” (http://artfordorks.com/2014/06/being-the-machine/)
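To make the “connect the dots” idea concrete, here is a toy sketch (not Devendorf’s code) of the core translation step: reading the movement commands a 3D printer would execute from G-code and reducing them to the ordered points a guiding laser could step through.

```python
# Toy illustration of the general idea: reduce a printer's G0/G1
# movement commands to the ordered waypoints a human maker would
# follow, one laser dot at a time.

def gcode_to_waypoints(lines):
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    waypoints = []
    for line in lines:
        parts = line.split(";")[0].split()  # strip trailing comments
        if not parts or parts[0] not in ("G0", "G1"):
            continue  # ignore non-movement commands
        for word in parts[1:]:
            axis, value = word[0], word[1:]
            if axis in pos:                 # skip extrusion/feed words
                pos[axis] = float(value)
        waypoints.append((pos["X"], pos["Y"], pos["Z"]))
    return waypoints

layer = ["G1 X10 Y0 Z0.2", "G1 X10 Y10", "G1 X0 Y10 ; back edge"]
for point in gcode_to_waypoints(layer):
    print("lay material at", point)
```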

I think this is one of many examples showing that the advancement of abstraction and the improvement of human (hand) skill are not necessarily mutually exclusive, as McCullough might have feared.

Tactile texture display: what you see is what you feel

McCullough argued 20 years ago that “the computer is inherently a tool for the mind and not for the hands,” and the argument is still valid today. Although computers have become more versatile in size and more powerful in computing speed, interaction between humans and computers is still achieved mainly through the traditional graphical user interface (GUI). Even with the ubiquitous presence of touch-screen phones all over the world, touch technology remains underdeveloped, limited to little more than point and drag. As McCullough pointed out, “the much ballyhooed ‘look and feel’ of contemporary computing is almost all look and hardly any feel.”

I had a chance to visit the Human-Computer Interaction Institute at Carnegie Mellon University (CMU) early this year. At CMU, a tactile texture screen project addresses McCullough’s point about enriching the touch experience of human-computer interaction. The researchers developed a new technology that brings rich, dynamic tactile feedback to touchscreens, allowing users to actually “feel” virtual elements. It is based on the electrovibration principle, which generates friction between the conductive touch interface and the skin of a finger moving across the screen, creating a variety of tactile sensations, such as sticky, rubbery, or bumpy, at the fingertips. As the project’s webpage indicates, in combination with an interactive graphical display, this technology enables touch experiences with rich textures and physical affordances.
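As a rough illustration of the principle (not CMU’s implementation), the control loop can be imagined as sampling a grayscale “friction map” at the finger’s position and scaling the amplitude of the AC drive on the screen electrode accordingly. The hardware call below is a hypothetical placeholder, and the voltages are arbitrary.

```python
# Minimal sketch of the electrovibration idea. A grayscale friction
# map is sampled at the finger's position; the value scales the AC
# drive amplitude on the screen electrode. Stronger electrostatic
# attraction -> more friction under the sliding finger -> that spot
# "feels" stickier or bumpier.

FRICTION_MAP = [
    [0.0, 0.2, 0.9],   # one row of the texture: smooth -> sticky
    [0.1, 0.5, 1.0],
]
MAX_VOLTS = 100.0      # arbitrary illustrative amplitude

def set_drive_amplitude(volts):
    print(f"drive electrode at {volts:.0f} V")  # placeholder for hardware

def on_finger_move(row, col):
    set_drive_amplitude(FRICTION_MAP[row][col] * MAX_VOLTS)

on_finger_move(0, 0)   # smooth region
on_finger_move(1, 2)   # sticky region
```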

Touch brings a unique experience when we interact with the world. We enjoy the feeling of different tactile textures, and we explore properties of objects that cannot be perceived otherwise. CMU’s electrostatic vibration technology comes very close to McCullough’s vision that “human-computer interaction is evolving toward much more satisfactory haptic engagement.” In particular, the technology could benefit visually impaired people, who currently cannot fully enjoy touch-screen technology. I truly hope this kind of technology will be implemented in our daily lives soon.

Diagramming with both hands

My computing experience that came closest to fully utilizing my hands was working in OmniGraffle while using both a mouse and a MacBook Pro trackpad. OmniGraffle, a diagramming software application, was probably never intended to be used with both a mouse and a trackpad. However, both panning and zooming require holding extra keys while moving the mouse. Instead, I end up using one hand to pan with the trackpad while the other hand zooms and manipulates objects with the mouse.

This works well for moving around a canvas but fails to add feedback when working on an item. The newest MacBook Pros use haptic feedback to give a clicking sensation, and the same mechanism could deliver other kinds of feedback. A similar feature would be needed in the mouse, and it is now available in some gaming mice. With these controls, it would be possible to give force feedback when items in a design are properly aligned, complementing the existing visual cues, as in the sketch below. Tactile feedback has been available on mobile devices for a long time, but tactile feedback in mice has never become a feature with enough hardware or software support.
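Here is a sketch of that idea, under the assumption that a plugin can read an item’s position during a drag and command a pulse on a haptic mouse; the pulse() call below is a hypothetical stand-in, not any real mouse API.

```python
# Sketch: fire tactile feedback when a dragged item lines up with an
# alignment guide, complementing the usual visual snap lines.

SNAP_TOLERANCE = 2.0  # pixels

def pulse():
    print("haptic click")  # placeholder for a force-feedback command

def on_drag(item_x, guide_xs):
    """Check the dragged item's x position against known guides;
    pulse and snap when it falls within tolerance."""
    for guide in guide_xs:
        if abs(item_x - guide) <= SNAP_TOLERANCE:
            pulse()
            return guide   # snap to the guide
    return item_x          # no alignment, no feedback

print(on_drag(99.0, [100.0, 200.0]))   # clicks, snaps to 100.0
print(on_drag(150.0, [100.0, 200.0]))  # silent, stays at 150.0
```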

Sketch, a similar application that has made extensions easy to build, could probably have an extension written to add this force feedback. Now I just need an expensive mouse and time to program the extension.

Autodesk Fusion 360

Autodesk Fusion 360 is the closest UI that comes to mind for bridging the tangible and the digital through 3D printing. It still shares a great deal of its UI with purely digital programs like Paint or Illustrator: users select, resize, and click, and can quickly undo changes they don’t like. However, it also asks users to pretend they are working within a 3D realm. Clicking extrude gives a 2D profile depth, simulating the effect of building up layers. Toggling effects like bevel smooths corners into rounded edges. With a radius and a click, we can stamp a hole in our work.
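One way to see why undo is so cheap in this kind of tool: feature-based CAD treats the part as a replayable list of operations. The toy model below is my own illustration of that idea, not Fusion 360’s actual API.

```python
# Toy model of feature-based CAD: the part is a replayable list of
# operations, so "undo" just drops the last feature and re-evaluates.

part = []

def extrude(profile, depth):
    part.append(("extrude", profile, depth))

def fillet(edges, radius):
    part.append(("fillet", edges, radius))

def hole(center, radius):
    part.append(("hole", center, radius))

extrude("base-sketch", 10.0)   # give the 2D sketch a third dimension
fillet("top-edges", 2.0)       # bevel corners into rounded edges
hole((5.0, 5.0), 1.5)          # a radius and a click stamp a hole

part.pop()                     # undo: replaying the list omits the hole
for op in part:
    print(op)
```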

After the model is sliced into printer instructions and we print, we can see the extension of our creativity through the hot tip of the extruder. It acts not in lieu of us but with us, applying a mechanical hand driven by digital instructions originating in our minds. In a sense, this UI both complements and goes against McCullough’s arguments. Just as industrial design fused artistic expression with technology, so do we when we 3D print. At the same time, his argument that software is currently merely the labor of our minds is not disproven by Autodesk Fusion 360.

If I could redesign aspects of the software, and if I had the technical and scientific chops, I would consider adding a holographic projection mode. That way, instead of dragging the mouse around to “camera/zoom” my view of the model, I would be able to circle around it and observe it as if it were physical. Perhaps the software could also be improved with functional phicons, like a knife that could digitally slice the model. This knife would be analogous to the activeLENS from metaDESK, which emulated a jeweler’s magnifying glass.

The last thing I attempted to make in Autodesk Fusion 360 was Stitch from Lilo & Stitch. Obviously I wasn’t getting too far, haha.

Multimodal Interactions and Passwords

To me, passwords are no longer just an ordered list of characters. My increasing proficiency with a standard US keyboard has tied the motion of my hands to the position and combination of the keys that generate the password. That muscle memory has held me in good stead, letting me remember passwords across various websites and type them out effortlessly. However, when I need to type a password to log into a service on my smartphone, the position and combination of the keys on the touch keyboard are altered, breaking my flow. What follows is visualizing the password as a string of keys, which often means re-imagining my hand movements on the standard physical keyboard.

This resonates strongly, almost ironically, with McCullough’s point about computers being a tool for the mind and not for the hands. Keyboard-based computers are accused of restricting what our hands can do, and touch-based devices seem to have freed our hands’ range of motion; yet touch devices in this example push a cognitive responsibility back onto the mind, and the hands play a subservient role again. This trend may not last long, though: emerging technologies like replacing your password with your brainwaves could bypass the hands altogether, and workarounds like Slack emailing a ‘magic link’ to your inbox already exist. That is just as well, since typing a password isn’t exactly pleasurable, but it illustrates how important hands are in sparing the mind from mundane tasks.

As far as experiences that revive touch and the movement of hands go, virtual reality apps like BowSlinger in The Lab (played on an HTC Vive) and Google’s Tilt Brush are on the right track to restore the hands to a more generative role. The former couples the pull of the arrow against the bow with the sound of the string stretching, a multimodal interaction that gives you the illusion of tension in the string, and it certainly engages the hands more actively than typing out passwords. The technology may not yet be advanced enough to do away with the Vive hand controllers, which hold the hand in a semi-closed position and include non-intuitive elements like a trackpad near the thumb, but we may yet see devices that provide feedback to various positions on our palms without forcing the hand into one posture, and that create resistance when needed. For example, would you be able to feel the weight of a heavy object if you tried to lift it in virtual reality? Or would sensors on your palms recreate the feel of petting a dog?

Creating experiences the mind desires: HTC Vive

I believe that hands are an extension of the mind, not just closely connected to it, as stated by McCullough (1996). Many of the experiences our mind creates are based on our senses, and touch is a major aspect of them. Not only does the mind use our sense of touch to make sense of our environment, but when the mind wanders and creates experiences, it is our hands that reach out first, which ties in closely with the concept McCullough describes.

In a way, our hands let us create whatever the mind desires. A beautiful point that comes across in the reading is that ‘Hands also discover and are themselves led into exploration’. So though the computer is inherently a tool for the mind, hands are the medium that helps us explore the world through the tool. A perfect example of such an interface is the HTC Vive (particularly with applications like Google Tilt Brush).

Tilt Brush lets you paint in 3D space in virtual reality. The world is set up as our canvas, and the only limitation is what our mind can imagine. The interface has been designed to be as fluent and natural as our own movement; only a few of its interactions require a gesture or input method that is not intuitive for the hand. As a user, I am allowed to use my hands to draw and manipulate in a 3D environment. And since it is virtual reality, the input and output feel more connected. Think of those times when you want to create something and unconsciously wave your hands around, imagining how it would be created and what the output would look like; the Vive gets you there. It lets the connection between the mind and the hand do what it was meant to do.

Preserving the Human Element

In 1996, McCullough argued that computers are “inherently a tool for the mind and not for the hands” (McCullough, 7). And, 20 years later, his theory doesn’t deviate far from the truth. While computer technologies have advanced substantially since his article was published, a majority of the work we do using computers still remains an intellectual task.

Currently, I’m in a position where I’m able to interact with new UIs, tools, and applications, and I agree with McCullough’s prescient statement. Photoshop, for example, is a tool that many graphic artists and designers use to bring their visions to life. However, using a product like Photoshop begins as, and remains, an intellectual endeavor. The only role hands play in the creation of digital art is clicking and moving the mouse.

Imagine creating a sphere or a cube in Photoshop. First, you need to create the basic shape. Then you need to figure out how to make a 2D object appear to have three dimensions: shading the image properly and creating perspective with lines or curves. This entire process is an intellectual pursuit, relying solely on thinking through the problem.
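To make that point concrete, here is a back-of-the-envelope sketch of what “shading the sphere properly” asks the mind to work out: for each pixel inside a circle, recover the implied surface normal and compute Lambert-style brightness from a chosen light direction. The light direction and character ramp are arbitrary choices.

```python
# Render an ASCII-shaded sphere: the "3rd dimension" is pure computation.
import math

SIZE, RADIUS = 21, 9.0
LIGHT = (-0.5, -0.5, 0.7)          # light from roughly the upper left
norm = math.sqrt(sum(c * c for c in LIGHT))
LIGHT = tuple(c / norm for c in LIGHT)
SHADES = " .:-=+*#%@"              # dark -> bright

for row in range(SIZE):
    line = ""
    for col in range(SIZE):
        x = (col - SIZE // 2) / RADIUS
        y = (row - SIZE // 2) / RADIUS
        if x * x + y * y > 1.0:
            line += " "            # outside the circle
            continue
        z = math.sqrt(1.0 - x * x - y * y)   # sphere surface normal
        lum = max(0.0, x * LIGHT[0] + y * LIGHT[1] + z * LIGHT[2])
        line += SHADES[int(lum * (len(SHADES) - 1))]
    print(line)
```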

However, if you were using clay to make a sphere, you would be able to feel corners or spikes in the clay and round them out by smoothing the edges with your thumb. You would also feel the cool temperature and slippery character of the clay, and you would sense the roundness and weight of the shape. Furthermore, you would be able to feel and guide the formation of the sphere, creating an intimate connection with the piece. Designing with clay or any other physical medium engages one’s senses and creates an emotional connection with the final product.

While digital tools may afford us various advantages like speed, efficiency, and the ability to make changes quickly, there is a sterility in the perfection of the final product. Perhaps the most ironic part about hand-made, tangible products is that their unique characteristics, the ones McCullough wants to preserve, are actually intangible qualities: a human’s touch and a human’s mastery of a skill. As McCullough said, “…hands bring us knowledge of the world. Hands feel. They probe. They practice. They give us sense, as in good common sense…” (McCullough, 1).


Digital sphere (left); sphere made of clay (right)

(Photo source: https://i.ytimg.com/vi/nZsMgWnyhf4/maxresdefault.jpg, https://i.ytimg.com/vi/HWvxrzQGtA0/hqdefault.jpg)

David Hockney’s iPad drawings

In my personal experience with drawing apps for tablets, the response to touch on the surface of the tablet (the would-be stand-in for a traditional drawing substrate) is quite impressive. The drawn markings reflect the amount of pressure I use: darker, more opaque, and thicker than in areas where I use a lighter touch. I can also layer colors at varying transparency, mimicking different materials such as watercolor versus pastels or markers. The result is not too unlike a drawing made with actual materials.
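The mapping behind that behavior is straightforward to sketch. The curve below is a plausible illustration of the common approach, not any particular app’s code; the base width and gamma value are my own assumptions.

```python
# Sketch: pressure in [0, 1] modulates both stroke width and opacity,
# so a heavier touch reads as darker, more opaque, and thicker.

BASE_WIDTH = 8.0   # px at full pressure
GAMMA = 1.5        # >1 makes light touches even lighter

def brush_params(pressure):
    p = max(0.0, min(1.0, pressure)) ** GAMMA
    width = BASE_WIDTH * (0.2 + 0.8 * p)   # never collapse to zero width
    opacity = 0.1 + 0.9 * p
    return width, opacity

for pressure in (0.1, 0.5, 1.0):
    w, a = brush_params(pressure)
    print(f"pressure {pressure:.1f} -> width {w:.1f}px, opacity {a:.2f}")
```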

Drawing is an exercise in coordinating the hand with the eye. The first sketch tends to be less dependent on pressure, being mostly lines, but to develop the story within that sketch, emotion needs to be evoked, through color or through a sense of the physical evidence left by the drawer.

Although I have not personally used a drawing app for tablets to any great extent, I had the opportunity to witness the result of extensive exploration of the medium at an exhibit of David Hockney’s work at San Francisco’s de Young Museum a few years back. What distinguishes this body of work, titled “A Bigger Exhibition,” is that several of the pieces were created entirely digitally, using a humble drawing app on his iPhone and, later, his iPad.

I refer to David Hockney’s work here because his time investment in the medium reveals a unique level of craftsmanship. He started doodling with the Brushes app in 2008, when he discovered it on his iPhone. He has since graduated to using an iPad, which has a larger surface area and allows him to use “more of his fingers.”

This is an example of a medium changing the end product of a craft and altering the possibilities of storytelling experience. The artwork can be presented in two forms: digital or traditionally printed as ink on paper. These two forms differ greatly in their experiential quality — the colors are different because they are based in different technologies: RGB for lights, and CMYK for ink on paper. The works shown digitally have an inherent luminosity because of the nature of the media; the screens themselves are backlit.

The other experiential quality of these works is the ease of enlarging their scale of presentation. They can be enlarged digitally and printed on large-scale fabrics, or mapped onto several monitors mounted on a wall. With traditional substrates such as paper or canvas, the physicality of the object does not allow a change in scale unless the drawing itself is painstakingly replicated, using the grid method, on another object, which would also have to be the size of the intended final presentation.

In the digital medium we have a limitation called “pixels,” the fundamental unit of visual information. All evidence of scaling depends on the finite resolution of the file itself; the more the work is enlarged, the more visible these pixels become. The experience of the viewer varies according to the obviousness of these building-block units. A viewer standing at a greater distance has a greater illusion of continuity of form and color, whereas up close, the viewer sees the evidence of this illusion created with blocks of solid color. There is a degree of revelation of the medium itself involved in the viewing experience. The viewer is reminded of the medium, but this is not much different from witnessing gobs of paint up close on a canvas surface. The medium may have changed, and the process of craftsmanship may have changed, but the larger concept of crafting a story with available means remains the same.
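For concreteness, the distance effect can be stated with a little optics, assuming normal visual acuity of roughly one arcminute; the numbers here are illustrative, not from the exhibit. A pixel of size $p$ viewed from distance $d$ subtends an angle

$$\theta = 2\arctan\!\left(\frac{p}{2d}\right) \approx \frac{p}{d},$$

and the blocks blend into continuous form once $\theta$ falls below about one arcminute ($\approx 2.9\times10^{-4}$ rad), i.e. when $d \gtrsim p/(2.9\times10^{-4}) \approx 3400\,p$. A 1 mm pixel thus “disappears” at roughly 3.4 m.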

One of the digital qualities that Hockney chose to embrace is the playback function of the Brushes app. Each drawing stroke is recorded as a separate layer of information, so the iPad is able to replay every layer individually, revealing an animation of time and process. With traditional substrates, this replay of time would be impossible. The viewer, and the artist himself, are able to “travel back in time,” so to speak.
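Here is a toy sketch of the idea (Brushes’ own file format is not described in this post): if every stroke is stored as an ordered record, the drawing becomes a log of its own making, and replaying the log reconstructs the process.

```python
# Sketch: the drawing is the log of its own making.
journal = []

def record_stroke(points, color, width):
    journal.append({"points": points, "color": color, "width": width})

def replay():
    for i, stroke in enumerate(journal, 1):
        # A real app would re-render each stroke with a short delay;
        # here we just narrate the reconstruction.
        print(f"stroke {i}: {stroke['color']}, {len(stroke['points'])} points")

record_stroke([(0, 0), (5, 4)], "ultramarine", 3)
record_stroke([(5, 4), (9, 2), (12, 7)], "cadmium yellow", 6)
replay()
```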

While my experience with the tablet recreates a remarkable mimicry of the visual possibilities achievable with physical media, what is conspicuously absent is the friction of contact between actual objects. Hockney evidently noticed this as well, as he said in an interview: “You miss the resistance of paper a little, but you can get a marvellous flow. So much variety is possible. You can’t overwork this, because it’s not a real surface. In watercolour, for instance, about three layers are the maximum. Beyond that it starts to get muddy. Here you can put anything on anything. You can put a bright, bright blue on top of an intense yellow.”

In response to the present limitations of recreating the physical world, companies such as Fujitsu have developed a prototype “haptic sensory tablet” that can convey contrasting textures such as slipperiness or roughness. While simulated texture had previously been achieved with existing technology by generating static electricity, Fujitsu’s haptic sensory technology uses ultrasonic vibrations to vary the friction between the touchscreen display and the user’s fingertip.

People from the village come up and tease me: “We hear you’ve started drawing on your telephone.” And I tell them, “Well, no, actually, it’s just that occasionally I speak on my sketch pad.”—David Hockney

(Image: how to enlarge a picture using a homemade drawing grid or viewfinder)