Computing for Hands?

I think the perspective that computers are tools “for the mind and not the hands” is a valid one. We create tangible user interfaces, at least partially, to bring the experience of using computing power closer to that of using our bodies in the physical world. But interfaces are just that: extensions, add-ons to the computer that create input and output experiences. The computing part does not happen with our hands or bodies; it happens on a piece of hardware with mechanisms too small to see or imagine at anything but an abstract level. Even an abacus, which performs much simpler calculations using hand power, is governed not by the constraints and affordances of hands but by abstractions (beads representing numbers, columns representing units) organized and maintained by culturally transmitted practices.

And I don’t think the departure from the abilities of hands is by any means a damning one. If we relied only on hands for calculations, we might hardly be able to count past 10 (or 20 if you want to include toes). As soon as we start working with abstractions and representations, we leave the world of what hands alone are capable of. Hands are incredible tools, both for sensing our world and effecting changes in it. However, we do not function as disembodied hands (think Thing from the Addams Family), and it would be limiting, perhaps even impossible, to design and create tools exclusively for hands.

All that said, interfaces have come a long way in allowing more naturalistic use of hands when working with computers. Touch screens are one example: instead of training our hands to a somewhat arbitrary input device (the keyboard), we can point, select, and drag our way around a tablet or phone. Further toward the augmented reality side of the spectrum, systems that leverage visual indicators (QR codes and other fiducial markers) or adjacency sensors (blocks that “know” when they’re stacked or connected) allow us to manipulate physical objects naturalistically while the computer captures those interactions and makes sense of them. In the case of touch screens, we still have to learn which hand movements perform which functions (I think about my mom first using an iPhone). Visual indicators and physical sensors seem to have less of a learning curve, elevating intuitive interactions and backgrounding the computer (per our other reading).
