Globes and Maps

One tangible user interface not discussed in the paper is the map, or the globe; the two are related but differ in interaction. Both qualify as Tokens in the taxonomy, in that they physically resemble, to a degree, the information they represent. Globes arguably represent the Earth more realistically than maps do, since maps are flat (even when curvature is drawn to suggest roundness); still, people can impose their own information on these TUIs in different ways to more easily extract different data. For example, a cartographer may have an easier time visualizing trajectories on a map, but may have a better sense of the time it would take to travel that trajectory on a globe, given how it spins on its axis and can represent wind channels.

Digital maps, such as Google Maps, have taken these physical representations of the Earth to a different level. They relate neither purely to a globe nor purely to a map, but sit somewhere in between, due to the skewed perspective view that has recently been added. And while a user may interact with this interface with a mouse (or a finger) to drag and zoom, or even to trace a trajectory, I’m unsure how to categorize the Street View mode of Google Maps. At that point, I wonder whether the TUI is no longer a Token but perhaps an Object as Reconfigurable Tool.

Fishkin’s Taxonomy and the Rain Room

One UI that I wondered about (and struggled to characterize) is the Rain Room. The Rain Room is an embodied, immersive art exhibit that lets a visitor enter a room and be surrounded by falling water without getting wet. The room responds to the presence of humans and adjusts the rainfall accordingly, so that each person has a small envelope within which they can stand and not be caught “in the rain.”

[Image: Rain Room]
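I don’t know how the exhibit is actually implemented, but as a rough mental model, I imagine a sensing-and-actuation loop like the sketch below: visitors are tracked (say, by overhead depth cameras), and the ceiling valves directly above each visitor are closed. The grid size, radius, and function names are entirely my own illustrative assumptions, not the installation’s real system.

```python
# Hypothetical sketch of a Rain Room-style control loop (not the real system).
# Assumes a tracking system reports visitor floor positions in metres, and the
# ceiling is a grid of individually controllable water valves.

GRID_SIZE = 20      # 20 x 20 grid of ceiling valves
CELL_SIZE = 0.5     # metres of floor covered by each valve cell
DRY_RADIUS = 1.5    # radius of the dry "envelope" kept around each visitor

def valve_states(visitor_positions):
    """Return a GRID_SIZE x GRID_SIZE grid of booleans: True = valve open (raining)."""
    grid = [[True] * GRID_SIZE for _ in range(GRID_SIZE)]
    for row in range(GRID_SIZE):
        for col in range(GRID_SIZE):
            # Centre of this valve cell in floor coordinates (metres).
            x, y = (col + 0.5) * CELL_SIZE, (row + 0.5) * CELL_SIZE
            for vx, vy in visitor_positions:
                if (x - vx) ** 2 + (y - vy) ** 2 <= DRY_RADIUS ** 2:
                    grid[row][col] = False  # close the valves above each visitor
                    break
    return grid

# One tick of the loop; in the real exhibit the positions would come from the sensors.
states = valve_states([(3.0, 4.2), (7.5, 6.0)])
```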

When reading Fishkin, the Rain Room initially seemed like an example of calm computing. My main rationale for classifying it this way was that the human participants don’t use their hands to control the rain, something Fishkin emphasizes as one of the requirements of a TUI rather than calm computing. However, there are many TUIs that do not rely on the hands for input (someone else’s example of Dance Dance Revolution comes to mind, where input comes mainly from participants’ feet). Classification of the Rain Room therefore seemed somewhat problematic: the participants do control the rain, in the sense that their physical motion is the input the computer system detects and uses to alter its output. However, they do not control the rain with their hands, which seems to be a requirement. Yet all of Fishkin’s examples of calm computing revolve around systems that do not take any kind of direct input from humans, and the Rain Room most definitely does.

If it were an example of calm computing, I would posit that it fits into the “nearby” category of embodiment (for which Fishkin couldn’t find an example), and perhaps the verb level of metaphor (after all, moving to avoid the rain in the Rain Room is like avoiding the rain in real life). Yet in some ways it seems like “full” metaphor to me, because there is no analogy left at all: to avoid the rain in the Rain Room is to avoid the rain. You are physically being acted upon, experiencing the computer’s output, in the same manner as in real life.

However, in the end this doesn’t seem to fit clearly into any of the taxonomic categories. I would suggest modifying the taxonomy to include a broader definition of input than just hand manipulation, so that the Rain Room could be classified as a TUI rather than calm computing. As some of the other students mentioned, it might be helpful for us to discuss more deeply where to draw the line between what is and isn’t a TUI, especially as our computing devices rely more and more on physical manipulation to accomplish our goals.

Reflection on Fishkin’s Taxonomy

I think Fishkin’s taxonomy is helpful for understanding the UIs mentioned in the paper in terms of embodiment and metaphor. However, many other types of UIs are left out of the taxonomy, and it is not very helpful for understanding them.

For example, the embodiment axis doesn’t consider the distance between the human body and the input device at all. A TUI doesn’t need to take input literally through the hands; Siri or the Amazon Echo, for instance, could be part of a TUI’s input system.

Another issue with Fishkin’s taxonomy is the metaphor axis, which only considers shape and motion. I think the metaphor concept could be pushed much further. For example, the metaphor axis could also include all the human senses other than sight, such as taste, touch, smell, and sound.

Moreover, I would suggest the metaphor axis could also include personality, to help people understand more robot-like TUI systems, such as Baymax in the movie Big Hero 6.

Taxing Taxonomy

I’m not sure how useful I found Fishkin’s taxonomy. I found both axes only nominally fitting into some kind of spectrum: “Full”, “Nearby”, “Environmental”, “Distant”, and “None”, “Noun”, “Verb”, “Noun and Verb”, “Full”. Why does Verb trump Noun? In addition, the swapping of the object between verb and noun suggests that an axis may not be the best organization at all. I found the taxonomy took some time to understand but was not particularly explanatory.

Can interfaces really be broken into neat categories dividing full, nearby, environmental, and distant (such as AR, VR, a driving dashboard, etc.)? What about interfaces that combine these affordances, or purposefully seek to exploit them in concert? As with any taxonomy, there is an attempt to organize items into some tidy arrangement, sometimes at the cost of obscuring or glossing over their differences. Is this more appropriately a taxonomy for describing specific features of an interface? Was a two-axis taxonomy the most appropriate way to express the insights in his piece?

Two-axis taxonomies are typically useful when they offer strong intuition for how different features, designs, or types may function better in a range of environments. The need to unpack the categories somewhat, especially along the metaphor axis, limited its usefulness to me. Perhaps it is the context-dependence of interfaces (rather than Fishkin’s taxonomy itself) that limited its usefulness for me. “Full” embodiment and metaphor may be extremely useful in some contexts (perhaps for a tangible toy for early education, or when designing for young, tactile learners), but not in others, and which end of each axis matters most swaps depending on the context.

I would have appreciated a few more concrete examples of how, in different contexts, different ends of the axes might be more compelling: in effect, a mapping of axes to real-world contexts, such as technical familiarity or the goal of the interface (educational vs. a production or analysis tool). Was this simply a way to organize existing interfaces based on their use and need, or a way to position and explore designs on both spectrums (with potential to modify designs based on the theory), or some combination of both? What would be interesting for a taxonomy would be for the designer to be able to explore moving further toward the ends of either axis, such as by adding different outputs, moving outputs, or changing the object design to fulfill both noun and verb metaphor. While there was some potential here, I feel it was incompletely worked out, though perhaps that was a goal for later work.

TUI not in Taxonomy

Fishkin’s taxonomy does not capture TUIs where the input is remote from the output, for example, the Materiable TUI that we saw in class. In this TUI, the user gives input to one set of square pins and the output is reflected on another set of pins in a remote location. Fishkin’s embodiment axis includes ‘distant’, but Materiable’s input/output doesn’t fit this category, because the user providing the input isn’t shifting their gaze to output at some distant location; rather, a second user is receiving the output in an out-of-sight, remote location. I would add ‘remote’ to the embodiment scale in the taxonomy.
Additionally, Fishkin claims that TUIs at higher levels of embodiment and metaphor are ‘more tangible’. Yet a TUI can be at multiple levels on the embodiment scale at once (e.g., ‘environmental’ and ‘full’), indicating that the levels are not hierarchical. This undermines the logic of treating the two axes as scales. I agree that a TUI can be at multiple of these levels, so I wouldn’t change that in the taxonomy, but I would instead talk about them simply as categories, with a TUI able to fall into multiple categories along one axis.

Fishkin and Wearable Devices

Fishkin’s taxonomy provides a framework for thinking about different TUIs. The paper was published in 2004, more than ten years ago; some examples are a bit outdated, while many are still cutting edge today. I kept thinking about wearable devices while reading the paper. Wearables such as the Fitbit and the Apple Watch have become ubiquitous and affordable in the past five years. As the name Apple Watch indicates, most wearable devices are designed like watches that people wear on the wrist. According to Fishkin’s definition, they have full embodiment, as the output device is the input device. The level of metaphor is also full: there is a metaphor of noun, since the physical shape and look of these devices is analogous to that of traditional watches, and a metaphor of verb, since lifting your wrist in the familiar gesture of checking a watch automatically activates the screen and displays your biological stats alongside the time. Just as people are used to “watching” the time, they can now “watch” their biological time: heart rate, sleep, steps, and so on.
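Apple hasn’t published how the Watch actually detects this gesture, so the sketch below is only a naive illustration of the idea under my own assumptions (a simple threshold on wrist tilt over a short time window, with made-up parameter names):

```python
# Naive, hypothetical raise-to-wake sketch (not Apple's actual algorithm).
# Assumes a short buffer of (tilt_degrees, time_seconds) samples from the watch's
# accelerometer, where ~0 degrees = arm hanging down and ~90 degrees = dial facing the eyes.

WAKE_ANGLE = 70.0     # dial roughly facing the wearer
REST_ANGLE = 20.0     # arm roughly hanging down
MAX_RAISE_TIME = 1.0  # the raise gesture should complete within about a second

def should_wake(samples):
    """samples: list of (tilt_deg, t_sec), oldest first. True if a quick wrist raise is seen."""
    for start_tilt, start_t in samples:
        for end_tilt, end_t in samples:
            quick = 0 < (end_t - start_t) <= MAX_RAISE_TIME
            if quick and start_tilt < REST_ANGLE and end_tilt >= WAKE_ANGLE:
                return True  # arm went from hanging to dial-up quickly: light the screen
    return False

print(should_wake([(5.0, 0.0), (40.0, 0.3), (85.0, 0.6)]))  # True: quick raise
print(should_wake([(5.0, 0.0), (10.0, 0.5), (12.0, 1.0)]))  # False: wrist never comes up
```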

Fishkin wrote at the end of the article that “the trend has been to increase the levels of embodiment and metaphor…as that occupied by the intersection of the Holmquist ‘tokens’ and the Holmquist ‘containers’.” The popularity of wearable devices today bears out this projection. The more a TUI artifact resembles an ordinary object in appearance and motion, perhaps the better its user’s experience can be. Minimizing cognitive overhead by connecting new learning to what people already know is imperative, especially in industrial design.

Human Sensing Spectrum

I was on board with Fishkin’s embodiment and metaphor axes as two spectrums for categorizing TUIs. But when I came across the example of a greeting card as a TUI, I felt something was lacking in the model. Since TUIs are by definition meant to be handled physically by humans, wouldn’t it make sense to somehow include human perception in the model? I am suggesting a third spectrum, one that measures how aware humans are that the TUI they are using employs some form of computation to produce its output. This is, in a way, a measure of the effect the embodiment and metaphor produce in the users.

I had never given any thought to audio greeting cards. The written message being like an audio message…I never connected the dots there. It’s just a pleasant singing card to me. On the other hand, I am very aware of how programmed the clay sculpting mechanism is. With this third spectrum, we could assess how well or poorly a TUI has managed to blend into our lives: whether we are fooled into thinking it’s like a mechanical door knob, or can plainly see that between our input and output lies a world of code.

Going beyond the tangible

The first ‘UI’ I thought of after reading Fishkin’s article was an interactive art installation. I visited the PACE gallery in Menlo Park earlier this summer and saw a beautiful exhibit titled “Flowers and People – Dark”. The computer powering the exhibit continually renders blossoming flowers in real time. What makes this work a TUI is that the interaction between the visitor and the installation drives the animation: as a visitor approaches the wall, flowers begin to blossom in greater numbers in front of them. As a result, the animation is always unique.
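The installation’s real code isn’t public, so purely as an illustration of the behaviour described above (with parameter names and values that are my own guesses), the proximity-driven blossoming might be modelled roughly like this:

```python
import random

# Hypothetical sketch of proximity-driven blossoming (not the installation's actual code).
# Assumes a sensor reports each visitor's position along the wall (x, metres) and their
# distance from the wall (metres).

MAX_DISTANCE = 5.0  # beyond this distance a visitor has no effect
BASE_RATE = 0.5     # blossoms spawned per second when no one is nearby
BOOST = 10.0        # extra blossoms per second contributed by a visitor at the wall

def spawn_rate(visitors):
    """visitors: list of (x_m, distance_m). Returns blossoms per second."""
    rate = BASE_RATE
    for _, dist in visitors:
        closeness = max(0.0, 1.0 - dist / MAX_DISTANCE)  # 0 far away, 1 at the wall
        rate += BOOST * closeness                        # closer visitors drive more blossoms
    return rate

def spawn_positions(visitors, dt, wall_width=20.0):
    """Positions (x, metres) of new blossoms for one frame lasting dt seconds."""
    count = int(spawn_rate(visitors) * dt + random.random())  # stochastic rounding
    positions = []
    for _ in range(count):
        if visitors:
            x, _ = random.choice(visitors)                   # cluster blossoms near a visitor
            positions.append(x + random.gauss(0, 0.5))
        else:
            positions.append(random.uniform(0, wall_width))  # ambient blossoms anywhere
    return positions

print(spawn_positions([(4.0, 0.5), (12.0, 3.0)], dt=1 / 30))
```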

This is an atypical TUI in that it is not tangible. Yet according to Fishkin, it would be considered an environmental form of embodiment. It was only after reading his four levels of the embodiment characteristic that I realized the installation was in fact a TUI, which helped me rethink my entire interaction with it.

His explanation of metaphor, where he asks “is the system effect of a user action analogous to the real-world effect of similar actions” (p. 349), also added to the fantasy of what I experienced in the exhibit. Approaching a flower and having it blossom is something fantastical, and thinking of it in terms of metaphor further enhanced the work. The artist has pushed beyond what is real and expected and crossed over into whimsy and fairy tale. By considering how the work fits into Fishkin’s taxonomy and recognizing its elements of metaphor and embodiment, I was able to better appreciate it and the artist.

[Image: “Flowers and People – Dark” at the PACE gallery]

Virtual/Augmented Reality – Does Fishkin’s taxonomy still hold up?

Going through the numerous examples discussed in the readings and how the taxonomy applies to them, I could not help but think about upcoming virtual/augmented reality interfaces and devices. For example, looking at the Oculus Rift, HTC Vive, and other similar devices, and the many interactions they support through numerous programs, it seems difficult to identify a strong place for them within the 2D taxonomy.

Given how one interacts within an augmented/virtual reality setting, both embodiment and metaphor become a bit problematic. Consider a tangible interface that mimics clay sculpting: Illuminating Clay. Illuminating Clay is an example of full embodiment and full metaphor, because the user needs to make no analogy when looking at the interface; it looks and acts like the real thing. Now what happens if we were to create a similar augmented reality application for clay sculpting? Where the Oculus Rift would sit on the 2D taxonomy could be completely different. We can no longer say the output is the same as the input device, since there is no tangible element associated with it. The taxonomy could still work if we added a tangible aspect to the augmented reality application, but the output would still remain in the reality created by the application.

This in no way means that the taxonomy is wrong. The taxonomy provides a great way for designers to think about their TUIs and helps them make design decisions that support the interface. But it might be limited when it comes to virtual/augmented reality interfaces: it becomes difficult to confine a single device or interface to one section of the embodiment vs. metaphor graph.

A bit confused…

I must admit that my thoughts while reading Fishkin’s taxonomy align with those of Leah. I’m trying to understand the nature of the input event, the sophistication of the processing of the input signal, and the complexity of the output event (probably this comes to mind because I’m continuously thinking about the midterm project: what would qualify as a TUI for this class? Could it be as simple as a greeting card, or should it be as complex as a Topobo?). One of the examples provided is the “Platypus Amoeba,” which responds to touching and petting. What would happen if we changed this input to simply pressing an on/off switch? The embodiment would not change; it would still be full because the output device is the input device, but maybe the metaphor would change from noun and verb to none? Then, is this second dimension capturing, in some way, the complexity of the system?

Additionally (and as Leah mentions), if the input to the system is merely a user’s gestures (sensed, for example, through a Kinect) that control the output on a screen or projection, would this system be considered a TUI or simply a UI? The reading mentions the Graspable Display, in which the “push away” gesture maps to “reduce text point size”; would this be the equivalent of what I am asking?

Finally, I am just wondering: what is a user interface? Are we assuming that a computational system must be part of it? If that is the case, how would we classify a door knob, for example?