Develop the taxonomy further to describe and relate tangible UI experiences

As I read through the paper, the UI that came to mind was the Wii U. Since I have not personally interacted with many tangible UIs, it’s hard to recall examples to relate to when I try to apply the framework provided in the paper. Still, I really like the simple, two-axis taxonomy the paper provides to help us classify, understand, and observe the field of tangible UIs.

The Wii U offers interactive gaming experiences in which the controller can act as multiple objects when paired with sensors that detect the user’s movement and the controller’s position. The controller can act as a tennis racket, a table tennis paddle, or a baseball bat. In terms of metaphor, the controller X can become multiple Xs in different settings, and it can also perform actions that simulate X-ing in the real world, so the Wii U offers both noun and verb metaphors. In terms of embodiment, the Wii U qualifies as distant embodiment, since it is similar to the example given in the paper where a controller interacts with a television.

The framework is a useful starting point for analyzing the interaction provided by tangible UIs; however, it doesn’t cover the nature of the interaction in depth, such as the intensity of input or the different types of output responses. It focuses more on the relationship between the user and the UI and lacks analysis of the experience itself. If I were to modify the taxonomy, I would work on categorizing and relating the different types of input and output so that they fit into the metaphor/embodiment framework.
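To make the modification I’m proposing concrete, here is a minimal sketch (in Python) of the two axes plus the extra input/output dimensions. The Embodiment and Metaphor levels follow Fishkin’s taxonomy; the InputIntensity and OutputModality pieces are my own hypothetical additions, not part of his framework:

```python
from dataclasses import dataclass
from enum import Enum


class Embodiment(Enum):
    """Fishkin's embodiment axis: how closely output is tied to input."""
    FULL = "full"                    # the output device is the input device
    NEARBY = "nearby"                # output appears near the input object
    ENVIRONMENTAL = "environmental"  # output is "around" the user
    DISTANT = "distant"              # output is "over there", e.g. on a TV


class Metaphor(Enum):
    """Fishkin's metaphor axis."""
    NONE = "none"
    NOUN = "noun"                    # the object looks like what it stands for
    VERB = "verb"                    # the action mimics a real-world act
    NOUN_AND_VERB = "noun and verb"
    FULL = "full"                    # the virtual system is the physical one


class InputIntensity(Enum):
    """Hypothetical extension (mine, not Fishkin's)."""
    SUBTLE = "subtle"                # tilting, squeezing
    MODERATE = "moderate"            # swinging a controller
    VIGOROUS = "vigorous"            # full-body motion


class OutputModality(Enum):
    """Hypothetical extension (mine, not Fishkin's)."""
    VISUAL = "visual"
    AUDITORY = "auditory"
    HAPTIC = "haptic"


@dataclass
class TUIClassification:
    name: str
    embodiment: Embodiment
    metaphor: Metaphor
    input_intensity: InputIntensity  # extension field
    output_modalities: tuple         # extension field


# The Wii U example from above: distant embodiment, noun + verb metaphor.
wii_u = TUIClassification(
    name="Wii U controller",
    embodiment=Embodiment.DISTANT,
    metaphor=Metaphor.NOUN_AND_VERB,
    input_intensity=InputIntensity.MODERATE,
    output_modalities=(OutputModality.VISUAL, OutputModality.AUDITORY),
)
```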

RR03

While reading Fishkin (2004), I was most struck by examples like virtual reality and Tilt Brush. I really liked Mithen’s argument that “the most powerful [metaphors] are those which cross domain boundaries, such as by associating a living entity with something that is inert or an idea with something that is tangible,” and I feel like aspects of Tilt Brush cross into that multi-modal metaphor spectrum, like materials in the VR space taking on the weight of similar textiles in real life.

Fishkin’s taxonomy is also helpful in understanding these UIs because it creates a framework for discussing different types of embodiment. I feel like Tilt Brush falls into both the Environmental and Full embodiment categories. I can also see aspects of certain environments falling into one category, like Distant/None (a keyboard interface), but changing as a game evolves or a player levels up (to, say, Environmental/Verb), so it is difficult to classify a TUI as a single category for the duration of the interaction.

Uncertainty about the definition of TUI, particularly about input events

This line of thought started from the response prompt, I promise.

For the first reading response, I mentioned Swype typing available on touch screens. Seeing Fishkin’s categorization of traditional keyboards as distant (along the embodiment axis) and none (along the metaphor axis), I thought that typing on a touch screen might change the embodiment level to full (the input device is the output device) or maybe nearby (if we consider the display part of the touchscreen to be distinct from where the keyboard appears).

But then I started to wonder if a touch screen is even a TUI according to their definition, my main point of uncertainty being the nature of the input event. Fishkin defines an input event as “a physical manipulation performed by a user with their hands on some ‘everyday physical object,’ such as tilting, shaking, squeezing, pushing, or, most often, moving.” Does moving a finger on a touch screen count as a physical manipulation if no object is moved around? Would writing with a stylus be considered more of an input event because, in that case, the user is moving an object? What about gesture-capture systems where there is no “everyday physical object” other than our own bodies? Maybe that’s the center of my question: do our own bodies count as “everyday physical objects” around which TUIs can develop (without a non-corporeal intermediary)?

RR03 Dance Dance Revolution

As Fishkin introduced the taxonomy of TUIs, Dance Dance Revolution was a UI that came to mind. As input, players stomp their feet on pressure-sensitive mats to the rhythm of the accompanying music. As output, the computer calls out commentary and generates a color bar that visualizes the player’s performance.

I personally have always found the embodiment a bit lacking, because dance is reduced to patterns of four directional arrows. Some recent versions have added more affordances, such as handheld remotes that incorporate hand movements into the choreography. The metaphor of DDR is one of verb: the mat translates synchronized foot and hand motions into a graded dance.

I support the taxonomy. When applied to something like DDR, it creates a neat conceptual bin for the UI to fall into, demarcated by concepts such as “metaphor” and “nearby”. It’s impressive that he casts this wide net over just about every human-computer interaction (putting Urp and greeting cards under one umbrella) and still manages to define a vocabulary for them all. Holmquist’s description of tokens, containers, and tools is also helpful, because it further refines what it means to be a TUI artifact: is the input generic, or is it more contextual? (DDR, I’ve determined, is a token.) I also appreciated how he clarified the distinction between phicons and tokens and containers.
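In the same spirit as the sketch in the first response above, here is a tiny Python encoding of Holmquist’s three artifact roles; the DDR classification at the end is my own reading, not something either paper states:

```python
from enum import Enum


class ArtifactRole(Enum):
    """Holmquist et al.'s refinement of TUI artifacts."""
    CONTAINER = "container"  # generic object, can hold any digital information
    TOKEN = "token"          # physically resembles the information it represents
    TOOL = "tool"            # used to actively manipulate digital information


# My interpretation from above: the mat's arrows resemble
# the on-screen arrows they stand for, so the mat is a token.
ddr_mat = ArtifactRole.TOKEN
```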

RR03

Hello! When will the prompt for RR03 be updated? It still says TBD, I believe.