Going through the numerous examples discussed in the readings and how the taxonomy applies to them, I could not help but think about upcoming virtual/augmented reality interfaces and devices. For example, looking at the Oculus Rift, the HTC Vive, and other similar devices, and the many interactions they support across numerous programs, it seems difficult to identify a strong place for them within the 2D taxonomy.
Given how one interacts within an augmented/virtual reality setting, embodiment and metaphor become problematic to pin down. Let’s look at a tangible interface that mimics clay sculpting – Illuminating Clay. Illuminating Clay is an example of full embodiment and full metaphor: the user needs to make no analogy when looking at the interface, because it looks and acts like the very thing it represents. Now what happens if we were to create a similar augmented reality application for clay sculpting? The way the Oculus Rift would position itself on the 2D taxonomy could be completely different. We can no longer say that our output is the same as our input device, since there is no tangible element associated with it. The taxonomy could still work if we added a tangible aspect to the augmented reality application, but the output would still remain in the reality created by the application.
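To make the comparison concrete, here is a minimal sketch in Python of how the two axes of the taxonomy might be modeled. The level names follow Fishkin's taxonomy; the placement of the hypothetical VR clay app is my own illustrative guess, which is exactly the ambiguity described above:

```python
from enum import Enum
from dataclasses import dataclass

class Embodiment(Enum):
    # How closely the output is tied to the input device (Fishkin's levels)
    FULL = 4           # the output device is the input device
    NEARBY = 3         # output happens near the input object
    ENVIRONMENTAL = 2  # output surrounds the user
    DISTANT = 1        # output is "over there", e.g. on another screen

class Metaphor(Enum):
    # How much the interface relies on analogy to the real world
    NONE = 0
    NOUN = 1           # looks like the real thing
    VERB = 2           # acts like the real thing
    NOUN_AND_VERB = 3  # looks and acts like the real thing
    FULL = 4           # no analogy needed: it is the thing

@dataclass
class Interface:
    name: str
    embodiment: Embodiment
    metaphor: Metaphor

# Illuminating Clay: the clay is both input and output, and is
# literally the material being sculpted -- full / full.
illuminating_clay = Interface("Illuminating Clay",
                              Embodiment.FULL, Metaphor.FULL)

# A hypothetical VR clay-sculpting app on the Oculus Rift: the output
# lives inside the headset's virtual world rather than on the input
# device, so the embodiment placement is no longer obvious.
vr_clay = Interface("VR clay sculpting (hypothetical)",
                    Embodiment.DISTANT, Metaphor.NOUN_AND_VERB)
```

The difficulty, as noted above, is that `vr_clay` could arguably sit at several different embodiment levels depending on whether we count the headset, the controllers, or an added tangible prop as the input device.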
This in no way means that the taxonomy is wrong. The taxonomy provides a great way for designers to think about their TUIs and helps them make design decisions that support the interface. But it might be limited when it comes to virtual/augmented reality interfaces: what we see is that it becomes difficult to restrict a single device/interface to one section of the embodiment vs. metaphor graph.