I’m not sure how useful I found Fishkin’s taxonomy. Both axes struck me as only nominally forming a spectrum – “Full”, “Nearby”, “Environmental”, “Distant” on one, and “None”, “Noun”, “Verb”, “Noun and Verb”, “Full” on the other. Why does Verb trump Noun? Moreover, the swapping of the object between verb and noun suggests that the best organization may not be an axis at all. The taxonomy took some time to understand, yet I did not find it particularly explanatory.
Can interfaces really be broken into neat categories of full, nearby, environmental, and distant embodiment (such as AR, VR, a driving dashboard, etc.)? What about interfaces that combine these affordances, or purposefully seek to exploit them in concert? As with any taxonomy, the attempt to organize items into tidy categories sometimes comes at the cost of obscuring or glossing over their differences. Might this taxonomy be better suited to describing specific features of an interface rather than whole interfaces? And was a two-axis taxonomy the most appropriate way to express the insights in his piece?
Two-axis taxonomies are typically useful when they offer strong intuition for how different features, designs, or types function across a range of environments. The need to unpack the categories, especially along the metaphor axis, limited the taxonomy’s usefulness to me. Perhaps it is the context-dependence of interfaces, rather than Fishkin’s taxonomy itself, that is the real limitation: “Full” embodiment and metaphor may be extremely useful in some contexts (say, a tangible toy for early education, or a design aimed at young, tactile learners) but not in others, and the value of each axis shifts with the context.
I would have appreciated a few more concrete examples of how, in different contexts, different ends of each axis might be more compelling – in a sense, a mapping of the axes to real-world contexts such as technical familiarity or the goal of the interface (educational versus a production or analysis tool). Was this simply a way to organize existing interfaces by their use and need, a way to position and explore new designs along both spectrums (with the potential to modify them based on the theory), or some combination of both? It would be interesting if the taxonomy let a designer deliberately move toward either end of an axis – for example, by adding outputs, relocating outputs, or changing the object design to fulfill both noun and verb embodiment. While there was some potential here, the idea felt incompletely worked out, though perhaps that was a goal for later work.