McCullough argued that “the computer is inherently a tool for the mind and not for the hands.” Twenty years later, we can extend his argument by saying that a natural interface can be as close to our minds as our hands are.
Take Siri as an example. We don’t necessarily need our hands at any point in our interaction with Siri, because the interaction is based on another part of our body: the voice, not hand-based input. It is designed beyond the gesture-based conceptual constraints of the touch screen, by strengthening a long-existing interaction paradigm – voice input based on natural language. We don’t need to translate our mental intentions into hand movements to interact with Siri’s UI; we can simply think out loud, read or listen to what Siri outputs, and respond in the same way.
Beyond Siri, an even more extreme example in the same spirit is the mind UI, where neural signals are literally translated into electronic signals that serve as inputs to an interaction system.
Now imagine ten years from now: what if Siri could talk to you as naturally as your classmates could in TUI? Then we could push McCullough’s argument even further, by arguing that the computer is inherently a mimic of the human mind, built so that it can think logically and communicate with its users – that is, with us, human beings. In the end, there is no boundary to the great potential of technology; we simply need to believe in it.