
Tangible User Interface C262 – Mid-semester Project Proposal

Group: Daniel, Safei, Michelle

Date: 2016-09-21


Project Title

Melodic Memento


Observation and Goal

People have always had intimate relationships with music. Music has a powerful and long-lasting impact on listeners during the experience of interaction, but that impact dwindles after the experience ends. Our group believes that every musical interaction is unique, even when the song being played is the same. We want to create a memento that helps the user relive the experience and share it with his or her friends. We are also interested in the various ways users respond to a musical experience, be it verbal description, physical movement, or the emotional responses shown on their faces. We think all of these responses and reactions are what make the experience unique. We are also interested in exploring how we use words for physical texture to describe acoustic sound, and similar synesthetic experiences.


Input              | Output
User's description | Geometric shape (generative design)
Emotion            | Lights
Physical movement  | Color
Brainwave          | A digital diary through data visualization
Heat, force        | Music (chords, timbre, rhythm, genre)
Music note         |
Music tone         |


Possible Implementation

We will create a space where one person or a group of people can have an interactive experience with music, and:

  • After the experience, they will have something to remember it by in the form of a token that captures their experience, e.g., a 3D-printed art piece representing their own memory of the music.
  • Or, after the experience, they will receive a digital art piece that contains their collective emotional roadmap along with the music itself, which they can share with other people through social networks.
  • Or, after the experience, they will have an art piece and will be reminded to listen to the same music every month, every year, or every 10 years; the resulting series of tangible art pieces could remind them of how their responses to the same music evolve over time.
  • Or, during the experience, our interactive system could create responsive real-time feedback from emotional data and/or heartbeat as another layer of atmosphere along with the music. Example: https://www.youtube.com/watch?v=k8FraCD6VAg (from 2:46, an ocean of LED wristbands worn by the audience changes according to the live music).
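The real-time feedback idea in the last bullet could be prototyped as a simple mapping from a biometric signal to a light color. The sketch below is purely illustrative and assumes a hypothetical heart-rate sensor reading in BPM; the function name, range bounds, and blue-to-red gradient are our own assumptions, not part of any existing system.

```python
def heartbeat_to_rgb(bpm, low=50, high=160):
    """Map a heart-rate reading (BPM) onto a calm-blue to excited-red gradient.

    `low` and `high` are assumed resting/peak bounds; readings outside
    that range are clamped before normalising.
    """
    # Clamp the reading into the expected range, then normalise to 0..1.
    t = (min(max(bpm, low), high) - low) / (high - low)
    # Interpolate: low BPM -> blue (0, 0, 255), high BPM -> red (255, 0, 0).
    return (round(255 * t), 0, round(255 * (1 - t)))
```

In an installation, this value would be pushed to the LED controller on every sensor update, so the room's color drifts with the audience's collective arousal as the music plays.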



            | None | Noun | Verb | Noun and Verb | Full
Full        |  x   |      |      |               |
Nearby      |  x   |      |      |               |
Environment |  x   |      |      |               |
Distant     |  x   |      |      |               |


Vertical = Embodiment, Horizontal = Metaphor


Since we are not sure what our output is, every row on the embodiment axis is highlighted. On the metaphor side, since the user will be enjoying and interacting with the music naturally, we don't think there is a noun/verb metaphor at this point in time. However, since we might instruct the user to interact with the music in a certain way, there could be some interactions that mimic real-world behavior. We will update the table when the time comes.

