Behavior Incentivizer

Project :
BIP! [Behavior Incentivizer Project] (Final Name TBD)

Team :
Adam Hutz, Mudit Kakkar, Neera Grover

Objective :
The key objective of this project is to build a Tangible User Interface for children who are in the early phases of language development. It should embody a rewards system and require actions from the child to achieve the reward. We aim to design intuitive triggers that hook children into the interaction loop while also guiding behavior through cues and variable rewards to help inculcate positive habits.

Background :
It’s a challenge to communicate with kids who are in early linguistic or prelinguistic phases of language development and do not always understand the causality of everyday activities. For instance, it is difficult to explain to a 2-year-old that brushing their teeth and bathing every day are essential for maintaining hygiene. Similarly, sleep is an acquired behavior, something that children learn over time. Over the years, experts have delved deep into how children learn and advanced techniques for potty training and sleep training. Most of these techniques aim at guiding the child’s behavior through cues, routines, and rewards.
We feel that this project sits at a unique intersection of behavior change, communication (non- or early verbal), and education, all while serving a very specialized user group. It is particularly well suited to a TUI because it is aimed at kids who aren’t yet savvy enough to use complicated apps.

Proposal :
BIP is placed in the child’s sleeping area with the intent to cue a sleep routine:
1) The parent gives their child a prompt: say, to brush their teeth. Upon successful completion of the task, the adult might give the child a token.
2) The child feeds the token to the pet and the pet does a happy dance.
3) This can be repeated for as many tasks as needed and the tokens keep
collecting inside the toy.
4) The parent can remove them later and start over the next day.
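The token loop above could be sketched as a tiny state machine (all names here are hypothetical; this is a sketch of the interaction logic, not the toy’s actual firmware):

```python
class RewardPet:
    """Sketch of the BIP token loop: the child feeds tokens, the pet
    reacts, and the parent empties the toy to start over the next day."""

    def __init__(self):
        self.tokens = 0

    def feed_token(self):
        # Child drops a token in; the token is stored and the pet reacts.
        self.tokens += 1
        return self.happy_dance()

    def happy_dance(self):
        # Stand-in for the lights/sound/movement on the real toy.
        return "happy dance #{}".format(self.tokens)

    def reset(self):
        # Parent removes the collected tokens at the end of the day.
        collected, self.tokens = self.tokens, 0
        return collected
```

A parent-facing `reset()` returning the day’s count would also support tallying progress across days.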


Constraints:

Hard to follow a user-centered design process, as it is difficult to get feedback from users of this age.

Midterm Proposal : Kitchen Choreography (Nisha, Sam, Sandeep)

Smart Cutting Board

Sam Meyer

Sandeep Pal

Nisha Pathak


“Kitchen Choreography”

 

We aim to make cooking a straightforward, simple job for everyone. We will do this by revolutionizing the way people think about cutting boards. We will bring you a smart cutting board that shows you how to chop or slice vegetables, meat, and more. Not only will it show you how to cut, it will tell you whether you are doing it right! It’s always an issue to have the exact weight of ingredients for your recipe, so our cutting board will also weigh them for you. Remember the last time you cooked a dish while looking at your iPad with dirty hands, so you had to wash them every time you wanted to go to the next step? You can forget that, because we will provide a display on the cutting board showing everything from recipes to the various timers you have set for the multiple dishes you are cooking. Want to know how you have been performing over time? We will tell you that too!

This smart board consists of a hard glass panel underneath which lies an array of sensors (pressure, temperature). These sensors are also used to give feedback to the user, indicating how their chopping/cutting methods correlate to proper technique and to the preparation requirements of the recipe. For example, chopping and slicing will have different pressure patterns, and users can be given a message telling them that their chopping is more like slicing. For extra guidance on chopping technique, the screen under the knife can project the width of slices for each ingredient. All of these features will allow users to get quick feedback on how their cooking is progressing.

The board will be divided into sections where one area is the cutting board and the remaining area will digitally display relevant information. The chopping area will be able to display widths of the knife’s cuts along with angles at which the ingredients should be chopped. The idea here is to enable the user to prepare their ingredients using proper methods. This smart cutting board will also have a unique timer based around preparing multiple dishes at the same time. A user can input a list of recipes and the cutting board will keep track of preparation times for each step. The cutting board will then take this information and work backwards to schedule when the user should begin preparing certain items so they are all finished at the same time (e.g. when to start thawing meat, when to make the salad, when to set the table, etc). Notification sounds will let users know when to move on to the next step, possibly in a different recipe. In this way, the board will support more than one recipe at a time. Finally, the recipe section will also suggest alternate ingredients in the event that the user doesn’t have certain ones on hand. For example, replacing whole milk with another liquid or being able to substitute a certain type of cheese with another product.
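The backward-scheduling timer described above might work roughly as follows (a sketch only; task names and durations are made up for illustration, and times are in minutes):

```python
def schedule_start_times(tasks, finish_time):
    """Work backwards from a shared finish time.

    tasks: list of (name, duration_minutes) pairs.
    finish_time: when everything should be done, in minutes
                 (e.g. minutes past midnight).
    Returns per-task start times and the order in which the
    board should notify the user to begin each item.
    """
    starts = {name: finish_time - duration for name, duration in tasks}
    order = sorted(starts, key=starts.get)  # earliest start first
    return starts, order
```

For example, with dinner at 6 p.m. (minute 1080), a 120-minute thaw must begin at minute 960, so “thaw meat” fires first and “set table” last.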

 

Rough Product Specs

Cutting Board

  • Pressure sensor (for chopping/cutting technique, weighing food)
  • Display for recipe (multiple recipes shown at once)
  • Display for graph of force
  • Display of temperature (relevant for meat)
  • ⅓–¼ of the board displays the recipe and the above numerical data
  • Screen under cutting surface
    • Project width of knife cuts
    • Project angle of knife
  • Timer for steps in recipe:
    • Play music
    • Notifications (audio) of things to prepare and when to start preparing them
    • It also can hold multiple timers for the various items that are being cooked (each is labelled with the item’s name) – beeps when it’s done
  • Suggests alternate ingredients in case you don’t have a particular one at home

 

Midterm Project Proposal

Experience 46
Andrew Chong, Vivian Liu & Owen Hsiao

Brief Concept:

Our midterm project proposal is an immersive installation piece exploring the following characteristics:

  • Altering specific musical traits to intensify their capacity to induce chills/elicit an emotional response
  • Eliciting agency and involvement of the “audience”, who becomes an active participant in the experience
  • Minimal “cool” media that is fully engaging but open to perceptual interpretation

One version of the experience is as follows.

The participant enters a dark, enclosed room alone. Faint lights signal hidden affordances. After a lull, Andrew Bird’s Yawny at the Apocalypse begins to play.

Movement towards each general direction induces at first subtle changes in the light/music. For instance, the participant’s position would alter the relative loudness of different voices in the piece. Stepping deeper into the room would intensify the volume (and perhaps clarity) of heavy strings in the piece, with birdsong and light strings fading somewhat, so the participant has the experience of being inside an environmental performance of the music.

Participants can thus “play” with the music by altering their position and exploring hidden affordances within the room.
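One way the position-to-mix mapping could work is a simple distance-based gain per stem; a minimal sketch, where the stem anchors and the falloff curve are our own illustrative choices rather than a fixed design:

```python
import math

def stem_gains(listener, stems, falloff=2.0):
    """Map a listener's (x, y) position to a gain per audio stem.

    Each stem (e.g. heavy strings, birdsong) is anchored somewhere in
    the room; the closer the participant stands to a stem's anchor,
    the louder that stem plays. `falloff` controls how quickly gain
    drops with distance.
    """
    gains = {}
    for name, anchor in stems.items():
        d = math.dist(listener, anchor)
        gains[name] = 1.0 / (1.0 + falloff * d)
    return gains
```

Stepping deeper into the room toward the “heavy strings” anchor would then raise its gain while birdsong fades, matching the experience described above.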

The minimal conception is as above. Other planned/potential variations include:

  • Utilizing the full capabilities of music software to create other effects that tie directly (sudden shifts in “volume, timbre or harmonic pattern” [1]) or tangentially (enhancing reverberation, altering beat) to eliciting chills
  • Using biosensors (e.g., Empatica watches measuring galvanic skin response, blood volume pulse, heart rate, and heart rate variability) to measure the effects of different variations/experimental arms, as well as to serve as a potential input into the environment

 

Background:

There has been some work on the specific musical traits that tend to induce a strong emotional response in the listener. One researcher, Martin Guhn, a psychologist who has run experiments with different musical pieces, provides an analysis of these traits. These pieces:

  • “began softly and then suddenly became loud”
  • “included the abrupt entrance of a new ‘voice’, either a new instrument or harmony”
  • “they often involved an expansion of the frequencies played”
  • “Finally, all the passages contained unexpected deviations in the melody or the harmony.”

In short, music is “most likely to tingle the spine… when it includes surprises in volume, timbre and harmonic pattern.” [1] We chose Andrew Bird’s Yawny at the Apocalypse because it contains all or most of these traits, but we wanted to explore whether some kind of agency on the part of the listener, typically absent in most music and art (which tends to be one-directional), can evoke a more intense emotional experience.

 

Below is a spectrogram of Andrew Bird’s piece.

 

Some of the traits described by Guhn are visible here. One way of approaching the analysis is the unexpected resolution of dissonance, or unexpected concord between dissimilar objects, which includes what we describe as “spectral complexity.” Bird’s piece begins with birdsong at a high spectrum, before a deep, lower-frequency saturation of sound is introduced around 0:10, until a shrill pitch (rather like whale song) is introduced at the 0:42 mark. A spectral band or gap is clearly visible stretching throughout the song after the high pitch is introduced. Each of these voices is highly dissimilar, but together they produce unexpected concord.

 

Other possible pieces (with different corresponding actuation) are below. Many of these pieces display similar traits or interesting variations on Bird’s piece.

 

 

The project taps into past work in John Chuang’s biosensors course. Some of the work can be viewed here: http://musiconthebrain15.blogspot.com/

 

[1] http://www.wsj.com/articles/SB10001424052970203646004577213010291701378

Midterm Project Proposal – Leah, Elena, Ganesh

As a group, we’re primarily looking at problem spaces related to storytelling for children, children’s education, and games in general. Our objective is to create an enriching and playful environment that both adults and children can use and enjoy. After extensive brainstorming, we came up with three ideas that we are looking to explore further!

Idea 1 – Yuyay: #children #storytelling #education

A small container (per Holmquist et al.’s definition) that preserves any of your thoughts and memories to share them with someone… or to keep them for your future self. With these devices, we want to extol the value of our thoughts and memories by transitioning them from the realm of abstraction to the world of tangibility. Additionally, by seizing this materiality, we aspire to foster meaningful human connections.

Imagine your family used these devices when you were growing up as a game to foster learning and conversation. Every container had a prompt or question associated with it, blue containers were science questions (e.g., why is the sky blue?), the purple ones were questions related to the family history (e.g., How did grandma and grandpa meet?), the red ones were personal questions (e.g., What was your favorite Christmas?), etc. Every other evening after dinner, you and your whole family would bring all the pieces to the table, select one of them, and then enjoy a very pleasant conversation. Once the questions were answered, your mom would give the devices to you and your siblings, so you could record new interesting and meaningful questions.

Now imagine that 15 years have passed by… Your mom brings an old box, opens it and there they are. You listen to your voice and the questions you used to ask. Your mom shares with you her beautiful memories of those days.

Given the unrestricted nature of these devices we envision many uses for them. You could use them during a game-night with your friends similarly to how you would play truth or dare, or as icebreakers in a meeting or conference (pairing up people and asking a question related to their common interests – based on simple questions asked during the registration process), or you could send a “secret” message to your significant other, or send pieces of a single message to different family members (the full message will only reveal itself when all of them are together), teachers could use them in the classroom to capture students’ doubts, etc.

Idea 2 – Pong Tribute: #games

Using a projector and your phone to run the projector, you can project a specialized Pong interface on a wall. This Pong interface is not run by conventional arrow keys but by each player throwing a ball onto the wall. As the ball hits the wall, the paddle for that player appears at that position helping you hit the digital ball on the wall. When the ball bounces back and the player catches it, the paddle disappears encouraging the player to throw the ball again. Using this simple concept, game mechanics can be further developed.

This is an AR concept and will require a projector and a camera to capture the location of the ball.

The idea is also to make this open-source so that grokkers and geeks can come up with their own cool innovations from this basic building block. For example, you can introduce the notion of gravity in the game environment so that your paddle starts falling as soon as it is cast. You can also think of using this kind of interface not just for Pong but for solving grid-based 2D puzzles. You can even use physical manipulations, such as combining Pong with racquetball, to form an altogether new game or a training routine to practice your shot accuracy.
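The core throw-to-place mechanic could be sketched like this (a sketch of the game state only; in the real system the impact coordinates would come from the camera):

```python
class ThrownPaddlePong:
    """Sketch of the ball-throw mechanic: a physical ball hitting the
    wall spawns that player's paddle at the impact point; catching the
    ball on the rebound removes the paddle until the next throw."""

    def __init__(self):
        self.paddles = {}  # player name -> (x, y) wall position

    def ball_hit(self, player, x, y):
        # Camera reports an impact; place the paddle where the ball landed.
        self.paddles[player] = (x, y)

    def ball_caught(self, player):
        # Player caught the rebound; hide the paddle to prompt another throw.
        self.paddles.pop(player, None)
```

Game mechanics such as gravity on the paddle would layer on top of this state, decaying each paddle’s y position between throws.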

Idea 3 – Augmented Tools for Mathematics: #children #education

A ruler, protractor, compass, and perhaps other tools could be augmented as computational input devices. They could be used as a tangible interface, perhaps for LOGO or a digital geometry environment (DGE). A child could specify a distance on the ruler by pushing a sliding knob, set an angle on the protractor by rotating an arm, or set a radius and arc length on the compass. The computer could react in real time (or perhaps when the user pushes a “play” button) by moving a figure or on-screen stylus the corresponding distance, angle, or arc length. Sequences of moves could be stored on tokens on which users could draw identifying shapes or words.
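The tool inputs could drive the on-screen stylus LOGO-style; a minimal sketch, assuming each recorded move arrives as an (angle, distance) pair from the protractor arm and the ruler’s sliding knob:

```python
import math

def replay_moves(moves, start=(0.0, 0.0), heading=0.0):
    """Replay a sequence of tangible-tool inputs as stylus moves.

    Each move is (angle_degrees, distance): the angle set on the
    protractor turns the stylus, then the distance set on the ruler
    moves it forward, LOGO-style. Returns the full path of points.
    """
    x, y = start
    path = [start]
    for angle, distance in moves:
        heading += angle
        x += distance * math.cos(math.radians(heading))
        y += distance * math.sin(math.radians(heading))
        path.append((x, y))
    return path
```

A sequence of four (90°, 10) turns after the first move traces a square and returns the stylus to its start, the kind of closed figure a maze or goal-shape challenge could check for.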

Child motivations remain ill-defined. We could pose an initial challenge (navigating a maze, constructing a goal shape or scene) as a training task. How could we frame the interaction or display space to motivate further interaction and exploration?

Educational goals of these tools include:

  • Linking sometimes abstract virtual objects (distances and shapes) with the physical tools used to create those shapes in the real world
  • Promoting progressive quantification of children’s drawing/movement techniques with the goal that these movements and experiences could become resources for more formal work (as in math classes).
  • Comparing angles that are equal for example – making textbook figures talk by using gestures

Midterm Proposal – Reema and Yifei

The problem space we’re interested in exploring is occupied by the aging population, especially those in unassisted living arrangements. We aim to build something to impact and hopefully improve their quality of life. Quality of life means different things to different people; in this context it could range from enabling them to carry out their daily activities to creating a general sense of satisfaction.

Issues commonly faced by this segment of society include feelings of loneliness or isolation, difficulty in expanding the ever-shrinking social network, fading memories that make simple daily tasks (e.g. taking medication) considerably challenging, loss of purpose in life, in addition to decline in mobility.

We have not yet narrowed down the details of our solution, but we aim to design a tangible user interface in the form of a ‘game’ that can be spread out on the ground. Our target population is often uncomfortable using smartphones and tablets (sometimes lacking the dexterity to do so). So, such a game will rely on output signals such as lights and sound, which are easily understood; promote a bit of physical movement; potentially have a social aspect (e.g., older people in a local context could play the game together in physical space); and challenge players’ minds in a gentle but intellectually provoking way to keep them sharp.

We are inspired by Mahjongg and the complexity that enables older minds to stay healthy and active. Instead of standard tiles, though, we are considering using artifacts from the players’ lifetimes, e.g., ‘arrange these events in chronological order,’ the events being landmark points in time such as World War II, a marriage, etc. Perhaps the children could set the game up for their parents or grandparents. The idea is to make the game as personal as possible and make their rich past a part of their present.

Midterm Project Proposal – Daniel, Safei, Michelle

Tangible User Interface C262 – Mid-semester Project Proposal

Group: Daniel, Safei, Michelle

Date: 2016-09-21

 

Project Title

Melodic Memento

 

Observation and Goal

People have always had intimate relationships with music. Music has a powerful and long-lasting impact on listeners during the experience of interaction. However, that impact dwindles after the experience ends. Our group thinks that every music interaction is unique, even if the song being played is the same. We want to create a memento that will help the user relive the experience and share it with friends. We are also interested in the various ways users respond to a music experience, be it verbal description, physical movement, or emotional responses shown on their faces. We think all of these responses and reactions are what make the experience unique. We are interested in exploring how we use words for physical texture to describe acoustic sound, and similar synesthetic experiences.

 

Input                Output
User’s description   Geometric shape (generative design)
Emotion              Lights
Physical movement    Color
Brainwave            A digital diary through data visualization
Heat, force          Music (chords, timbre, rhythm, genre)
Music note
Music tone

 

Possible Implementation

We create a space where one or a group of people can have an interactive experience with music, and:

  • After the experience, they will have something to remember it by, in the form of a token that captures their experience, e.g., a 3D-printed art piece representing their own memory of the music.
  • Or, after the experience, they will be able to share a digital art piece which contains their collective emotional roadmap along with the music itself, and they can share it with other people through social networks.
  • Or, after the experience, they will have an art piece and be reminded to listen to the same music every month, every year, or every 10 years, and the series of tangible art pieces could remind them of how their responses to the same music evolve over time.
  • Or, during the experience, our interactive system could create responsive real-time feedback from emotional data and/or heartbeat as another layer of atmosphere along with the music. Example: https://www.youtube.com/watch?v=k8FraCD6VAg (from 2:46, how an ocean of LED lights worn by the audience changes according to the live music)
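As one illustration of the real-time feedback layer, a heart-rate reading could be mapped to an LED color; a sketch only, where the resting/peak values and the blue-to-red ramp are our own illustrative assumptions:

```python
def pulse_to_rgb(bpm, rest=60, peak=140):
    """Map a heart-rate reading to an LED color for live feedback:
    calm blue at resting rate, shifting toward red as the audience's
    pulse rises. Readings outside [rest, peak] are clamped."""
    t = max(0.0, min(1.0, (bpm - rest) / (peak - rest)))
    return (int(255 * t), 0, int(255 * (1 - t)))
```

The same mapping could feed the generative visual memento, coloring each moment of the diary by how the listener’s body responded.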

 

Embodiment/Metaphor

              None   Noun   Verb   Noun and Verb   Full
Full           x
Nearby         x
Environment    x
Distant        x

 

Vertical = Embodiment, Horizontal = Metaphor

 

Since we are not sure what our output is, every row on the embodiment axis is highlighted. On the metaphor side, since the user will be enjoying and interacting with the music normally, we don’t think there is a noun/verb metaphor at this point in time. However, since we might instruct the user to interact with the music in a certain way, there could be some interaction that mimics real-world behavior. We will update the table when the time comes.

 

Midterm Project Proposal — Collaborative Creation

Midterm Project Proposal

Dina Bseiso, Jake Petterson, and Andrea Gagliano

 

We are interested in collaborative art making: collaborations simply for the enjoyment of experiencing the creative process with another person, to create a more inclusive environment for non-artists, or to redirect the value of art from finished masterpieces to the explorative and failure-driven process. We imagine that collaborative art making would harness the energy, movement, and attitudes of your partner, and could be extended to include remote collaboration. Here, we suggest two ideas along these veins.

 

Idea 1 – Orchestrated Ink

 

Just as an orchestra of musicians comes together to carry out a composition, translating physical gestures into art, so can artists come together to create one. Imagine a group of people going about their daily lives, each with a different device that measures a certain physical metric. One person in the group is near a canvas with a brush tool. The brush tool makes a mark depending on the physical metrics measured by the group, collectively.

 

For example:

  • Person A stretches a rubber band. The tension of the rubber band is an input to the thickness of the mark.
  • Person B wears a heat-sensing armband while doing an activity of their choice (running / walking / jumping around / lying down). The hotter the armband, the warmer or redder the color of the mark; the cooler the armband, the cooler the color.
  • Person C wears an emotion sensing wristband. When they are feeling particularly relaxed, the marks get wispy and light (perhaps slightly translucent). When they are feeling particularly agitated, the marks get more opaque, more intense, and more indelible.
  • Person D bends a potentiometer, controlling the amount of time that the marks will remain. For example, they could be ephemeral or lasting.
  • Person E squeezes a ball. The more force they use, the smaller the interval of time between the moments when the inputs of person A-D are sampled.

 

Through this assortment of input devices, each held by an individual in the group, the artists work together, combining their creative energy and sharing responsibility for the aesthetic of the resulting mark on the canvas.
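The sensor-to-mark mappings listed above could be combined roughly like this (a sketch; all ranges and thresholds are illustrative choices, not calibrated values):

```python
def mark_style(tension, armband_temp_c, agitation):
    """Combine collaborators' sensor readings into one brush mark.

    tension: Person A's rubber-band tension, normalized 0-1 -> width.
    armband_temp_c: Person B's armband temperature -> warm/cool hue.
    agitation: Person C's emotion reading, 0 (relaxed) to 1 -> opacity,
               so relaxed moments produce wispy, translucent marks.
    """
    return {
        "width": 1 + 9 * tension,          # 1-10 px
        "hue": "warm" if armband_temp_c > 30 else "cool",
        "opacity": 0.2 + 0.8 * agitation,  # never fully invisible
    }
```

Person D’s potentiometer (mark lifetime) and Person E’s squeeze ball (sampling interval) would govern when and how long this function’s output is applied, rather than the mark’s look.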

 

Idea 2 – Collaborative art-making

 

Imagine this scenario: You are in a beginning pottery class about to make your first bowl on the turntable. You watch the instructor at the front of the room move her hands up and down, in and out, and turn the table at different speeds to make a perfect bowl. But, as you try yours, you fail miserably. The teacher comes over to you and puts her hands over yours as you throw your bowl so that you begin to feel different pressures. But, what if there was something on the outside of your hands giving you haptic instructions instead of the instructor?

 

Idea: A device on the outside of your hands (a one-sided glove, rings, or perhaps something on the fingernail) that can detect where your hands are positioned while throwing your clay bowl, and then give output in the form of pressure, tapping, or heat, indicating that part of the hand should move up/down or apply more/less pressure. When throwing on the turntable, you use your foot on a pedal to make the table spin faster or slower. A similar device could sit on your foot (a toe ring or anklet, maybe?), instructing you through pressure/tapping/heat to go at a faster or slower pace. Detection on the instructor’s hands and foot would be the input that informs your output.

 

Such a device could be used for instruction, but could also be used for two people to feed off of each other’s actions in tandem to create art pieces they couldn’t have created on their own. For example, partner A and partner B are throwing bowls at the same time. Similar to how partner A and partner B might feed off of each other’s motions when dancing, partner B could feel partner A’s motions (via their hands) while throwing their bowl and respond with different speed/pressure/movement accordingly. Partner B’s responsive motion would be sent back to partner A continuing the process.

 

Other domains of application could include playing musical instruments (helping someone learn proper posture and fingering), calligraphy (and/or learning the script of a new language along with its associated sounds), drawing, skiing, etc. It could also help dismantle barriers to entry to certain crafts with regard to accessibility.

 

Midterm Project Proposal – Molly and Sasha

Emergency wearable device idea

Molly Mahar, Sasha Volkov

 

TUI Proposal:

We propose creating a small, wearable device to be used in emergency situations. The goal of the device is to have a means of contacting the authorities in the event of an act of violence or failure of health, even if one’s phone is inaccessible. The device would act as a small, independent token (with a distant embodiment) that could communicate with a user’s phone, presumably via bluetooth. We have identified 3 potential target users of this device: students, domestic abuse victims and the elderly.

 

Example Use Case:

A female student is walking home late at night. She has her device set up to send an alert with her location via text to her roommate if she clicks it twice. Someone mugs her: they take her phone and start hitting her. She taps her device twice; it starts recording (audio that 911 can, in some fashion, listen in on) and sends her location and an emergency message to her roommate. She is found shortly afterward, and there is potentially camera and audio evidence of the muggers.

 

Things we may want the token to do:

  • Accept/recognize a pattern-based pressure or safe word input (a secret handshake)
  • Record sound (for evidence)
  • Send an emergency notification including the user’s GPS coordinates to 911 and others of their choosing (preset)
  • Emit an audio or visual output to scare someone off or attract attention (ex: car alarm)
  • Allow access to info stored in an app/controlled through an app
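The “tap twice” trigger from the use case could be detected from raw press timestamps; a minimal sketch, with the 0.6-second window as an assumed tuning value:

```python
def is_double_tap(timestamps, window=0.6):
    """Detect the double-tap trigger from button-press timestamps (seconds).

    Two presses within `window` seconds count as the deliberate
    emergency signal; a lone press is ignored to avoid false alarms
    from accidental bumps.
    """
    return any(b - a <= window for a, b in zip(timestamps, timestamps[1:]))
```

The same scan generalizes to longer secret patterns (the “secret handshake” above) by matching the full sequence of inter-press gaps instead of a single pair.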

 

Questions this brings up:

  • Should it just be a token or a container? In other words, do we store the data locally, in the cloud, or does it auto-transmit to authorities/ a trusted person?
  • How does this work if you’re separated from your phone? How much distance would we have available?
  • Would information be routed through central servers which actually send the call/information?
  • What are the privacy concerns here, if any?
  • Where can you wear it? Is it waterproof?
  • How do we account for battery life?
  • What does it look like? Noun metaphor for elderly, versus ‘none’ metaphor for the other two populations?
  • What is the shape/size of the token, given certain demographic interests?