Embodied Geometry – Final Project Proposal

Proposed Interaction: Think Twister meets Tetris.
We are creating a collaborative, embodied geometry game that could provide children with a collaborative learning environment in which to explore geometric properties of shapes as well as properties of symmetry and spatial reasoning. We are also exploring the possibility of creating a shareable memento/artifact of their game play.

Figure 1 – Initial Setup

Interaction
For the scope of the current project, a pair of users will use their limbs to activate circles on an interactive mat (Figure 1). Users must always keep one limb on the center spot. From a geometric standpoint, this center point acts as a frame of reference for body movements (rotations, extensions, etc.). It is an intentional design constraint within the environment that allows for comparison of asynchronously created shapes, and we expect it to guide users' movements so that they are more likely to notice patterns and relationships.

Figure 2 – As the shape approaches the users, they must collaborate to create the shape before it hits the mat.

Users will coordinate their body positions to make composite shapes supplied by the system (Figure 2), paying special attention to the orientation of each component shape. A shape will be projected on the floor near the mat and will move slowly toward it; users must create the shape before it reaches the mat (before time runs out). For a shape to be recognized as successfully created, users must be touching all necessary points at the same time. The display will also include a small schematic of all the shapes completed so far. In Figure XXX, the triangle is the challenge shape (component), and the image in the upper left represents the shapes completed so far (composite). Users will also receive feedback on the mat regarding their touch positions.
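As a rough sketch of how the simultaneous-touch rule might be checked in software (the pad IDs, types, and function name here are our own illustration, not a finalized design), the mat controller could compare the set of currently active pads against the target shape's required pads:

```cpp
#include <set>

// A target shape is the set of mat-pad IDs that must be touched at once.
// The shape counts as "created" only when every required pad is covered;
// extra touches on other pads are ignored in this sketch.
bool shapeCompleted(const std::set<int>& targetPads,
                    const std::set<int>& activeTouches) {
    for (int pad : targetPads) {
        if (activeTouches.count(pad) == 0) return false; // a vertex is missing
    }
    return true; // all required points are touched simultaneously
}
```

A real implementation would also need to debounce touches and tolerate brief slips, but the core check is just this set comparison.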

The game will have different levels of difficulty that require users to use more limbs (easy: legs only, medium: legs and an arm, hard: both legs and both arms). All successfully created shapes will be overlaid to create a final image that users could take with them (Figure 3 below).

Figure 3 – An early sample of how the UI might look in terms of shapes created and still to be created.

Enhancing VR experiences

As someone who spent more time observing others immersed in the VR world during the recent sessions, what I loved most about the whole experience was the brain's incredible capacity to connect disconnected stimuli into a coherent impression of the immediate world. For example, standing on the edge of Vesper Peak was enough to trigger a fear of heights in one user, despite them being at no risk of falling off a cliff. When Kimiko slightly nudged another user off balance, their brain combined the sight of standing at a mountain's edge with the feeling of instability into a sense of real danger, and they started flailing. This reminded me of some of the VR rides at Universal Studios, where the roller-coaster car never actually travels anywhere: by tilting the car and synchronizing that motion with virtual imagery, the ride convinces the brain that it is on a moving roller coaster. It raises the question of why some real roller coasters (at Six Flags, for instance) put VR headsets on riders, as they seem to underestimate the brain's capacity to connect these virtual stimuli on its own.

What I liked least about the VR experience was that audio limitations still make it difficult to create a completely immersive experience. The trouble with headphones was that users could not hear the conductors of the experiment. An immersive audio experience would let users spatially locate the sources of sound, which lends greater validity to the virtual space as something realistic. The most intuitive solution would be to give the moderator a small microphone to lead the user through the experiment; however, this solution would require validating whether users actually prefer to be more disconnected from the real world while navigating the virtual one.

The Moire Effect

Description
Using a motor and images with a subtle sense of alignment, I tried to create a lamp that produces a Moire effect as it rotates, with the motion effect changing depending on the speed. I also designed the static patterns myself to produce the effect.

Components

• Arduino Uno
• DC Motor
• Design of static Moire patterns
• Transistor TIP120
• 1N4004 diode
• Jumper wires
• Potentiometer
• 1k Ohm Resistor
``````
/*
 * One pot fades one motor
 * by DojoDave <http://www.0j0.org>
 * Modified again by dave
 */

int potPin = A0;   // select the input pin for the potentiometer
int motorPin = 9;  // select the (PWM) pin for the motor
int val = 0;       // variable to store the value coming from the sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  val = analogRead(potPin);       // read the sensor value, between 0 and 1023
  Serial.println(val);
  analogWrite(motorPin, val / 4); // scale 0-1023 down to 0-255 for analogWrite
}

``````

Stretching a Seat

While a simple chair has a structure that is very intuitive to adapt to, sitting (or even semi-standing) humans have a tendency to shape themselves in innovative ways that chair manufacturers might not have foreseen.

Some of the uses that caught my eye include continuing a preceding activity, repositioning the chair so that another need is satisfied, and aligning oneself orthogonally to the intended orientation. As a continuation of an activity, students carrying a bag in a hurry might completely nullify the back-rest the chair provides by resting the bag against it and sitting on the edge themselves. Repositioning the chair to stretch a muscle or rest a knee, while deeply engrossed in study or a standing conversation respectively, offers further cases in which a seat's intended use becomes less rigid. And while the back-rest is meant for the back, the lack of an arm rest sometimes allows a user to lean sideways against the back-rest, stimulating the muscles in that position.

Here are some of the images:

1. Continuation of the preceding activity

2. Repositioning the chair (stretch ankle and rest knee)

3. Orthogonal

Piezo Phase

Description:
Using a force sensor, an LED, and a piezo, I tried to control the famous 12-note melody of Steve Reich's Piano Phase and play it along with the studio recording. The phasing effect, in which certain parts of the phrase become highlighted, was accentuated by the LED.

Components:

• Arduino Uno
• 1 Piezo Buzzer
• 1 Force Sensor Resistor
• 1 Green LED
• Green Scotch tape
• Ping pong ball to diffuse
• 10K and 220 Ohm resistors
• Jumper wires

Code:

``````
/* Piezo Phase - inspired by Steve Reich
*  (to be played along with Piano Phase)
* ------------
*
* Program to play tones depending on the data coming from the serial port.
*
* The calculation of the tones is made following the mathematical
* operation:
*
*       timeHigh = 1/(2 * toneFrequency) = period / 2
*
* where the different tones are described as in the table:
*
* Piano Phase notes:
*
* E4 F♯4 B4 C♯5 D5 F♯4 E4 C♯5 B4 F♯4 D5 C♯5
*
* Updated table below (thanks Tod Kurt):
*
* note   frequency   PW (timeHigh)
* E4     329.63 Hz   1516.85
* F#4    369.99 Hz   1351.38
* B4     493.88 Hz   1012.39
* C#5    554.37 Hz   901.92
* D5     587.30 Hz   851.35
*
* Transposed
* note   frequency   PW (timeHigh)
* E4     349.228 Hz   1432
* F#4    391.995 Hz   1276
* B4     523.251 Hz   956
* C#5    587.330 Hz   851
* D5     622.254 Hz   804
*
* Transposing to one-step lower
* Base code by Tod E. Kurt <tod@todbot.com> to use new Serial. commands
* and have a longer cycle time.
*/

int speakerPin = 13;
int ledPin = 7;
int fsrPin = A0;

int length = 12; // the number of notes
char notes[] = "EFBcdFEcBFdc";
int beats[] = { 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };
float tempo = 96.1538461538;

void playTone(int tone, int duration) {
  for (long i = 0; i < duration * 1000L; i += tone * 2) {
    digitalWrite(ledPin, HIGH);
    digitalWrite(speakerPin, HIGH);
    delayMicroseconds(tone);
    digitalWrite(ledPin, LOW);
    digitalWrite(speakerPin, LOW);
    delayMicroseconds(tone);
  }
}

void playNothing() {
  // the melody is stopped
  digitalWrite(ledPin, LOW);
  digitalWrite(speakerPin, LOW);
}

void playNote(char note, int duration) {
  char names[] = { 'C', 'D', 'E', 'F', 'G', 'A', 'B',
                   'c', 'd', 'e', 'f', 'g', 'a', 'b',
                   'x', 'y' };
  int tones[] = { 1915, 1700, 1432, 1276, 1275, 1136, 956,
                  851,  804,  765,  593,  468,  346,  224,
                  655,  715 };
  int SPEED = 5;

  // scan the full 16-entry note table for a match
  for (int i = 0; i < 16; i++) {
    if (names[i] == note) {
      int newduration = duration / SPEED;
      playTone(tones[i], newduration);
    }
  }
}

void setup() {
  Serial.begin(9600);
  pinMode(speakerPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  Serial.print("\n");

  for (int i = 0; i < length; i++) {
    if (notes[i] == ' ') {
      delay(beats[i] * tempo); // rest
    } else {
      playNote(notes[i], beats[i] * tempo);
    }
    delay(tempo); // pause between notes
  }
}
``````
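The half-period arithmetic in the note tables above can be checked with a one-liner (a sketch; the function name is ours): the pulse width in microseconds is one million divided by twice the frequency.

```cpp
#include <cmath>

// timeHigh (in microseconds) = 1e6 / (2 * frequency), i.e. half the period.
// E4 at 329.63 Hz gives roughly 1516.85 µs, matching the table above.
double timeHighMicros(double freqHz) {
    return 1e6 / (2.0 * freqHz);
}
```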

Fun with Drawing Tools – Elena, Ganesh, Leah

Members:
Elena Lopez, Ganesh Iyer, Leah Rosenbaum

Vision:
To create a collaborative puzzle for children which is fun physically and can be digitally enhanced for new ideas.

Description:
We envision this project as a closely embodied, full-metaphor tangible UI in which our main target audience – children aged 5 and above – can use drawing tools like a compass, a ruler, and perhaps a few other everyday objects to contribute to a narrative in a game environment on a tabletop interface. The team's intention is to encourage children to think of geometric tools in a more intuitive and intimate way. The project can also be thought of as something adults could engage with, as a conversation starter or as a game to play at a get-together.

The puzzle is tied into a narrative in which the players use their tools to help a protagonist navigate her way to a reward. The environment is shaped both by the tangible constraints the drawing tools themselves pose and by digital constraints – virtual traps in the puzzle – which make the game more difficult and engaging as players progress and build skill. The narrative we're looking to tie in also involves the use of materials – straw, wood, and stone – which led us to explore possibilities like the Three Little Pigs and the Big Bad Wolf, or helping a colony of hardworking ants secure food amid obstacles.

As a starting point, the most basic game environment could be a fluid path of dots indicating a reward-filled route to the goal, with the users' task being to connect these dots using the drawing tools they have. To make the learning process more coherent and gravity more intuitive, we chose a bird's-eye view of the environment, so we are looking at the protagonist from above. The dots can involve materials that stack up across levels, and since users may use up materials while building their path to the goal, they might be nudged to draw optimal paths.

Physically, the game dynamics consist of users drawing paths using soft tools (since the age bar is low, we want to make sure the compass and rulers are not sharp or pointed objects). Straight-line paths can be drawn using a ruler, and circular paths using a compass, with the protagonist traveling along the circumference. An additional advantage is that users can indicate the direction of movement by the way they draw these constituent paths.
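As a hypothetical sketch of how a compass stroke might be digitized (the names and parameters are ours, not a finalized design), the system could sample waypoints along the drawn arc for the protagonist to follow in order; the draw direction is preserved by the sign of the angle sweep:

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Sample (steps + 1) waypoints along a circular arc drawn with the compass.
// startAngle/endAngle are in radians; a negative sweep reverses direction.
std::vector<Point> arcPath(Point center, double radius,
                           double startAngle, double endAngle, int steps) {
    std::vector<Point> path;
    for (int i = 0; i <= steps; i++) {
        double a = startAngle + (endAngle - startAngle) * i / steps;
        path.push_back({center.x + radius * std::cos(a),
                        center.y + radius * std::sin(a)});
    }
    return path;
}
```

Ruler strokes would be the degenerate case: two endpoints interpolated linearly instead of along a circle.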

Next steps:
The next step for us as designers is to scope the project so that our vision is achieved. This involves detailing the game narrative – creating a story and reward system – and the virtual game dynamics, in addition to understanding intuitive ways in which children use geometric tools, so that the physical interactions we have proposed can be refined.

Looking forward to hearing your feedback! 🙂

Food Court Fountains

When I was having lunch at one of SAP's Walldorf offices, I noticed a peculiar thing about the food courts there: a consistent background buzz from water fountains which, as I would come to learn, were strategically placed among the tables. A colleague told me that these fountains were engineered in a particular way and served a purpose, and I was asked to identify that purpose.

In the middle of lunch, I realized that I could hear absolutely nothing from a table just 5 meters away, and the participants in that conversation did not appear to be restraining their speech to keep it private. Yet I could hear my colleague seated across the table clearly above the background noise of the water fountains. I recalled this while reading Strangely Familiar by Blauvelt, and it inspired me to think about clever system decisions that solve more than one problem.

The water fountain, while an aesthetic and classy value-add to the ambience of the food court, served an acoustic utility for an interpersonal cause (privacy of speech): an invisible design underneath something very conspicuous. Rethinking seemingly ordinary architectural elements as powerful tools for a social purpose aligns with Perec's method of inquiry.

Explosions in the Sky

Description:
I tried to use Particles by Daniel Shiffman to create an interesting visualization, using three potentiometers to adjust the RGB values of the foreground particles (as well as the background color) and a force-sensitive resistor to adjust the transparency of the particles. I used two artist's sponges to create a diffusion pad for the force sensor: squeezing it vertically increases the pressure and therefore the opacity of the particles, while squeezing it horizontally relieves the pressure and reduces the opacity.

Components:
1 Arduino UNO
1 FSR
3 Potentiometers
1 10k Ohm resistor
2 Artist sponges for force diffusion
Insulation tape

Arduino Code:

``````
// Explosions in the Sky with an FSR, 3 pots and an Arduino UNO
// Code by Ganesh Iyer, UC Berkeley
// Date: 27th September, 2016

int fsrPin = A0;     // the cell and 10K pulldown are connected to A0

// The RGB pots
// NOTE: put in the right pin here with the A(x) prefix
int potRedPin = A3;
int potGreenPin = A2;
int potBluePin = A1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read each sensor and send the values to Processing as one
  // comma-separated line; readings are scaled from 0-1023 to 0-255
  // so Processing can use them directly as color/alpha components.
  Serial.print(analogRead(potRedPin) / 4);
  Serial.print(",");
  Serial.print(analogRead(potGreenPin) / 4);
  Serial.print(",");
  Serial.print(analogRead(potBluePin) / 4);
  Serial.print(",");
  Serial.println(analogRead(fsrPin) / 4); // newline terminates one reading
  delay(100);
}
``````

Processing Code:

``````

// Explosions in the Sky by Ganesh Iyer
// Time: Ungodly
// Date: 27th September, 2016
// Based on Particles, by Daniel Shiffman.

// The idea is to use the FSR to vary the alpha value of the explosion
// and the potentiometers to vary the colors.

// Processing setup
import processing.serial.*;
String portname = "/dev/cu.usbmodem1411"; // or "COM5"
Serial port;
int lf = 10; // ASCII linefeed == 10

float r = 0;
float g = 0;
float b = 0;
float fsr = 0;

ParticleSystem ps;
PImage sprite;

void setup() {
  port = new Serial(this, portname, 9600);
  size(1024, 768, P2D);
  orientation(LANDSCAPE);
  sprite = loadImage("sprite.png"); // particle texture, as in Shiffman's original example
  ps = new ParticleSystem(10000);

  // Writing to the depth buffer is disabled to avoid rendering
  // artifacts due to the fact that the particles are semi-transparent
  // but not z-sorted.
  hint(DISABLE_DEPTH_MASK);
}

void draw() {
  serialEvent(port);
  background(255 - r, 255 - g, 255 - b);

  ps.update();
  ps.display();

  ps.setEmitter(width/2, (height/2) - 200);

  fill(255);
}

void serialEvent(Serial p) {
  String arduinoInput = p.readStringUntil(lf); // read one full line from the Arduino
  if (arduinoInput != null) {
    arduinoInput = trim(arduinoInput);
    float inputs[] = float(split(arduinoInput, ','));
    if (inputs.length == 4) {
      r = inputs[0];
      g = inputs[1];
      b = inputs[2];
      fsr = inputs[3]; // the FSR reading drives the particles' alpha
    }
  }
}

class Particle {
  PVector velocity;
  float lifespan;
  // The alpha value is derived from the FSR: as long as the FSR is pressed,
  // the particles are visible. If it is not pressed at all, the alpha value
  // is 0 and the particles are invisible anyway, so no mouse click is needed.
  // The color is derived from the RGB pots.

  PShape part;
  float partSize;

  PVector gravity = new PVector(0, 0.1);

  Particle() {
    partSize = random(10, 60);
    part = createShape();
    part.beginShape(QUAD);
    part.noStroke();
    part.texture(sprite);
    part.normal(0, 0, 1);
    part.vertex(-partSize/2, -partSize/2, 0, 0);
    part.vertex(+partSize/2, -partSize/2, sprite.width, 0);
    part.vertex(+partSize/2, +partSize/2, sprite.width, sprite.height);
    part.vertex(-partSize/2, +partSize/2, 0, sprite.height);
    part.endShape();

    rebirth(width/2, height/2);
    lifespan = random(255);
  }

  PShape getShape() {
    return part;
  }

  void rebirth(float x, float y) {
    float a = random(TWO_PI);
    float speed = random(0.5, 4);
    velocity = new PVector(cos(a), sin(a));
    velocity.mult(speed);
    lifespan = 255;
    part.resetMatrix();
    part.translate(x, y);
  }

  boolean isDead() {
    if (lifespan < 0) {
      return true;
    } else {
      return false;
    }
  }

  public void update() {
    lifespan = lifespan - 1;
    velocity.add(gravity);

    // Tint each particle with the pot-driven color; the FSR value is the alpha.
    part.setTint(color(r, g, b, fsr));
    part.translate(velocity.x, velocity.y);
  }
}

class ParticleSystem {
  ArrayList<Particle> particles;
  PShape particleShape;

  ParticleSystem(int n) {
    particles = new ArrayList<Particle>();
    particleShape = createShape(PShape.GROUP);

    for (int i = 0; i < n; i++) {
      Particle p = new Particle();
      particles.add(p);                     // keep a handle for update/rebirth
      particleShape.addChild(p.getShape()); // add to the group drawn by display()
    }
  }

  void update() {
    for (Particle p : particles) {
      p.update();
    }
  }

  void setEmitter(float x, float y) {
    for (Particle p : particles) {
      p.rebirth(x, y);
    }
  }

  void display() {
    shape(particleShape);
  }
}
``````

Video of the visualization

Revisiting Threats and Peripheral Noise

The most plausible explanation for peripheral vision seems to trace back to evolution: during their hunting-and-gathering phase, humans needed to be aware of threats outside their main line of vision, and this kind of critical information also conveys the direction of the threat. While Pousman et al.'s paper focuses on ambient media that display non-critical information, the fidelity of today's ambient media and our modern-day interpretation of what a threat is allow us to investigate innovative approaches by revisiting how we can feed critical information to our peripheral senses more responsibly.

Distractions are an unfortunate outcome of the current proliferation of information, as irresponsibly designed stimuli at the periphery of our senses begin to demand our attention. Because of this, we may need innovations that filter noise out of our periphery and help focus our attention. This is not, of course, in the way the Isolator helmet does it (courtesy Noura: http://laughingsquid.com/the-isolator-a-bizarre-helmet-invented-in-1925-used-to-help-increase-focus-and-concentration/), but ambient media may be able to distinguish peripheral noise from peripheral signal in real time to help us focus. Interpreting threats – from both an evolutionary and a modern-day perspective – would then become an important asset to ambient media, despite the various definitions of ambient media centering on non-critical information. Threats can also extend across time and space: for a student, a threat might be missing an important deadline, or becoming aware that her workload is about to multiply in the next few weeks.

Ambient media may also help augment our peripheral vision by identifying threats that lie beyond it. For example, on detecting a potential threat pursuing a user at night, ambient media could trigger street lights or a nearby car alarm to attract attention and deter the threat.

Midterm Project Proposal – Leah, Elena, Ganesh

As a group, we're primarily looking at problem spaces related to storytelling for children, children's education, and games in general. Our objective is to create an enriching and playful environment that both adults and children can use and enjoy. After extensive brainstorming, we came up with three ideas that we are looking to explore further!

Idea 1 – Yuyay: #children #storytelling #education

A small container (in the sense of Holmquist et al.'s definition) that preserves any of your thoughts and memories so you can share them with someone… or keep them for your future self. With these devices we want to extol the value of our thoughts and memories by transitioning them from the realm of abstraction to the world of tangibility. Additionally, by seizing on this materiality, we aspire to foster meaningful human connections.

Imagine your family used these devices when you were growing up as a game to foster learning and conversation. Every container had a prompt or question associated with it, blue containers were science questions (e.g., why is the sky blue?), the purple ones were questions related to the family history (e.g., How did grandma and grandpa meet?), the red ones were personal questions (e.g., What was your favorite Christmas?), etc. Every other evening after dinner, you and your whole family would bring all the pieces to the table, select one of them, and then enjoy a very pleasant conversation. Once the questions were answered, your mom would give the devices to you and your siblings, so you could record new interesting and meaningful questions.

Now imagine that 15 years have passed by… Your mom brings an old box, opens it and there they are. You listen to your voice and the questions you used to ask. Your mom shares with you her beautiful memories of those days.

Given the unrestricted nature of these devices we envision many uses for them. You could use them during a game-night with your friends similarly to how you would play truth or dare, or as icebreakers in a meeting or conference (pairing up people and asking a question related to their common interests – based on simple questions asked during the registration process), or you could send a “secret” message to your significant other, or send pieces of a single message to different family members (the full message will only reveal itself when all of them are together), teachers could use them in the classroom to capture students’ doubts, etc.

Idea 2 – Pong Tribute: #games

Using a projector driven by your phone, you can project a specialized Pong interface onto a wall. This version of Pong is controlled not by conventional arrow keys but by each player throwing a ball at the wall. As the ball hits the wall, that player's paddle appears at the point of impact, helping them return the digital ball. When the ball bounces back and the player catches it, the paddle disappears, encouraging the player to throw again. From this simple concept, the game mechanics can be developed further.

This is an AR concept and will require a projector and a camera to capture the location of the ball.

The idea is also to make this open-source, so that grokkers and geeks can come up with their own cool innovations from this basic building block. For example, you could introduce gravity into the game environment so that your paddle starts falling as soon as it is cast. You could also use this kind of interface not just for Pong but for solving grid-based 2D puzzles, or add physical variations, such as combining Pong with racquetball to form an altogether new game or a training routine for practicing shot accuracy.
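A minimal sketch of the paddle-placement logic described above, assuming the camera reports the pixel row where the ball struck the wall (the function and parameter names are hypothetical): the paddle center snaps to the hit point, clamped so the paddle stays fully inside the projected play field.

```cpp
#include <algorithm>

// Map a detected ball-impact row to a paddle center position,
// keeping the whole paddle within [0, fieldHeight].
int paddleCenterFromHit(int hitY, int paddleHeight, int fieldHeight) {
    int half = paddleHeight / 2;
    return std::min(std::max(hitY, half), fieldHeight - half);
}
```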

Idea 3 – Augmented Tools for Mathematics: #children #education

A ruler, protractor, compass, and perhaps other tools could be augmented as computational input devices. They could be used as a tangible interface, perhaps for LOGO or a digital geometry environment (DGE). A child could specify a distance on the ruler by pushing a sliding knob, set an angle on the protractor by rotating an arm, or set a radius and arc length on the compass. The computer could react in real time (or perhaps when the user pushes a “play” button) by moving a figure or on-screen stylus the corresponding distance, angle, or arc length. Sequences of moves could be stored on tokens on which users could draw identifying shapes or words.
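The replay step described above could be sketched as a LOGO-style interpreter (a hypothetical illustration; the names are ours): each stored token either turns the on-screen stylus by a protractor angle or moves it forward by a ruler distance.

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// A turtle-style stylus: protractor tokens turn it, ruler tokens move it
// forward along its current heading.
struct Stylus {
    double x = 0, y = 0;
    double headingDeg = 0; // 0 = facing along +x

    void turn(double deg) { headingDeg += deg; }  // protractor token
    void forward(double dist) {                   // ruler token
        double rad = headingDeg * kPi / 180.0;
        x += dist * std::cos(rad);
        y += dist * std::sin(rad);
    }
};
```

A compass token could be added analogously as a combined turn-and-forward step repeated along an arc.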

Child motivations remain ill-defined. We could pose an initial challenge (navigating a maze, constructing a goal shape or scene) as a training task. How could we frame the interaction or display space to motivate further interaction and exploration?

Educational goals of these tools include:

• Linking sometimes abstract virtual objects (distances and shapes) with the physical tools used to create those shapes in the real world
• Promoting progressive quantification of children’s drawing/movement techniques with the goal that these movements and experiences could become resources for more formal work (as in math classes).
• Comparing angles that are equal, for example – making textbook figures come alive through gestures