Embodied Geometry – Final Project Proposal

Proposed Interaction: Think Twister meets Tetris.
We are creating a collaborative, embodied geometry game that could provide children with a learning environment in which to explore geometric properties of shapes, as well as symmetry and spatial reasoning. We are also exploring the possibility of creating a shareable memento/artifact of their game play.

Figure 1 – Initial Setup

Interaction
For the scope of the current project, a pair of users will use their limbs to activate circles on an interactive mat (Figure 1). Users must always keep one limb on the center spot. From a geometric standpoint, this center point acts as a frame of reference for body movements (rotations, extensions, etc.). This is an intentional design constraint within the environment that allows for comparison of asynchronously created shapes. We intend for this point to guide users' movements so that they are more likely to notice patterns and relationships.

Figure 2 – As the shape approaches the users, they must collaborate to create it before it reaches the mat.

Users will coordinate their body positions to make composite shapes supplied by the system (Figure 2), paying special attention to the orientation of each component shape. A shape will be projected on the floor near the mat and will move slowly toward it; users must create the shape before it reaches the mat (i.e., before time runs out). For a shape to be recognized as successfully created, users must be touching all of the necessary points at the same time. The display will also include a small schematic of all the shapes completed so far. In Figure 3, the triangle is the challenge shape (component) and the image in the upper left represents the shapes completed so far (the composite). Users will also receive feedback on the mat regarding their touch positions.
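
As a rough sketch of how the completion check might work, here is a hypothetical Processing snippet. It assumes the mat can report which circles are currently pressed as a boolean array; the circle IDs, array size, and names below are placeholders, not part of the actual design.

// Hypothetical completion check: a challenge shape is a set of circle IDs on the mat,
// and it counts as created only if every required circle is pressed at the same time
// before the timer runs out.
int[] requiredCircles = {2, 5, 9};    // placeholder IDs defining the challenge shape
boolean[] pressed = new boolean[16];  // updated elsewhere from the mat's sensors
float timeLeft = 10.0;                // seconds until the projected shape reaches the mat

boolean shapeComplete() {
  for (int id : requiredCircles) {
    if (!pressed[id]) return false;   // one missing touch means the shape is not complete
  }
  return true;                        // all required circles are touched simultaneously
}

void draw() {
  timeLeft -= 1.0 / frameRate;
  if (timeLeft <= 0) {
    // time ran out: count the attempt as missed and load the next challenge shape
  } else if (shapeComplete()) {
    // success: add this shape to the composite overlay and load the next challenge shape
  }
}

The same per-circle data could also drive the on-mat feedback about touch positions mentioned above.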

The game will have different levels of difficulty that require users to use more limbs (easy: legs only; medium: legs and one arm; hard: both legs and both arms). All successfully created shapes will be overlaid to create a final image that users can take with them (Figure 3 below).

Figure 3 – An early sample of how the UI might look, showing the shapes already created and the shape to be created.

Virtual reality experience

This was my second experience with VR, and it largely overshadowed the first one. It was awesome! I especially enjoyed being part of this fantasy world. Being able to easily switch from one world to another was incredible…

At the very beginning it took me some time to figure out how to start the application (the lab). I didn’t know I had to bring the controller close to the “start” button on the podium (not sure if this is the correct name) and then pull the trigger (or press the circular button, I don’t remember which one it was). However, now that I think about it, this was probably the most intuitive thing to do.

Once in the lab I wanted to visit all the different places (or experiences) available. It took me a bit to realize that the best way to move around the lab was to use the teleportation option. At first I wanted to move around as I would in real life, but as you can imagine, I hit a wall and some furniture.

The first thing that caught my attention in the lab was the old tree. Once I approached it I was not sure what I had to do, but Noura told me I should grab the sphere and bring it to my face (without her help I’m not sure I would have figured it out by myself). I LOVED being inside this old cabin with magical objects (in my head I was in the house of a wizard). It was great to suddenly be in this fantasy world.

The other experience that I liked a lot was visiting the solar system. Even though I have been told about and have read about the relative sizes of the planets, it was impressive to see how small the Earth is compared to the Sun, Saturn, and Jupiter. I was there! I could see it. Being able to grab the planets with my hands was a nice detail; I even put Saturn on my head! While I was interacting with the planets I couldn’t help thinking that VR has a lot of potential in education.

I also liked playing with the robot dog. Petting it and scratching its tummy (or trying to) made me happy. I thought about my pets back in Ecuador. To be honest, I didn’t think I was going to feel this kind of “connection” with a virtual entity. However, now that I reflect on it, it makes me feel weird and a bit afraid.

Related to the interaction with the dog, I think the experience could have been improved by providing some sort of haptic feedback. It was strange to pet it and feel nothing (having my hand go through it was ‘upsetting’). Also, having a different kind of controller (like gloves) would help make the experience more natural.

Finally, I completely agree with something that was mentioned in the Wired article: the transition from the virtual to the real world might be difficult. I really wanted to stay in this parallel world longer. Once I took off the equipment and realized that I was in this “boring” room, I felt disoriented and maybe even a bit sad (“I’m back in the real world”). I needed a few seconds to be fully back. Although I’m well aware it might have been the excitement of novelty, I can’t help feeling troubled. What if people actually prefer this other world? What if people get so hooked that they forget about reality, and the valuable connections with real animals and people?

Fun with Drawing Tools – Elena, Ganesh, Leah

Members:
Elena Lopez, Ganesh Iyer, Leah Rosenbaum

Vision:
To create a collaborative puzzle for children that is physically fun and can be digitally enhanced with new ideas.

Fun with Drawing Tools

Description:
We envision this project as a closely embodied, full-metaphor tangible UI where our main target audience – children aged 5 and above – can use drawing tools like a compass, a ruler, and maybe a few other everyday objects to contribute to a narrative in a game environment on a tabletop interface. The intention of the team is to encourage children to think of geometric tools in a more intuitive and intimate way. This project can also be thought of as something adults can engage with to communicate with each other, as a conversation starter or a game to play at a get-together.

The puzzle is tied into a narrative in which the players use their tools to help a protagonist navigate her way to a reward. This environment would be shaped by the tangible constraints that the drawing tools themselves pose and by digital constraints in the form of virtual traps in the puzzle, which make the game more difficult and engaging as players progress and build skill. The narrative we are looking to tie in also involves the use of materials – straw, wood, and stone – which led us to explore possible narratives like the Three Little Pigs and the Big Bad Wolf, or getting a colony of hardworking ants to secure food amid obstacles.

As a starting point, the most basic game environment could be a fluid path of dots indicating a reward-filled route to the goal, and the users' task is to connect these dots using the drawing tools they have. To make the learning process more coherent and gravity more intuitive, we chose a bird's-eye view of the environment, so we are looking at the protagonist from above. The dots can involve materials that stack up across levels, and since users may use up materials to build their path to the goal, they might be nudged to draw optimal paths.

Fun with Drawing Tools 2

Physically, the game dynamics would consist of users drawing paths with soft tools (since the target age is low, we also want to make sure that the compasses and rulers are not sharp or pointed objects). Straight-line paths can be drawn using a ruler, and circular paths can be made using a compass for the protagonist to travel along the circumference. An additional advantage is that users can indicate the direction of movement by the way they draw these constituent paths.
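
To make this concrete, here is a purely hypothetical Processing sketch of how such drawn paths might be represented internally. The class and field names are placeholders, and it assumes the tabletop can track where each tool stroke starts and ends.

// Hypothetical representation of a drawn path segment: a ruler stroke becomes a
// straight segment between its start and end points, while a compass stroke becomes
// an arc around its pivot. The protagonist walks each segment in the order (and
// direction) it was drawn.
class PathSegment {
  boolean isArc;       // false = ruler (straight line), true = compass (arc)
  PVector start, end;  // where the stroke began and ended (this also gives the direction)
  PVector pivot;       // compass pivot point (only used for arcs)

  PathSegment(boolean isArc, PVector start, PVector end, PVector pivot) {
    this.isArc = isArc;
    this.start = start;
    this.end = end;
    this.pivot = pivot;
  }

  // position of the protagonist at fraction t (0..1) along this segment
  PVector pointAt(float t) {
    if (!isArc) {
      return PVector.lerp(start, end, t);
    }
    // naive angle interpolation around the pivot (does not handle wrap-around)
    float a0 = atan2(start.y - pivot.y, start.x - pivot.x);
    float a1 = atan2(end.y - pivot.y, end.x - pivot.x);
    float r  = PVector.dist(start, pivot);
    float a  = lerp(a0, a1, t);
    return new PVector(pivot.x + r * cos(a), pivot.y + r * sin(a));
  }
}

A list of these segments, built in the order the strokes were made, would be enough to animate the protagonist along the users' drawing and to check it against the dots and traps in the level.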

Fun with Drawing Tools 3

Next steps:
Our next step as designers on this project would be to scope it so that our vision can be achieved. This would involve detailing the game narrative – creating a story and a reward system – and the virtual game dynamics, in addition to understanding the intuitive ways in which children use geometric tools, so that the physical interactions we have proposed can be refined.

Looking forward to hearing your feedback! 🙂

Sip of conflict

I’m not sure if this example falls under the category of “industrial design”, but it really helped me to look at things differently. Some of you are probably familiar with an iconic exhibit from the Exploratorium, “Sip of conflict”, where visitors are prompted to drink from a water fountain fashioned from an actual (but unused) toilet. The intention of the exhibit is to let visitors experience the tension between reason and emotion. What is wrong with drinking water from this fountain? How much does its shape matter when you know for sure it is as clean as any other water fountain? I have some friends who were not able to do it. They were simply disgusted by the idea, and even watching me do it made them feel uncomfortable. I feel this goes along with the point made by Sanders, who states that “experience is a constructive activity”, and Buchenau et al.’s assertion that “The experience of even simple artifacts does not exist in a vacuum but, rather, in dynamic relationship with other people, places and objects.” If I had been by myself at the Exploratorium I might have struggled a little bit with myself before drinking the water, but being with my friends added a whole new level to the experience.

Additionally, “Sip of conflict” helped me to: a) be more aware of the psychological barriers that we impose on ourselves, and how hard it can be to overcome them (even with the help of logic), and b) rethink my assumptions about design (since there are no rules to be followed).

I believe this exhibit is a very good example of inviting visitors to look at everyday objects in a different way, and of challenging our presumptions about and relationships with them (precisely the goal of the “Strangely familiar” exhibit).

Fit your backpack

Components:

  • 1 potentiometer (pot)
  • 1 force-sensitive resistor (FSR)
  • 1 10 kΩ resistor
  • 1 220 Ω resistor
  • Jumper wires
  • 1 USB cable
  • 1 computer
  • 1 Arduino Uno
  • 1 breadboard

Description

I created a program that lets you know whether the weight of your backpack is well distributed between your two shoulders. In theory I should use two FSRs, one in each strap; however, since I only have one, I am simulating the second one with a pot.

The two analog signals are acquired using the Arduino's ADC and then compared. The result of the comparison is sent to the computer over serial, and this information is used to show a graphical depiction of the situation.

Arduino code


int sensorF = A0; // select the input pin for the FSR
int sensorP = A1; // select the input pin for the pot

void setup() {
  Serial.begin(9600); // sets the data rate in bits per second (baud) for serial data transmission
  Serial.println("WELCOME to the Sensor lab!");
  Serial.print('\n');
}

void loop() {
  int PotValue = 0;
  int FSRValue = 0;
  int x = 0;

  // read the values from the sensors:
  PotValue = analogRead(sensorP);
  FSRValue = analogRead(sensorF);
  x = PotValue - FSRValue;

  if (x > -265 && x < 256) {   // difference within the tolerance band: the load is balanced
    //Serial.println("Equal");
    Serial.println(0);
  }
  else if (x > -265) {         // pot reads much higher than the FSR
    //Serial.println("Left");
    Serial.println(-1);
  }
  else {                       // FSR reads much higher than the pot
    //Serial.println("Right");
    Serial.println(1);
  }

  /* Serial.println(FSRValue);
  Serial.print("POT: ");
  Serial.println(PotValue);
  delay(1000); */
}

Processing code


import processing.serial.*; // import the library for serial communication; the Serial library reads and writes data to and from external devices one byte at a time

String portname = "COM4"; // or "COM5" – the name of the serial port
Serial port;              // create an object (called "port") from the class Serial
String buf = "";          // declare and initialize the String object "buf"
int cr = 13;              // ASCII carriage return ('\r') == 13: moves the cursor to the beginning of the line
int lf = 10;              // ASCII line feed ('\n') == 10: moves one line forward
int serialVal = 0;        // declare and initialize the variable "serialVal" where we store the data read from the serial port
PShape bot;

void setup() {
  size(1024, 1024); // size of the window
  bot = loadShape("pp.svg");
  port = new Serial(this, portname, 9600);
  noStroke();
}

void draw() {
  background(240, 240, 240);
  shape(bot, 400, 200); // draw the backpack shape at coordinate (400, 200) at its default size
  textSize(48);
  fill(36, 148, 245);
  text("Backpack FIT", 350, 100);
  if (serialVal == 0) {
    textSize(24);
    fill(8, 193, 30);
    text("Your back is safe. You are good to go", 300, 900);
    triangle(590, 300, 530, 300, 560, 350);
    triangle(490, 300, 430, 300, 460, 350);
  }
  else if (serialVal == -1) {
    textSize(24);
    fill(255, 0, 0);
    text("Fix the left strap of your backpack", 310, 900);
    triangle(590, 300, 530, 300, 560, 350);
  }
  else {
    textSize(24);
    fill(255, 0, 0);
    text("Fix the right strap of your backpack", 310, 900);
    fill(255, 0, 0);
    triangle(490, 300, 430, 300, 460, 350);
  }
}

// called whenever serial data arrives
void serialEvent(Serial port) { // from the Serial library; called when data is available (use one of the read() methods to capture it)
  int c = port.read(); // returns a number between 0 and 255 for the next byte waiting in the buffer, or -1 if there is no byte (which should be avoided by first checking available())
  if (c != lf && c != cr) {
    buf = buf + char(c); // if the byte read is neither line feed nor carriage return, keep appending it to the buffer
  }
  if (c == lf) { // a full line has been received
    serialVal = int(buf); // convert the string buf to an int
    println("val=" + serialVal); // debugging
    buf = ""; // reset the buffer to empty
  }
}

Some ideas

When I was reading the articles for this week I kept thinking about what kind of information I would like to have available in my surroundings in a non-obtrusive way. The first thought that came to mind was an issue that worried me (kind of) at the beginning of the semester. In August I moved to a new place, and for the first time in my life I was going to have roommates. Of course this was (and still is) a whole new experience for me, because for most of my life I lived with my family, and since I moved to the U.S. I had lived by myself. Anyway, something I care a lot about is being respectful of other people’s space and comfort. Therefore, I would like to know when I could do things like playing the kind of music I like out loud in the living room, or using the old squeaky dryer or the super noisy blender, without bothering other people. So, I think it would be awesome to have some sort of simple, yet elegant, information system (maybe a symbolic sculptural display, according to Pousman and Stasko) that would let us know who is home, and maybe even a simple status (“no bother”, “not my best day”, “sleeping”, “happy to talk”, “need human interaction”, “silence, studying”, etc.).

Other ideas that came to mind are dynamic and aesthetically pleasing visualization of:

  1. The amount of electrical energy consumed by a specific space (room, office, house, building).
  2. The number of paper towels used in a specific public restroom.
  3. The amount and speed of electrons flowing through a specific wire.
  4. The amount of UV radiation.
  5. Being a bit more futuristic, the “average mood” of a space, based on the general mood of the individuals in it (although human emotions are very complex and might be hard to classify). Or the aggregated mood of the space visualized through the mood of each individual (in an anonymous way, obviously). I’m imagining some sort of 3D matrix of lights, with each light representing a person and each color an emotion.

Pots, pots, pots

Components
• 1 Arduino Uno
• 3 resistors (220 ohms)
• 3 pots
• 3 LEDs (red, blue, green)
• 1 breadboard
• 1 USB cable
• Jumper wires
• 1 laptop

Description
For this lab I used my Arduino Uno board to control the brightness of three LEDs by manipulating three pots.

Code

int sensorRed = A0;    // select the input pin for potentiometer 1
int sensorBlue = A1;   // select the input pin for potentiometer 2
int sensorGreen = A2;  // select the input pin for potentiometer 3

int redPin = 9;        // select the pin for the red LED
int bluePin = 10;      // select the pin for the blue LED
int greenPin = 11;     // select the pin for the green LED

int redValue = 0;
int blueValue = 0;
int greenValue = 0;

void setup() {
  // declare the LED pins as OUTPUTs:
  pinMode(redPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  pinMode(greenPin, OUTPUT);
}

void loop() {
  // read the values from the sensors:
  redValue = analogRead(sensorRed);
  blueValue = analogRead(sensorBlue);
  greenValue = analogRead(sensorGreen);

  // scale the 10-bit readings (0–1023) down to the 8-bit PWM range (0–255):
  redValue = redValue / 4;
  blueValue = blueValue / 4;
  greenValue = greenValue / 4;

  analogWrite(redPin, redValue);
  analogWrite(bluePin, blueValue);
  analogWrite(greenPin, greenValue);
}

A bit confused…

I must admit that my thoughts while reading Fishkin’s taxonomy align with those of Leah. I’m trying to understand the nature of the input event, the sophistication of the processing of the input signal, and the complexity of the output event (this probably comes to mind because I’m continuously thinking about the midterm project: what would qualify as a TUI for this class? Could it be as simple as a greeting card, or should it be as complex as a Topobo?). One of the examples provided is the “Platypus Amoeba”, which responds to touching and petting; what would happen if we changed this input to simply pressing an on/off switch? The embodiment would not change (it would still be full, because the output device is the input device), but maybe the metaphor would change from noun and verb to none? So, is this second dimension capturing, in some way, the complexity of the system?

Additionally (and as Leah mentions), if the input of the system is merely a user’s gestures (sensed, for example, through a Kinect) that control the output on a screen or projection, would this system be considered a TUI or simply a UI? The reading mentions the Graspable Display, in which the “push away” gesture maps to “reduce text point size”; would this be equivalent to what I am asking?

Finally, I am just wondering what a user interface is. Are we assuming that a computational system must be part of it? If that is the case, how would we classify a doorknob, for example?

Whimsical light!

Description:

The brightness of the R, G, and B LEDs is controlled via serial commands. Once we have compiled and uploaded the sketch to our Arduino, we open the Serial Monitor (Tools → Serial Monitor). When the Arduino is ready we will see the message “WELCOME to the WHIMSICAL light!” in the Serial Monitor window, and then we will be prompted to enter a color command.

The structure of the color command is “<colorCode><colorCode>…<colorCode>”, where “colorCode” is one of “r”, “g”, or “b”. The intensity of each light is determined by the number of times the color code is typed (i.e., once: LED off, twice: low, three times: medium low, four times: medium, five times: medium high, and six times: high).

E.g.:

“r”    turns the red LED off.
“gg”   turns the green LED to low brightness.
“bbbb” turns the blue LED to medium brightness.
If an unrecognized character is received, you will get a surprise!

Components

  • 1 Arduino Uno
  • 3 resistors (220 ohms)
  • 1 red LED, 1 blue LED, 1 green LED
  • 1 USB cable
  • Solid core wire
  • Bubble wrap
  • White plastic piece (see photo)
  • Laptop

Code

char serInString[100];  // array that will hold the bytes of the incoming string (up to 100 characters); you must state how long the array will be or it won't work properly
char colorCode;         // "colorCode" is one of 'r', 'g', or 'b'
int colorVal;           // "colorVal" is a number from 0 to 255
int redPin   = 9;       // red LED,   connected to digital pin 9
int greenPin = 10;      // green LED, connected to digital pin 10
int bluePin  = 11;      // blue LED,  connected to digital pin 11
int flag = 1;           // controls when the prompt is printed
int cont = 0;           // counts how many times the color code was typed

void setup() {           // ********* Why don't we declare all the variables here instead of earlier?
  pinMode(redPin,   OUTPUT);   // sets the pins as outputs
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin,  OUTPUT);
  Serial.begin(9600);          // sets the data rate in bits per second (baud) for serial data transmission
  analogWrite(redPin,   127);  // set them all to mid brightness
  analogWrite(greenPin, 127);
  analogWrite(bluePin,  127);

  Serial.println("WELCOME to the WHIMSICAL light!");
  Serial.print('\n');
}

void loop() {
  if (flag == 1) {
    Serial.println("Please enter a color command (1 to 6 characters).");
    Serial.println("(e.g. 'r' to turn the red light off, 'rr' for a low red light, 'gggg' for a medium green light, or 'bbbbbb' for a high blue light):");
    flag = 0;
  }
  memset(serInString, 0, 100);    // set all bytes in the buffer to 0 (memset sets the first num bytes of the block of memory pointed to by ptr to the specified value)
  readSerialString(serInString);  // read the serial port and create a string out of what you read
  colorCode = serInString[0];

  if (colorCode == 'r' || colorCode == 'g' || colorCode == 'b') {
    Serial.print("colorCode: ");
    Serial.println(colorCode);
    // count how many times the color code was typed:
    for (int x = 0; x < 7; x++) {
      if (serInString[x] == colorCode) {
        cont++;
      }
    }
    Serial.print("Setting color ");
    Serial.print(colorCode);
    if (cont == 1) {
      Serial.println(" to 0");
      Serial.print('\n');
      colorVal = 0;
    }
    else if (cont == 2) {
      Serial.println(" to LOW");
      Serial.print('\n');
      colorVal = 51;
    }
    else if (cont == 3) {
      Serial.println(" to MEDIUM LOW");
      Serial.print('\n');
      colorVal = 102;
    }
    else if (cont == 4) {
      Serial.println(" to MEDIUM");
      Serial.print('\n');
      colorVal = 153;
    }
    else if (cont == 5) {
      Serial.println(" to MEDIUM HIGH");
      Serial.print('\n');
      colorVal = 204;
    }
    else if (cont == 6) {
      Serial.println(" to HIGH");
      Serial.print('\n');
      colorVal = 255;
    }
    cont = 0;
    serInString[0] = 0;  // indicates we've used this string; its value is already backed up in colorCode and colorVal. (Why do we need to indicate this if the whole array is set to 0 at the beginning of loop()?)
    if (colorCode == 'r')
      analogWrite(redPin, colorVal);
    else if (colorCode == 'g')
      analogWrite(greenPin, colorVal);
    else if (colorCode == 'b')
      analogWrite(bluePin, colorVal);
    flag = 1;
  }
  if (colorCode != 'r' && colorCode != 'b' && colorCode != 'g' && colorCode != 0) {
    // surprise! ramp each color up in random steps, then turn everything off
    for (int y = 0; y < 255; y = y + random(10, 30)) {
      analogWrite(redPin, y);
      delay(80);
    }
    for (int y = 0; y < 256; y = y + random(4, 25)) {
      analogWrite(bluePin, y);
      delay(80);
    }
    for (int y = 0; y < 256; y = y + random(12, 30)) {
      analogWrite(greenPin, y);
      delay(80);
    }
    analogWrite(redPin, 0);
    analogWrite(bluePin, 0);
    analogWrite(greenPin, 0);
    serInString[0] = 0;
  }
  delay(100);  // wait a bit for serial data
}
// read a string from the serial port and store it in the array you supply as an argument
void readSerialString(char *strArray) {
  int i = 0;
  // Serial.available(): returns the number of bytes (characters) available for reading from the serial port
  if (!Serial.available()) {  // no data is waiting on the serial port, so go back to the main loop
    return;
  }
  //Serial.print(" Serial available value: ");
  //Serial.println(Serial.available());
  while (Serial.available() > 0) {  // while there are still bytes to be read, read them and store them in the array
    strArray[i] = Serial.read();    // Serial.read() returns the first byte of incoming serial data (or -1 if no data is available); the first byte received is stored in position [0]
    i++;
  }
}

Being the machine

While I was reading the chapter, the first thing that came to mind was the work of Laura Devendorf, “Being the Machine”. This work is about an alternative way of 3D printing, in which a person plays the role of the 3D printer. By doing so, not only is the intimate human-material relationship preserved, but so is the pleasure that goes along with the process of creating a handcrafted object (McCullough, p. 10). In Laura’s own words:

“Being the Machine is an alternative 3D printer that operates in terms of negotiation rather than delegation. It takes the instructions typically provided to 3D printers and presents them to human makers to follow – essentially creating a system for 3D printing by hand with whatever tools and materials one deems necessary. It works like a 3D version of a game of connect the dots: a 3D model is uploaded and sent to the printer, the printer draws a single laser point where the user should lay down their material, and as the laser point moves, the user follows, manually drawing the paths and layers until their model is complete. The system makes no attempt to guide the maker or tell them how to be more precise or accurate, it simply presents the moves the machine would perform and asks the maker to take it from there.” (http://artfordorks.com/2014/06/being-the-machine/)

I think this is one of many examples showing that the advancement of abstraction and the improvement of human (hand) skill are not necessarily mutually exclusive, as McCullough might have feared.