Sensory information in calm computing

First off, I found it very interesting that the authors of Calm Technology chose to discuss glass office windows as an example: fitting, but surprising! That choice helped me bring the concept of calm computing into my everyday life, since I don’t regularly encounter items like the Live Wire piece.

One aspect I think is missing from the examples is the concept of progress. One of the reasons, for instance, that users might have a dashboard is to know what their schedule is, what is upcoming, or what time it is. Is there a way to use ambient media to signal progress along some sort of continuum? There are currently many GUI designs and traditional patterns tackling that problem, but none of the examples seemed to cover it. The implicit assumption seems to be that “progress” must sit at the center of one’s attention in order to be grasped, but why? After all, we can judge progress in terms of distance out of our peripheral vision, or when there are tangible items within our view. Yet I’m having trouble calling to mind an example of calm computing that conveys progress while the TUI remains in the periphery rather than at the center of attention.

I’d also be curious to discuss how to bring our other senses into calm computing. Vision and hearing, for instance, play a big role in the examples: one is looking at a dashboard, seeing or hearing a Live Wire, or looking through or hearing people through a glass window. But what about taste? What about touch or skin conductance? (These are, after all, classified as TUIs—but there wasn’t any discussion of their tangible properties.) What about temperature variations? And what about smell? These senses often alert us to potential danger—the original ambient media, as another student pointed out in reference to smoke detectors—but how are artists and creators incorporating them into calm computing as a way to communicate meaning and information?

Fishkin’s Taxonomy and the Rain Room

One UI that I wondered about (and struggled with how to characterize), is that of the Rain Room. The Rain Room is an embodied, immersive art exhibit that allows a visitor to enter a room and be surrounded by falling water, without getting wet. The room itself responds to the presence of humans and adjusts the rainfall accordingly, so that each human has a small envelope within which they can stand and not be caught “in the rain.”

RainRoom

When reading Fishkin, the Rain Room initially seemed like an example of calm computing. My main rationale for classifying it this way was that the human participants don’t use their hands to control the rain—something Fishkin emphasizes as one of the requirements of a TUI, as opposed to calm computing. However, there are many TUIs that do not rely on the hands for input (someone else’s example of Dance Dance Revolution comes to mind, where input comes mainly from participants’ feet). Thus, classification of the Rain Room seemed somewhat problematic: the human participants do control the rain, in the sense that their physical motion is the input the computer system detects and uses to alter its output. However, they do not control the rain with their hands, which seems to be a requirement. Yet all of Fishkin’s examples of calm computing revolve around systems that take no direct input from humans, and the Rain Room most definitely does. If it were an example of calm computing, I would posit that it fits into the “nearby” category of embodiment (for which Fishkin couldn’t offer an example), and perhaps the verb level of metaphor (after all, moving to avoid the rain in the Rain Room is like avoiding the rain in real life). Yet in some ways it seems like “full” metaphor to me, because there is no metaphor at all: to avoid the rain in the Rain Room is to avoid the rain. You are physically being acted upon, experiencing the computer’s output, in the same manner as in real life.

In the end, though, this doesn’t seem to me to fit clearly into any of the taxonomical categories. I would suggest modifying the taxonomy to include a broader definition of input than just hand manipulation, so that the Rain Room could be classified as a TUI rather than as calm computing. As some of the other students mentioned, it might be helpful for us to discuss more deeply where to draw the line between “what is a TUI” and “what isn’t a TUI,” especially as our computing devices rely more and more on physical manipulation to accomplish our goals.

Lab 03 – Color Switcher

Description:

For this lab, I wanted to play around with all of the options available to me. I began with one potentiometer (pot) controlling the blinking rate, then added a second potentiometer to control the fade. After working through a minor issue with digitalWrite() vs. analogWrite() that prevented the fade from working properly with the blinking, I figured out how to make the two potentiometers play nicely with each other.
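For anyone who hits the same snag: digitalWrite() can only drive a pin fully HIGH or LOW, while analogWrite() outputs a PWM duty cycle from 0 to 255 on PWM-capable pins, which is what produces a fade. A minimal sketch of the difference (the pin number and brightness value here are just illustrative):

// Assumes an LED (with a series resistor) on PWM-capable pin 9.
int ledPin = 9;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH); // full on: pin held high, no dimming possible
  delay(1000);
  analogWrite(ledPin, 64);    // ~25% brightness: PWM duty cycle of 64/255
  delay(1000);
}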

Then I decided that I wanted to use the third potentiometer to control the mapping between the three LEDs—to use the rotation as a signal for which LED should be lit and which should be dark. So I added the third potentiometer and did some research (and used the serial monitor) to read what values were being output by that new pot. I then set four ranges, with each individual LED lighting up for the first three (low values for red, mid-low values for green, mid-high values for blue), and mapped the highest range to having all three LEDs lit at the same time. By using the serial monitor to troubleshoot from the beginning, I was able to get it working with a minimum of trouble. So my first pot controls the blink rate, my second pot controls the fade level, and the third pot controls which LEDs are lit. (A sketch of a tidier way to compute those ranges follows below.)
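Incidentally, Arduino’s built-in map() function offers a more compact way to derive ranges like these. This is just a sketch of an alternative approach, not what my code below does; my thresholds (255/610/770) were tuned by watching the serial monitor, whereas map() splits the 0–1023 input evenly:

// Hypothetical alternative: split the pot's 0-1023 reading into four even zones.
int mappingPin = A2;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int zone = map(analogRead(mappingPin), 0, 1023, 0, 4); // 0..4, but 4 only at full scale
  zone = constrain(zone, 0, 3);                          // clamp so full scale lands in zone 3
  // zone 0 -> red, zone 1 -> green, zone 2 -> blue, zone 3 -> all three LEDs
  Serial.println(zone);
  delay(200);
}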

circuit board and LEDs

Components:

  • Arduino Uno
  • Breadboard
  • 3 potentiometers
  • 3 LEDs (red, green, blue)
  • 3 220Ω resistors
  • jumper wires
  • USB cable
  • computer

Code:


// EXTRA CREDIT CODE: mapping a third potentiometer so that I can control individual LEDs to fire one at a time,
// or have all three fire at once (when third pot is maxed out)

int fadepotPin = A1; // Analog input pin that the fade potentiometer is attached to
int fadepotValue = 0; // value read from the fade pot
int blinkpotPin = A0; // Analog input pin that the blink potentiometer is attached to
int blinkpotValue = 0; // value read from the blink pot
int mappingPin = A2; // this will be the one that maps rotation to LED color
int mappingValue = 0; // initially set the rotational mapping value to zero
int redledPin = 11;
int greenledPin = 10;
int blueledPin = 9;

void setup() {
  // initialize serial communications at 9600 bps:
  Serial.begin(9600);
  // declare the LED pins as output:
  pinMode(redledPin, OUTPUT);
  pinMode(blueledPin, OUTPUT);
  pinMode(greenledPin, OUTPUT);
}

void loop() {
  fadepotValue = analogRead(fadepotPin);   // read the fade pot value
  blinkpotValue = analogRead(blinkpotPin); // read the blink pot value
  mappingValue = analogRead(mappingPin);   // read the value on the mapping pot

  // turn on only the LED(s) that correspond to the value of the third pot:
  // low values for red, middle values for green, higher values for blue,
  // and the top of the range for all three at once.
  // The lit LEDs then fade and blink according to the fade and blink pot values.

  // print the results to the serial monitor:
  Serial.print("mappingValue = ");
  Serial.print(mappingValue);
  Serial.print("\n");

  if (mappingValue < 255) {
    analogWrite(redledPin, fadepotValue / 4); // PWM the LED with the pot value (divided by 4 to fit in a byte)
    delay(blinkpotValue);                     // delay by the blink pot value in milliseconds
    digitalWrite(redledPin, LOW);             // then blink the LED off
  } else if (mappingValue >= 255 && mappingValue < 610) {
    analogWrite(greenledPin, fadepotValue / 4);
    delay(blinkpotValue);
    digitalWrite(greenledPin, LOW);
  } else if (mappingValue >= 610 && mappingValue < 770) {
    analogWrite(blueledPin, fadepotValue / 4);
    delay(blinkpotValue);
    digitalWrite(blueledPin, LOW);
  } else if (mappingValue >= 770) { // max range: light all three LEDs
    analogWrite(redledPin, fadepotValue / 4);
    analogWrite(blueledPin, fadepotValue / 4);
    analogWrite(greenledPin, fadepotValue / 4);
    delay(blinkpotValue); // delay by the blink pot value in milliseconds
    digitalWrite(redledPin, LOW); // then blink the LEDs off
    digitalWrite(blueledPin, LOW);
    digitalWrite(greenledPin, LOW);
  }
  delay(blinkpotValue); // pause before blinking back on (also lets the loop re-read the pots)
}

LED Diffusion

Description:

After experimenting with the serial monitor functionality of the Arduino environment and watching the fading color-change program work, I chose cone coffee filters as my diffuser. This choice came after much experimentation: initially, I’d had the three LEDs spaced a few rows apart on the breadboard, but that didn’t allow their light to mix very well. So first, I modified my breadboard so that all the LEDs were touching each other.

IMG_1185

I tried diffusing with the large cotton balls in class, but those didn’t allow enough light to pass through. I then tried different configurations of coffee filters, lightbulb housings, plastic storage containers, and more. The cone-type coffee filters worked best for me because they maintained structure around the LEDs, and having two stacked on top of each other allowed for more uniform diffusion.

IMG_1184

I then modified the code so that a user can press r, g, or b repeatedly, and the corresponding LED’s brightness is set according to the formula (number of key presses / 10) × 255. In other words, each press represents 10% of full brightness (255); pressing r five times, for example, sets the red LED to (5/10) × 255 ≈ 127, or half brightness.
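One implementation detail worth noting (colorVal here is the press count, matching the code below): the division has to happen in floating point, or integer division will zero out the result for any count under ten. A minimal illustration:

void setup() {
  Serial.begin(9600);
  int colorVal = 5;                    // e.g., five presses of 'r'
  int wrong = (colorVal / 10) * 255;   // integer division: 5/10 == 0, so wrong == 0
  int right = (colorVal / 10.0) * 255; // float division: 0.5 * 255 == 127.5, truncated to 127
  Serial.println(wrong); // prints 0
  Serial.println(right); // prints 127
}

void loop() {}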

Components:

  • Arduino Uno
  • Breadboard
  • 3 LEDs (rgb)
  • 3 220Ω Resistors
  • 2 Coffee filters
  • jumper wires
  • USB cable
  • laptop

Code:

/* Modified on 9/7/16 by Molly Mahar
*
* Serial RGB LED
* ---------------
* Serial commands control the brightness of R,G,B LEDs
*
* Command structure is "<colorCode><colorVal>", where "colorCode" is
* one of "r","g",or "b" and "colorVal" is a number 0 to 255.
* E.g. "r0" turns the red LED off.
* "g127" turns the green LED to half brightness
* "b64" turns the blue LED to 1/4 brightness
*
* Created 18 October 2006
* copyleft 2006 Tod E. Kurt <tod@todbot.com>
* http://todbot.com/
*/

char serInString[100]; // array that will hold the different bytes of the string (100 = 100 characters);
                       // you must state how long the array will be, else it won't work properly
char colorCode;
int colorVal;

int redPin = 9; // Red LED, connected to digital pin 9
int greenPin = 10; // Green LED, connected to digital pin 10
int bluePin = 11; // Blue LED, connected to digital pin 11

void setup() {
  pinMode(redPin, OUTPUT); // sets the pins as output
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  Serial.begin(9600);
  analogWrite(redPin, 127);   // set them all to mid brightness
  analogWrite(greenPin, 127);
  analogWrite(bluePin, 127);
  Serial.println("enter color command (e.g. pressing 'r' 5 times will put red LED at 50% brightness) :");
}

void loop () {
  // clear the string
  memset(serInString, 0, 100);
  // read the serial port and create a string out of what you read
  readSerialString(serInString);

  colorCode = serInString[0];
  if (colorCode == 'r' || colorCode == 'g' || colorCode == 'b') {
    //colorVal = atoi(serInString+1);
    // count how many times the color key was pressed
    colorVal = 0;
    for (int i = 0; i < strlen(serInString); i++) {
      if (serInString[i] == colorCode) { // count only presses of the same color key
        colorVal += 1;
      }
    }
    if (strlen(serInString) > 10) {
      colorVal = 0; // more than 10 presses wraps back around to off
    }

    colorVal = (colorVal / 10.0) * 255; // each press = 10% of full brightness
    Serial.print("setting color ");
    Serial.print(colorCode);
    Serial.print(" to ");
    Serial.print(colorVal);
    Serial.println();
    serInString[0] = 0; // indicates we've used this string
    if (colorCode == 'r')
      analogWrite(redPin, colorVal);
    else if (colorCode == 'g')
      analogWrite(greenPin, colorVal);
    else if (colorCode == 'b')
      analogWrite(bluePin, colorVal);
  }

  delay(100); // wait a bit, for serial data
}

//read a string from the serial and store it in an array
//you must supply the array variable
void readSerialString (char *strArray) {
  int i = 0;
  if (!Serial.available()) {
    return;
  }
  while (Serial.available() && i < 99) { // leave room so we never overrun the 100-char buffer
    strArray[i] = Serial.read();
    i++;
  }
}

The Cintiq display

One UI—or rather, tool—that stands out for me is the Cintiq monitor. It’s a combination of a monitor and a large pen-sensitive screen that graphic designers can use with a stylus. I came across it when I was working with motion graphic designers and animators, and they all wanted one. Our company had one, so the sketch artist got to use it. It was large—I think the screen was about 20 inches diagonal—but not so large that it couldn’t be nestled into the sketch artist’s lap when he was working carefully. Thus, the stylus was effectively a pen or a pencil, and the screen became the sketch artist’s paper. This tool created a process that aligned much more closely with historical hand-drawn art than anything that had come before. It allowed the sketch artist to draw without having to translate movements between his hand in one place and his eyes in another: he could sketch directly on the surface and see the results immediately, just as when sketching on paper.

This experience is entirely different from using a mouse. Although those of us here at the I School are intimately familiar with using a mouse or a trackpad, to the point that it is almost second nature, it remains difficult to draw or create digitally to the degree that skilled artists can on paper. The Cintiq display, however, offers many of the affordances of paper, and thus differentiates itself from other methods of using the computer for visual creation. In addition, because it’s a digital medium, there are other benefits: the sketch artist could zoom in on his work to correct minor errors, easily erase mistakes, and create vector output of his sketches. This allows for subtlety in one’s work, as McCullough calls for.

Lab 01 – Molly

Description:

For this lab, I began with the simple on/off LED example to get used to the Arduino board. After that, I wanted to try putting multiple LEDs into the circuit. I found some examples online that showed me how to initialize the other LEDs on different pins of my Arduino, and how to write the loop to drive them. I ended up using all three LEDs and lighting them sequentially.
(I had looked for information on running multiple LEDs at different rates at the same time, but wasn’t able to find anything simple to implement. The delay() function seems to block the entire loop while it waits, which is what makes independent rates hard. I’d like to explore this more; a sketch of a non-blocking approach appears after the code below.)

Components:

  • 1 Arduino Uno
  • 1 breadboard
  • 3 LEDs
  • 3 220Ω resistors
  • jumper wires
  • USB cable
  • computer

Code:
int led1 = 13;
int led2 = 10;
int led3 = 6;

void setup() {
// put your setup code here, to run once:
pinMode(led1, OUTPUT);
pinMode(led2, OUTPUT);
pinMode(led3, OUTPUT);
}

void loop() {
// put your main code here, to run repeatedly:
digitalWrite(led1, HIGH);
delay(1000);
digitalWrite(led1, LOW);
delay(100);
digitalWrite(led2, HIGH);
delay(1000);
digitalWrite(led2, LOW);
delay(100);
digitalWrite(led3, HIGH);
delay(1000);
digitalWrite(led3, LOW);
delay(100);
}
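Following up on the question in my description about running LEDs at independent rates: the standard workaround, as I understand it, is to replace delay() with millis()-based timing so each LED keeps its own schedule. This is a sketch of that idea rather than code I have tested on my board; the pin numbers match my setup above, and the intervals are arbitrary:

// Hypothetical non-blocking version: each LED toggles on its own interval.
int led1 = 13;
int led2 = 10;
int led3 = 6;

unsigned long last1 = 0, last2 = 0, last3 = 0; // time of each LED's last toggle
int state1 = LOW, state2 = LOW, state3 = LOW;  // current state of each LED

void setup() {
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
}

void loop() {
  unsigned long now = millis();
  if (now - last1 >= 1000) { // led1 toggles every 1000 ms
    state1 = (state1 == LOW) ? HIGH : LOW;
    digitalWrite(led1, state1);
    last1 = now;
  }
  if (now - last2 >= 500) { // led2 toggles every 500 ms
    state2 = (state2 == LOW) ? HIGH : LOW;
    digitalWrite(led2, state2);
    last2 = now;
  }
  if (now - last3 >= 250) { // led3 toggles every 250 ms
    state3 = (state3 == LOW) ? HIGH : LOW;
    digitalWrite(led3, state3);
    last3 = now;
  }
}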

Photos:

single LED
IMG_1169

three LEDs
IMG_1172

A favorite UI in context – Siri

One of my favorite UIs—mainly for its simplicity—is the iPhone’s Siri. There is almost no UI, and that is part of what I love about it. I use Siri only for the simplest and most mundane of tasks: setting reminders, setting alarms, setting timers, and the like. Using her (I’ll use that pronoun, since the voice is female) allows me to accomplish my goals much more quickly. When I use the GUI to set an alarm for the next day, I’m required to tap at least a few times, and also to use fine motor skills to rotate a digital dial to the exact minute I want the alarm to chime. In contrast, when using Siri I can simply speak clearly, and then see my request echoed in a visual format for confirmation after Siri takes care of it. Referencing activity theory gives me insight into this particular draw of Siri: she allows me to accomplish my intentions much more easily than I can with the GUI. My activity and interactions with her are goal-oriented, as activity theory points out.

Another pertinent point from activity theory concerns the issue of development. My usage of Siri is so simple that it’s unlikely my “skills” in using her for those purposes will develop much, but one of the reasons I enjoy using Siri is that her “skills” develop continually. Apple can update her algorithms and databases in the background, and all I’m aware of is that she can suddenly hear me better, or come up with better answers, or simply do something more than she could before. This is slightly different from what Kaptelinin and Nardi described, because here it is the object that is doing the developing, but I think it points to a key factor in whether people enjoy interacting with Siri. If her skills didn’t develop and improve, we would soon tire of her and her oddities.

Finally, a third reason I can deduce for why I enjoy using Siri is that the interface allows me to jump to whatever topic or task I choose, in whatever order I desire. As Dourish highlights, humans using tangible or ubiquitous interfaces do not approach tasks sequentially. The fact that Siri allows this is no doubt another reason I enjoy using her, in contrast to the GUI, which imposes a very structured approach to setting an alarm or timer and requires more than a single step.