The Cintiq display

One UI (or rather, tool) that stands out for me is the Cintiq monitor. It combines a monitor with a large pressure-sensitive screen that graphic designers can draw on with a stylus. I came across it when I was working with motion graphic designers and animators, and they all wanted one. Our company had one, so the sketch artist got to use it. It's large (the screen was about 20 inches diagonally, I think), but not so large that it couldn't be nestled into the sketch artist's lap when he was working carefully. The stylus effectively became a pen or pencil, and the screen became the sketch artist's paper. This tool created a process that aligned much more closely with traditional hand-drawn art than anything that had come before: it allowed the sketch artist to draw without having to translate movements between the surface under his hand and a screen somewhere else. He could sketch directly on the surface and see results immediately, just as when sketching on paper.

This experience is entirely different from using a mouse. Although those of us here at the I School are so familiar with a mouse or trackpad that using one is almost second nature, it remains difficult to draw or create digitally to the degree that skilled artists can on paper. The Cintiq display, however, offers many of the affordances of paper, which sets it apart from other methods of using the computer for visual creation. And because it is a digital medium, there are other benefits: the sketch artist could zoom in on his work to correct minor errors, easily erase mistakes, and create vector output of his sketches. This allows for the kind of subtlety in one's work that McCullough calls for.

Lab 01 – Molly

Description:

For this lab, I began with the simple on/off LED example to get used to the Arduino board. After that, I wanted to try putting multiple LEDs into the circuit. I found some examples online that showed me how to set up the additional LEDs on different pins of the Arduino and how to drive them from the loop. I ended up using all three LEDs and lighting them in sequence.
(I had looked for information on blinking multiple LEDs at different rates at the same time, but wasn't able to find anything simple to implement. It seems that delay() blocks everything else while it waits, so the blink loops can't overlap. I'd like to explore this more but haven't yet.)

Components:

  • 1 Arduino Uno
  • 1 breadboard
  • 3 LEDs
  • 3 220Ω resistors
  • jumper wires
  • USB cable
  • computer

Code:
int led1 = 13;
int led2 = 10;
int led3 = 6;

void setup() {
  // set each LED pin as an output
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
}

void loop() {
  // light each LED for one second in turn,
  // with a short pause between them
  digitalWrite(led1, HIGH);
  delay(1000);
  digitalWrite(led1, LOW);
  delay(100);
  digitalWrite(led2, HIGH);
  delay(1000);
  digitalWrite(led2, LOW);
  delay(100);
  digitalWrite(led3, HIGH);
  delay(1000);
  digitalWrite(led3, LOW);
  delay(100);
}
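
As a follow-up to the open question above about blinking LEDs at different rates: delay() halts the entire sketch while it waits, so the usual workaround is to track elapsed time with millis() and toggle each LED on its own schedule. The following is an untested sketch of that pattern, assuming the same pin assignments as the circuit above; the three intervals are arbitrary values chosen for illustration.

// Blink three LEDs at independent rates without delay(),
// by checking elapsed time with millis() on every pass of loop().
int leds[] = {13, 10, 6};
unsigned long intervals[] = {1000, 700, 400};  // ms between toggles
unsigned long lastToggle[] = {0, 0, 0};        // time of each LED's last toggle
int states[] = {LOW, LOW, LOW};

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(leds[i], OUTPUT);
  }
}

void loop() {
  unsigned long now = millis();
  for (int i = 0; i < 3; i++) {
    // if this LED's interval has elapsed, flip its state
    if (now - lastToggle[i] >= intervals[i]) {
      states[i] = (states[i] == LOW) ? HIGH : LOW;
      digitalWrite(leds[i], states[i]);
      lastToggle[i] = now;
    }
  }
}

Because nothing in loop() blocks, each LED toggles independently even though there is only a single loop.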

Photos:

single LED
IMG_1169

three LEDs
IMG_1172

A favorite UI in context – Siri

One of my favorite UIs, mainly for its simplicity, is the iPhone's Siri. There is almost no UI, and that is part of what I love about it. I use Siri only for the simplest and most mundane of tasks: setting reminders, alarms, timers, and the like. Using her (I'll use that pronoun since the voice is female) lets me accomplish my goals much more quickly. When I use the GUI to set an alarm for the next day, I have to tap at least a few times and use fine motor skills to rotate a digital dial to the exact minute I want the alarm to chime. In contrast, with Siri I can simply speak clearly, and then see my request echoed visually for confirmation after she takes care of it. Activity theory gives me an insight into this particular draw of Siri: she allows me to accomplish my intentions far more easily than I can with the GUI. My activity and interactions with her are goal-oriented, just as activity theory points out.

Another pertinent point from activity theory concerns development. My usage of Siri is so simple that my own "skills" in using her are unlikely to develop much, but one of the reasons I enjoy using Siri is that her "skills" develop continually. Apple can update her algorithms and databases in the background, and all I notice is that she can suddenly hear me better, come up with better answers, or simply do more than she could before. This is slightly different from what Kaptelinin and Nardi describe, because here it is the object that is doing the developing, but I think it points to a key factor in whether people enjoy interacting with Siri. If her skills didn't develop and improve, we would soon tire of her and her oddities.

Finally, a third reason I can deduce for why I enjoy using Siri is that the interface lets me jump to whatever topic or task I care about, in whatever order I desire. As Dourish highlights, humans using tangible or ubiquitous interfaces do not approach tasks sequentially. That Siri allows this is no doubt one of the reasons I enjoy using her, in contrast to the GUI, which takes a very structured, multi-step approach to setting an alarm or timer.