Project partner(s) interested in/tolerant of math education?

Hey! So I’m interested in putting together a project that offers a tangible means of interacting with mathematical representations (a graph, diagram, geometrical object, etc.). A main role for the computing part of this project would be to handle calculations so that users can focus on qualitative patterns and relationships. By backgrounding computation, my hope is that this project would make targeted mathematics concepts an accessible consequence of the designed physical interaction.

A few ideas so far (not wedded to any one, happy to consider more):

  1. An interactive graph where users could bend, move, or rotate a physical object representing a function. The corresponding equation for that function would update in real time. Could maybe be contextualized as a roller-coaster building game.
  2. Mirrored Spirograph (perhaps more aligned with art). The interface is that of a typical Spirograph, but users could choose to have the image reflected along various axes (as through mirrors, but done electronically, perhaps in different colors or with different time delays/effects).
  3. A triangular frame with sides that lengthen/shorten and corners that rotate through a variety of angles. By automatically calculating properties of that triangle (area, perimeter, trigonometric functions), users could explore the impact of certain deformations on various properties. For this one, the motivating Object(ive) needs some work… (a rough sketch of the calculation side follows this list).
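
For idea 3, here’s a minimal sketch of the calculation side, assuming the frame could report its three side lengths (the lengths below are hard-coded placeholders; a real build would read them from sensors in the frame). It computes perimeter, area via Heron’s formula, and one interior angle via the law of cosines:

/* Hypothetical triangle-property sketch for idea 3.
 * Side lengths are placeholders; a real frame would
 * report them from length sensors. */
float sideA = 3.0;
float sideB = 4.0;
float sideC = 5.0;

void setup() {
  Serial.begin(9600);

  float perimeter = sideA + sideB + sideC;
  float s = perimeter / 2.0; // semi-perimeter
  // Heron's formula: area from the three side lengths
  float area = sqrt(s * (s - sideA) * (s - sideB) * (s - sideC));
  // law of cosines: the angle opposite side A, converted to degrees
  float angleA = acos((sideB * sideB + sideC * sideC - sideA * sideA)
                      / (2.0 * sideB * sideC)) * 180.0 / PI;

  Serial.print("Perimeter: ");
  Serial.println(perimeter);
  Serial.print("Area: ");
  Serial.println(area);
  Serial.print("Angle opposite side A (degrees): ");
  Serial.println(angleA);
}

void loop() {
  // a real version would re-read the sensors and reprint here
}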

If any of these ideas sound interesting to you (or if this sounds like an avenue you’d like to think about together), please let me know!

-Leah

Jam Jar color mixer

Description

I used an Arduino, LEDs, and a jam jar “diffuser.” My program prompts users to enter a flavor of jam into the Arduino Serial Monitor (“Welcome to the Jam Color Mixer! What type of jam would you like?”), then mixes red, green, and blue light to create a color approximating that jam type. Requesting strawberry jam gives a purplish red, blueberry gives a dark blue, and lime gives a bright green. The Serial Monitor echoes the request back to the user and displays the RGB color combination (“You asked for cherry jam. cherry is composed of R255, G31, B31”).

To make the diffuser, I lined a glass jam jar with white tissue paper on the sides and placed a wad of cotton batting in the bottom. The jar’s screw cap (but not the rubberized seal) was screwed on to keep the tissue paper in place.

Components

  • Arduino UNO board
  • 1 red, 1 green, and 1 blue LED
  • connector wires
  • a 6 oz glass jam jar with a “quilted” pattern
  • white tissue paper
  • cotton batting

Code

/* Jam RGB LED
* Serial commands control the type of jam (color of light)
* by controlling the brightness of R, G, B LEDs.
* Command structure is "<fruitName>", where "fruitName" is
* one of ten possible flavors. The colors of these flavors
* are translated into RGB values.
* E.g. "apple" gives R163, G82, B0
* "blueberry" gives R46, G0, B184
*/
String fruitName;  // a String that will hold the entire serial input
int redPin = 9;    // red LED, connected to digital pin 9
int greenPin = 10; // green LED, connected to digital pin 10
int bluePin = 11;  // blue LED, connected to digital pin 11
int debug = 1;

// arrays of RGB values, indexed in the order: apple, blackberry,
// blueberry, cherry, honey, lemon, lime, plum, raspberry, strawberry
int redArray[10] = {163, 51, 46, 255, 245, 255, 112, 138, 184, 255};
int greenArray[10] = {82, 0, 0, 31, 184, 255, 224, 0, 0, 51};
int blueArray[10] = {0, 102, 184, 31, 0, 31, 0, 184, 46, 102};

int colorIndex; // index into the color arrays

void setup() {
  pinMode(redPin, OUTPUT); // set the pins as outputs
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  Serial.begin(9600);
  analogWrite(redPin, 0); // start with all LEDs off
  analogWrite(greenPin, 0);
  analogWrite(bluePin, 0);
  Serial.println("Welcome to the Jam Color Mixer! What type of jam would you like?");
}

void loop() {
  // clear the fruitName variable
  fruitName.remove(0);
  // read the serial port and create a string out of it
  readSerialString();

  if (fruitName.length() > 0) {
    // confirm the request
    Serial.print("You asked for ");
    Serial.print(fruitName);
    Serial.println(" jam.");

    colorIndex = -1; // stays -1 if the flavor isn't recognized
    if (fruitName == "apple")
      colorIndex = 0;
    else if (fruitName == "blackberry")
      colorIndex = 1;
    else if (fruitName == "blueberry")
      colorIndex = 2;
    else if (fruitName == "cherry")
      colorIndex = 3;
    else if (fruitName == "honey")
      colorIndex = 4;
    else if (fruitName == "lemon")
      colorIndex = 5;
    else if (fruitName == "lime")
      colorIndex = 6;
    else if (fruitName == "plum")
      colorIndex = 7;
    else if (fruitName == "raspberry")
      colorIndex = 8;
    else if (fruitName == "strawberry")
      colorIndex = 9;

    if (colorIndex >= 0) {
      analogWrite(redPin, redArray[colorIndex]);
      analogWrite(greenPin, greenArray[colorIndex]);
      analogWrite(bluePin, blueArray[colorIndex]);

      if (debug == 1) {
        Serial.print(fruitName);
        Serial.print(" is composed of R");
        Serial.print(redArray[colorIndex]);
        Serial.print(", G");
        Serial.print(greenArray[colorIndex]);
        Serial.print(", B");
        Serial.println(blueArray[colorIndex]);
      }
    } else {
      Serial.println("Sorry, that flavor isn't in the mixer yet.");
    }

    fruitName.remove(0); // clear the string so it isn't reused next loop
  }

  delay(100); // wait a bit, for serial data
}
// read a string from the serial port and store it in fruitName
void readSerialString() {
  // available() returns the number of bytes waiting on the serial port;
  // if nothing has been received, fruitName is left empty
  if (Serial.available()) {
    fruitName = Serial.readString(); // read the entire serial input
    fruitName.trim(); // strip any trailing newline so the comparisons match
    /* code for debugging fruitName
    Serial.print("Read in string: ");
    Serial.print(fruitName);
    Serial.println(); */
  }
}
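
As an aside, the ten-branch if/else chain could be condensed into a lookup over an array of flavor names. Here’s a sketch of one way to do it, using the same names and order as the color arrays above:

// Hypothetical refactor: look the flavor up in a name table.
// Order matches redArray/greenArray/blueArray above.
const char* fruitNames[10] = {
  "apple", "blackberry", "blueberry", "cherry", "honey",
  "lemon", "lime", "plum", "raspberry", "strawberry"
};

// returns the color index for a flavor, or -1 if unrecognized
int lookUpFruit(const String &name) {
  for (int i = 0; i < 10; i++) {
    if (name == fruitNames[i]) {
      return i;
    }
  }
  return -1;
}

With that in place, the chain in loop() would shrink to colorIndex = lookUpFruit(fruitName);, and adding an eleventh flavor would mean one new name and three new numbers.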

[Photos: breadboard setup; diffuser exterior; the jar glowing plum.]

Computing for Hands?

I think this perspective of computers being tools “for the mind and not the hands” is a valid one. We create tangible user interfaces, at least partially, to make the experience of using computing power closer to that of using our bodies in the physical world. But interfaces are just that – they are extensions, add-ons to the computer to create input and output experiences. The computing part does not happen with our hands or bodies; it happens on a piece of hardware with mechanisms too small to see or imagine at anything but an abstract level. Even an abacus, which performs much simpler calculations using hand-power, is governed not by the constraints and affordances of hands but by abstractions (beads representing numbers, columns representing units) organized and maintained by culturally transmitted practices.

And I don’t think the departure from the abilities of hands is by any means a damning one. If we relied only on hands for calculations, we might hardly be able to count past 10 (or 20 if you want to include toes). As soon as we start working with abstractions and representations, we leave the world of what hands alone are capable of. Hands are incredible tools, both for sensing our world and effecting changes in it. However, we do not function as disembodied hands (think Thing from the Addams Family), and it would be limiting, perhaps even impossible, to design and create tools exclusively for hands.

All that said, interfaces have come a long way in allowing more naturalistic use of hands when working with computers. Touch screens are one example: instead of training our hands to a somewhat arbitrary input device (the keyboard), we can point, select, and drag our way around a tablet or phone. More toward the augmented reality side of the spectrum, systems that leverage visual indicators (QR codes and similar fiducial tags) or adjacency sensing (blocks that “know” when they’re stacked or connected) allow us to manipulate physical objects naturalistically while the computer captures those interactions and makes sense of them. In the case of touch screens, we still have to learn which hand movements perform what functions (I think about my mom first using an iPhone). Visual indicators and physical sensors seem to have less of a learning curve, elevating intuitive interactions and backgrounding the computer (per our other reading).

Lab 1: Blinking LEDs

Description

I used an Arduino UNO, a blue LED, a red LED, and two 220 ohm resistors. I changed the Arduino sample code to blink one LED twice, then blink the other LED once.

Components

  • Arduino UNO
  • 1 blue LED
  • 1 red LED
  • two 220 ohm resistors
  • a breadboard

Code

/*
Blinks 2 LEDs, the 1st for 2 beats,
and the 2nd for 1 beat.
Code was modified from the Arduino sample code Blink_1_LEDs
*/
int ledPin1 = 4;  // allow for variable output pin
int ledPin2 = 13; // allow for variable output pin

void setup()
{
  pinMode(ledPin1, OUTPUT); // initialize output pins
  pinMode(ledPin2, OUTPUT);
}

void loop() // the loop function runs repeatedly
{
  digitalWrite(ledPin1, HIGH); // turn LED1 on
  delay(800);                  // wait
  digitalWrite(ledPin1, LOW);  // turn LED1 off
  delay(100);
  digitalWrite(ledPin1, HIGH); // blink LED1 a second time
  delay(800);
  digitalWrite(ledPin1, LOW);
  digitalWrite(ledPin2, HIGH); // turn LED2 on
  delay(400);
  digitalWrite(ledPin2, LOW);  // turn LED2 off
}
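
One limitation of this pattern: the delay() calls block the whole sketch, so nothing else (reading a sensor, listening on serial) can happen mid-blink. Here’s a sketch of a non-blocking version of the same two-LED pattern, stepping through an on/off table with millis() (my own refactor, in the spirit of the Arduino BlinkWithoutDelay example):

// Hypothetical non-blocking version of the same blink pattern.
const int ledPin1 = 4;
const int ledPin2 = 13;

// each step: which pin to write, what level, how long to hold it (ms)
struct Step { int pin; int level; unsigned long hold; };
Step steps[] = {
  {ledPin1, HIGH, 800}, {ledPin1, LOW, 100}, {ledPin1, HIGH, 800},
  {ledPin1, LOW, 0}, {ledPin2, HIGH, 400}, {ledPin2, LOW, 0}
};
const int numSteps = sizeof(steps) / sizeof(steps[0]);

int currentStep = 0;
unsigned long stepStart = 0;

void setup() {
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
  digitalWrite(steps[0].pin, steps[0].level); // start the pattern
  stepStart = millis();
}

void loop() {
  // advance to the next step once the current hold time has elapsed
  if (millis() - stepStart >= steps[currentStep].hold) {
    currentStep = (currentStep + 1) % numSteps;
    digitalWrite(steps[currentStep].pin, steps[currentStep].level);
    stepStart = millis();
  }
  // other, non-blocking work could happen here on every pass
}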

A favored UI – Swype keyboard

I was delighted to try the Swype keyboard on a tablet a few years ago. At the time, I wasn’t used to typing on a touch screen, and Swype felt smoother, more natural, and less sporadic/staccato than typical touch-screen typing. Being able to type with movement also felt somehow more expressive.

In terms of activity theory, it may have shifted the hierarchy of levels for my typing task. The Object, communication, was the same, and the activity was composing a message on whatever device and sending it to whomever. Deciding on the individual words in that message was a conscious action, but on a laptop or desktop keyboard, typing the individual letters of those words was (usually) for me an operation, work accomplished unconsciously. Suddenly, on a traditional touch-screen keyboard, I had to pay attention to the placement of my fingers for every. single. letter. That really broke up the flow of writing, because I think we tend to think and communicate in words or phrases, not individual letters. With Swype, instead of having an action for each word and then as many sub-actions as there were letters in that word, I just had word-level actions.