Update: Video Link!


Our instrument is a group of five robots, each with its own percussive abilities. They are: 1.) a four-armed tap-bot with four servos that tap tap tap on things, 2.) a two-armed spinning bot that hits things with its metal hands to make noise, 3.) a rolling pinecone that makes a rumbling noise on its surface, 4.) a shepherd with a tap-tapping staff, and 5.) a scraper-bot that uses a bristly brush to scrape things and make noise.

We mounted four of the five on a turning lazy susan, with the intention of making customization possible: rotating the lazy susan changes which object each robot is tapping on.

Our robots are controlled by a control board with 5 pots. They control: 1.) the tempo of the music our bots make, 2.) the pattern with which the pine cone rolls, 3.) the pattern with which the scraper scrapes, 4.) the pattern with which the shepherd taps, and 5.) the speed with which the spinny bot spins.

Challenges included: 1.) getting the robots to tap in similar patterns, with some semblance of coherent synchrony, and 2.) getting the different pot settings to produce noticeably different sounds.

Materials Used:
– 2 Arduinos
– 4 Micro-servos
– 3 normal servos
– 3D printed plastic
– lots! of jumper wires
– machine screws / nuts
– beer bottle
– 3 soda cans
– pine cone
– chopsticks
– 5 pots
– laser cut control board, pinecone eyes, lazy susan parts
– construction paper
– foam ball
– clay
– DC motor
– metal wire
– metal bolts/nuts from Dan’s bed
– wire brush
– metal marbles
– chipotle tin
– cardboard scrapey surface w/ packaging material
– diode
– resistors
– breadboard
– 3 battery packs
– rubber bands


#include <Servo.h>

Servo myservoR;
Servo myservoRp;
Servo myservoL;
Servo myservoLp;
Servo servoLeah;
Servo servoAndrew;
Servo servoJake;

int deltaPot = 0;  // tempo pot

int leahPot = 1;
int leahBeat = 0;

int andrewPot = 2;
int andrewBeat = 0;

int danielPot = 3;
int danielBeat = 0;

int jakePot = 4;
int jakeBeat = 0;

int pos = 0; // variable to store servo position

void setup() {
  Serial.begin(9600);  // set up serial
  myservoR.attach(4);  // rightmost arm from the point of view of the crab
  myservoRp.attach(5); // "right-sub-prime" (right arm of the left crab)
  myservoL.attach(6);  // leftmost arm from the point of view of the crab
  myservoLp.attach(7); // "left-sub-prime" (left arm of the right crab)
  // NOTE: servoLeah, servoAndrew, and servoJake are written to in loop()
  // but their attach() calls (and pins) aren't in our notes.
}

void loop() {
  int delta = potCipher(analogRead(deltaPot)) * 2; // speed of the hammering
  Serial.print("delta: ");
  Serial.println(delta);

  servoAndrew.write(80); // ARMS UP!!!

  andrewBeat = potCipher(analogRead(andrewPot));
  Serial.print("andrewBeat: ");
  Serial.println(andrewBeat);

  danielBeat = potCipher(analogRead(danielPot));
  Serial.print("danielBeat: ");
  Serial.println(danielBeat);

  jakeBeat = potCipher(analogRead(jakePot));
  Serial.print("jakeBeat: ");
  Serial.println(jakeBeat);

  leahBeat = potCipher(analogRead(leahPot));
  Serial.print("leahBeat: ");
  Serial.println(leahBeat);

  for (int i = 0; i <= 400; i++) {
    servoAndrew.write(getArmLoc(pos, andrewBeat));
    servoLeah.write(getArmLoc(pos, leahBeat));
    servoJake.write(getArmLoc(pos, jakeBeat));
    myservoR.write(abs(abs(80 - pos) - 80)); // this series SHOULD do 16th-notes, approximately... but it sounds a bit off, so my math might be wrong
    pos += delta;
    if (pos >= 160) pos = 0;
  }
}

// returns the arm position for this step of the given beat pattern
int getArmLoc(int pos, int beatType) {
  if (beatType == 1) {
    return abs(abs(80 - pos) - 80);
  } else if (beatType == 2) {
    return abs(abs(40 - pos) - 80);
  } else if (beatType == 3) {
    return abs(abs(80 - (abs(pos - 60))) + 100);
  } else if (beatType == 4) {
    return abs(abs(80 - (abs(pos - 80))) + 100);
  }
  return 0; // beatType 0: keep the arm at rest
}

// returns a potSection value based on the position of the pot
int potCipher(int potVal) {
  int potSection;
  if (potVal >= 0 && potVal <= 205) {
    potSection = 0;
  } else if (potVal >= 206 && potVal <= 410) {
    potSection = 1;
  } else if (potVal >= 411 && potVal <= 615) {
    potSection = 2;
  } else if (potVal >= 616 && potVal <= 820) { // was 615, which overlapped the previous band
    potSection = 3;
  } else {
    potSection = 4;
  }
  return potSection;
}

Embodied self & real-world 3D

This is cool:

I liked the immersive aspect, especially when I was able to view my controls or use different kinds of actuation. I also liked when I was able to see parts of myself embodied (such as my hands).

I was hyper-aware that I was in a created space. This was limiting in some way. It was like a copy of reality, which felt like reality but lacked many of its affordances. The ability to navigate the space was limited by the controls, and how the creators had designed the space. It was a little surreal and created a bit of dissonance, like my experience/ability to observe was limited by the creators of the space.

One way to exploit the embodied aspect is to add additional position sensors to your face, hands, and body, and have those map to embodied selves in the 3D context. The environment can have different opportunities to reflect those embodied selves – the view of your hands, reflection in a pool, appearance in a mirror, people’s reactions to you. I think Kafka’s The Metamorphosis and movies like District 9 would be very visceral if individuals could play those characters.

To overcome the sense of being in a created environment, virtual spaces could be constructed entirely (or at least heavily) from direct mappings of physical inputs, e.g. 3D cameras (though the tools for mapping those inputs into vector or 3D objects may not yet be advanced enough). Imagine navigating real-world environments this way: the deep ocean, open grasslands, different architectural landmarks, etc.

For the last two weeks I’ve been “hunting” thoughtless acts. It has been a very fun experience because I had to pay close attention to details that I usually don’t even notice (it turns out they are everywhere!). I was able to capture some of them with my camera; others were difficult to capture because they happened too fast, or because I was too shy to take a picture.

While I was on BART, I noticed that the woman sitting in front of me used the space between her seat and the wall of the train to hold her bottle of water. She did it almost automatically, without paying much attention to it; she never stopped talking to her husband/boyfriend. The bottle was always in a good position (i.e., vertical), it was easily accessible at all times, and neither grabbing it nor putting it back required much effort (the rigidity of the wall combined with the softness of the seat provided ideal conditions for the temporary “storage” of the bottle).


During the same ride, a woman used her purse as a platform to rest her arm, and then she rested her head on her hand. It seems that in this way she found a good solution to rest and keep her purse safe at the same time.


In the next picture we can see how the young man (in the back) has tied his jacket to the strap of his backpack. By keeping this piece of clothing on the exterior of the backpack, he can easily reach it whenever he wants. Additionally, tying it guarantees that he won’t lose it.


Another interesting thing I noticed just when I was getting off (so I was not able to take a picture) was a boy using his skateboard as a seat. Since all seats were taken, he was taking advantage of the affordances of his skateboard, and also of the availability of space in the “standing” area.

Now, I must confess that I’m struggling a bit with the second point of this task (a possible physical and/or virtual design solution). The common thread of all these pictures (and, in general, of all the thoughtless acts I’ve identified) is multi-functionality, whether we are talking about a human-made object (seat, purse, skateboard, books, etc.) or parts of the body (mouth, hands, arms, legs, etc.). This has made me reflect on how we could design objects that serve different purposes by taking full advantage of their affordances.