Enhancing Virtual Reality and Mixed Reality

The most memorable VR experience I have had with the HTC Vive was how real the visual simulation felt: I didn’t dare to step too close to, or past, a simulated robot standing right in front of me. Part of my rational mind still knew that VR is a fictional environment, but my eyes believed there was a robot in front of me whose body I simply couldn’t step across.

However, when I finally decided to push my hand against the robot, there was no resistive force preventing my hand from passing through its body. This is something that could be enhanced; for example, haptics could be used to make this part of the VR experience more convincing. One illustration is the video “Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality,” in which a single physical box creates realistic haptic feedback.

There are also some downsides to my experience with the HTC Vive. I lost my sense of orientation and direction in the real world while immersed in the VR environment. This caused problems because I couldn’t see that I was getting too close to a wall or tangling the cables on the ground. Mixed reality technology such as Magic Leap’s could be a great opportunity to solve this issue: by overlaying virtual content on the real environment, it keeps the experience feeling realistic while letting users stay aware of what’s going on in their real surroundings.

Crawler with Servo Motor


For this mini project, I used one servo motor and experimented with different geometries and two different cardboard orientations to come up with this mini cardboard crawler. I also tried different arrangements of the paper box and the servo motor to optimize the location of the center of gravity, so the crawler would not tip over while crawling.


  • 1 Bread Board
  • 1 Arduino Uno
  • 1 Futaba Servo Motor
  • 1 Box
  • 1 Green Wire
  • Transparent Tape
  • Jumper Wires
int servoPin = 7; // Control pin for servo motor
int potPin = 0; // select the input pin for the potentiometer

int pulseWidth = 0; // Amount to pulse the servo
long lastPulse = 0; // the time in millisecs of the last pulse
int refreshTime = 20; // the time in millisecs needed in between pulses
int val; // variable used to store data from potentiometer

int minPulse = 500; // minimum pulse width

void setup() {
 pinMode(servoPin, OUTPUT); // Set servo pin as an output pin
 pulseWidth = minPulse; // Set the motor position to the minimum
 Serial.begin(9600); // connect to the serial port
 Serial.println("servo_serial_better ready");
}

void loop() {
 val = analogRead(potPin); // read the value from the sensor, between 0 - 1023
 if (val > 0 && val <= 999) {
  pulseWidth = val * 2 + minPulse; // convert reading to microseconds
  Serial.print("moving servo to ");
  Serial.println(pulseWidth);
 }
 updateServo(); // update servo position
}

// called every loop()
void updateServo() {
 // pulse the servo again if the refresh time (20 ms) has passed:
 if (millis() - lastPulse >= refreshTime) {
  digitalWrite(servoPin, HIGH); // Turn the motor on
  delayMicroseconds(pulseWidth); // Length of the pulse sets the motor position
  digitalWrite(servoPin, LOW); // Turn the motor off
  lastPulse = millis(); // save the time of the last pulse
 }
}

Image & Video




Thoughtless Acts

I found that hanging sunglasses from the neckline is quite a common thoughtless act. Does that reveal design opportunities for fashion? Or does it reveal that carrying sunglasses in a glasses case or in a bag is kind of a pain?



This one is also very interesting. A newspaper is lightweight and easy to carry, and it can also double as sun shade when people are lying on the lawn.



Below is a thoughtless act for when you run out of desk surface: the base of your lamp becomes a possible spot for temporary storage. 🙂


DC motor and Dancing Wires


For this mini project, I made a small installation where people can control the rotational speed of a set of dancing wires and watch their form change as the speed changes.


  • Arduino Uno
  • Wires
  • 1 Potentiometer
  • 1 DC motor
  • Batteries
  • 1 Resistor
  • 1 Diode
  • 1 Transistor
  • Tape


int potPin = 0; // select the input pin for the potentiometer
int motorPin = 9; // select the pin for the motor
int val = 0; // variable to store the value coming from the sensor

void setup() {
 pinMode(motorPin, OUTPUT); // set the motor pin as an output
}

void loop() {
 val = analogRead(potPin); // read the value from the sensor, between 0 - 1023
 analogWrite(motorPin, val / 4); // analogWrite can be between 0 - 255
}


The whole connected device:


The wires dancing at different radii as the rotational speed of the DC motor changes:





Arduino Lamp Photocell Serial Control with Bottle Cork


I’m interested in how pushing on one part of a system can create outputs that change over time. With this in mind, for this project I tried to mimic a continuously varying lighting input for the photocell sensor by making a paper bottle cork long enough to cover the photocell entirely. By pushing and pulling the hollow cork, I can block anywhere from 0% to 100% of the outside light reaching the photocell. Since white paper is actually semi-translucent, I decided to make the cork out of black paper, which makes an excellent shading barrier for the photocell.


  • 1 Paper Diffuser
  • 1 Black Bottle Cork
  • 1 Arduino Uno
  • Wires
  • 1 LED
  • 1 Photocell
  • 2 Resistors
  • 1 Laptop



int sensorPin = A0; // select the input pin for the photocell
int ledPin = 13; // select the pin for the LED
int sensorValue = 0; // variable to store the value coming from the sensor

void setup() {
 // declare the ledPin as an OUTPUT:
 pinMode(ledPin, OUTPUT);
}

void loop() {
 // read the value from the sensor:
 sensorValue = analogRead(sensorPin);
 // turn the ledPin on
 digitalWrite(ledPin, HIGH);
 // stop the program for <sensorValue> milliseconds:
 delay(sensorValue);
 // turn the ledPin off:
 digitalWrite(ledPin, LOW);
 // stop the program for <sensorValue> milliseconds:
 delay(sensorValue);
}







Midterm Project Sketches – Daniel, Safei, Michelle

Project: Memento

Group: Daniel, Michelle, Safei

For our project, we are going to be making a tangible memento to track shared social experiences in a relationship. Below is a diagram for the information architecture.


As an example use case:
Two people in a relationship (parent-child, best friends, siblings, partners, etc.) who care about each other want to capture their shared experiences together so they can remember them later.
One person will get a pair of Mementos and give one to a person they care about, and keep the other.
These Mementos will be with the person at all times, and activate when they are near each other.
When these Mementos are activated, they capture information about the scene around them: any music, movement, light, and geolocation.
These mementos then display a visual representation of the experience.
As experiences are captured, they are added onto previous experiences, creating a beautiful visual of the relationship.
When the two are apart, they always have these beautiful visuals of their time together, and they have incentive to hang out again soon to add more visuals to their relationship display.


This can be compared to Pousman and Stasko’s ambient design principles (see image below).
The Information Capacity of the Memento is High: it captures multiple inputs from the user and the environment: geolocation, light, movement, sound, and whether the device is near others.
The Notification Level is Medium: the device exists primarily to showcase the relationship between the users, but it can notify them if they are not spending enough time together, or of opportunities to spend time together.
The Representational Fidelity is Somewhat High: the Memento does not display exact geo-coordinates or the time of day spent together; instead, it may change color or shape based on these features, representing them in a different way.
The Aesthetic Emphasis is Somewhat High: the device captures aspects of the relationship and displays them beautifully, so users can reflect on their relationship and be reminded of their time together through these abstract forms.


Cups and Strange Familiarity

Taking a step back from the high-tech innovation world, I found a variety of cleverly designed cups so interesting that customers can’t help paying for the ideas embedded in their industrial design.

The examples below all show, to varying degrees, strange ways of combining two familiar forms to redesign the traditional cup. Moreover, every time I encounter an interesting new cup, the joy of discovering its hidden tricks, and sometimes the urge to try the strange form with my own hand around it, is delightful and memorable.


This cup combines the familiarity of a cat, with its two feet hanging onto the cup’s spoon.


This is an example of a cup handle mimicking the negative space of human fingers holding a cup.



This is a fun example of how the form of the cup mimics cute animal feet.


This final example’s handle mimics a cat’s tail, blending in an interesting visual element as well as a delightful feeling when people hold it.

FSR + PhotoCell

For this lab project, I tried to use both an FSR and a photocell to feed multiple streams of changing data into Processing. Along with the blinking of the LED, the ball on the screen changes accordingly in color, size, and location.

For the mechanical part, I tried my hand, foam, and a small bean-bag ball, and it’s amazing that even the bean-bag ball transmits the force from my squeezing hand to the FSR. Perhaps the force is transmitted evenly through the materials I chose?

I also found it challenging to figure out how to send more than one stream of data to Processing, which is why I ended up using only the FSR for this lab. I want to consult someone with more experience to push this further.


  • 1 Arduino Uno
  • Wires
  • 2 Resistors (220 Ohms, 10k Ohms)
  • 1 LED
  • 1 FSR
  • 1 Small Bean-Bag Ball

Arduino Code

int sensorPin = A0; // select the input pin for the sensor
int val = 0; // variable to store the value coming from the sensor
int ledPin = 13; // select the pin for the LED

void setup() {
 // declare the ledPin as an OUTPUT:
 pinMode(ledPin, OUTPUT);
 Serial.begin(9600); // connect to the serial port
}

void loop() {
 // read the value from the sensor:
 val = analogRead(sensorPin);
 // turn the ledPin on
 digitalWrite(ledPin, HIGH);
 // stop the program for <val> milliseconds:
 delay(val);
 // turn the ledPin off:
 digitalWrite(ledPin, LOW);
 // stop the program for <val> milliseconds:
 delay(val);
 // I tried to use 1 photocell and 1 FSR, but I couldn't figure out
 // how to send 2 values at the same time
 Serial.println(val);
}

Processing Code

/*
 * Arduino Ball Paint
 * (Arduino Ball, modified 2008)
 * ----------------------
 * Draw a ball on the screen whose size is
 * determined by serial input from Arduino.
 * Created 27 September 2016
 * Edited by Safei Gu
 */
import processing.serial.*;

// Change this to the port name of your Arduino board
String portname = "/dev/cu.usbmodem1411"; // or "COM5"
Serial port;
String buf = "";
int cr = 13; // ASCII carriage return == 13
int lf = 10; // ASCII linefeed == 10

int serialVal = 0;

void setup() {
 size(300, 300);
 port = new Serial(this, portname, 9600);
}

void draw() {
 // erase the screen
 background(40, 40, 40);

 // draw the ball, with its fill and size driven by the serial value
 fill(serialVal / 100, serialVal / 100, serialVal / 100);
 ellipse(width / 2, height / 2, serialVal / 4, serialVal / 4);
}

// called whenever serial data arrives
void serialEvent(Serial port) {
 int c = port.read();
 if (c != lf && c != cr) {
  buf += char(c);
 }
 if (c == lf) {
  serialVal = int(buf);
  buf = "";
 }
}




[RR04] Ambient Design Examples

It’s interesting that when I tried to think of ambient examples, the first one that came to mind is actually quite traditional, and is almost everywhere today. I would argue that ambient design is not a concept entirely without industrial design precedent. That first example is the smoke detector found in almost every house in the U.S. today. Why do I think it’s a great example of ambient design? Because it can seize the user’s focused attention when needed, recede into the background the rest of the time, and, for most of its life, provide the non-critical information “no alarm.”

With that, we could even argue that the natural environment we are nested in is itself a huge ambient information system. If it rains a little today, your windows give you a small visual hint in the traces of the raindrops. This may draw your attention, but it does not demand an immediate response. If there is a storm, however, your windows give you a much stronger visual hint of raindrops running down the glass, you hear the loud drumming of rain on the window, and thunder may demand your attention outright. That calls for more immediate reactions, such as checking whether family members who are still outside are okay, or, if you are heading out, looking for rain gear right away.

This reminds me of a PARC paper that imagined ubiquitous computing several decades ago. In its fictional narrative, Sal, who lives in a ubiquitously designed house, can look at a glass window of the house and read emerging information on its surface about buses, the neighbors’ activities, and so on. Isn’t this digital example actually a translation of windows giving people ambient information in a daily context?

Reflection on Fishkin’s Taxonomy

I think Fishkin’s taxonomy is helpful for understanding the UIs discussed in the paper in terms of embodiment and metaphor. However, many other types of UIs fall outside the taxonomy, and for those Fishkin’s framework is not very helpful.

For example, the embodiment axis of his taxonomy does not discuss the distance between the human body and the input device at all. A TUI does not need to be operated literally by hand; for example, Siri or the Amazon Echo could be part of a TUI input system.

Another issue with Fishkin’s taxonomy is the metaphor axis, which only considers shape and motion. I think the metaphor concept could be pushed much further; for example, the metaphor axis could also include the other human senses beyond sight, such as taste, touch, smell, and hearing.

Moreover, I would suggest the metaphor axis also include personality, to help people understand more robot-like TUI systems, such as Baymax in the movie Big Hero 6.