HTC Vive VR response

I found the HTC Vive’s world to be amazingly rich and spatial. By spatial I mean that instead of feeling that you are in a “box” with graphics, you actually feel a sense of gravity and its force on you. I felt different in the mountaintop scene than in the underwater whale scene. In the mountaintop scene, I had the sense that the ground was sloping down away from me when I was facing downhill. When I tried to follow the little doggie-bot, I was a little apprehensive about losing my balance if I didn’t think about how I was distributing my weight. The underwater scene was beautiful for its sense of scale: the tiny flickers of the fish schools shimmering around, and the gigantic whale so big that you could only see parts of it at one time. The scale of the whale was an interesting decision in just how much you could experience at a given time: how big is so big? I could only see an eye, then the mouth, a fin, and then, as it swooshed past me, a tail that came so close that I was afraid I would be knocked off my perch.

The drawing application is pretty amazing in that you feel a spatial relationship to the “painting” you create. I think one could spend hours in there, which is kind of a scary thought.

I think what I liked least about the experience is that it was still very much an enclosed environment that separates one from the actual world. I find the Mixed Reality described in the WIRED magazine article extremely fascinating because it does not take one away from the “real” world one exists in. I also like that it blurs the line, in one’s mind, between what is “reality” and what is “fiction.”

Alternate solutions for already-existing design solutions

My uncle has a PhD in nuclear science, and he likes to tinker with things and come up with his own solutions.

His glasses are an example of his refusal to toss away, as he puts it, “perfectly functional elements,” so he Frankensteins together some truly bizarre concoctions which, in the end, you have to admit, do work and get the job done.

Pic 1: The frames of his current prescription glasses (for myopia) broke, so he popped the lenses out, took the frames from a random pair of reading glasses, and taped the lenses in.

Moral of the story: there is an interesting design problem here in creating a frame that could withstand a few lens prescription updates. Normally, if the lenses of your glasses need to be updated you have to change the entire frame, and frames themselves are not super cheap. Glasses shops say they don’t have the capability to fit new lenses into old frames (is that really true?). Perhaps this is a “design” limitation that could be looked into.

 

Pic 2: Since he is of a certain age, he needs reading glasses, but he can’t be bothered to go out and get bifocals, so he takes an existing pair of reading glasses of the strength he needs and sticks them behind his big frames (the ones with the taped-in lenses for his myopia prescription).

Moral of the story: besides the funky six-eyed bug look, you gotta admit there’s a system here; he found reading glasses with frames just the right size to fit behind the larger frames of his normal-use glasses. There’s a hierarchy of scale that corresponds to frequency and importance of use. The larger, sturdier frames are for his myopia, which is his first and foremost need, and the smaller reading lenses/frames are for as-needed use.

Even though solutions to these problems already exist out there, this is actually sort of an example of reverse functionality.

But in a way it’s a sort of survivalist mentality, which is kind of interesting.

 


Lab 5: Coincidental Input/Output

I hooked the computer running the Processing sketch up to a small pico projector and placed the projector in a box. I cut the top of the box off and replaced it with a translucent material (waxed paper), so the top of the box is effectively a small rear-projection screen.

I placed a mirror at the end of the box opposite the pico projector so that the image is reflected up 90 degrees onto the screen. Also in the box are the Arduino and a photocell (a light-dependent resistor). When one waves a light source over the spot on the box where the photocell sits, the Processing graphics projected on the box move in response.

 

Components:

  • 1 Arduino
  • 1 breadboard
  • 1 pico projector
  • 1 photocell resistor
  • 1 10-ohm resistor
  • wires
  • cardboard box
  • small mirror
  • waxed paper
  • tape
  • computer

 

code

Arduino:

int potPin = 0; // analog input pin that the photocell voltage divider is attached to
int potValue = 0; // value read from the photocell
int ledR = 9; // PWM pin that the red LED is on (digital pin 9)
int ledG = 10; // PWM pin that the green LED is on (digital pin 10)
int ledB = 11; // PWM pin that the blue LED is on (digital pin 11)
void setup() {
// initialize serial communications at 9600 bps:
Serial.begin(9600);
// declare the LED pins as outputs:
pinMode(ledR, OUTPUT);
pinMode(ledG, OUTPUT);
pinMode(ledB, OUTPUT);
}

void loop() {
potValue = analogRead(potPin); // read the photocell value
analogWrite(ledR, potValue/4); // PWM the LED with the sensor value (divided by 4 to fit in a byte)
analogWrite(ledG, potValue/4); // PWM the LED with the sensor value (divided by 4 to fit in a byte)
analogWrite(ledB, potValue/4); // PWM the LED with the sensor value (divided by 4 to fit in a byte)
Serial.println(potValue); // print the sensor value back to the debugger pane
delay(10); // wait 10 milliseconds before the next loop
}

Processing:
import processing.serial.*;
// Change this to the port name of your Arduino board
String portname = "/dev/cu.usbmodem1421"; // or "COM5"
Serial port;
String buf = "";
int cr = 13; // ASCII return == 13
int lf = 10; // ASCII linefeed == 10

int serialVal = 0;

void setup() {
size(1280,720);
frameRate(10);
smooth();
background(255,255,255);
noStroke();
port = new Serial(this, portname, 9600);
}

void draw() {
// erase the screen
background(0, 0, 0);

// draw the line-and-circle pattern
noFill();
stroke(200);
strokeWeight(.5);
line(150,serialVal,serialVal,100);
line(100,serialVal,serialVal,150);
ellipse(serialVal, serialVal, 50 + serialVal, 50 + serialVal);

strokeWeight(1);
line(200,serialVal,serialVal,75);
line(75,serialVal,serialVal,200);
ellipse(serialVal, serialVal, serialVal, serialVal);

strokeWeight(2);
line(250,serialVal,serialVal,55);
line(55,serialVal,serialVal,250);
ellipse(serialVal, serialVal, serialVal - 50, serialVal - 50);

strokeWeight(2.5);
line(300,serialVal,serialVal,35);
line(35,serialVal,serialVal,300);
ellipse(serialVal, serialVal, serialVal - 150, serialVal - 150);

strokeWeight(3.5);
line(500,serialVal,serialVal,15);
line(15,serialVal,serialVal,500);

}

// called whenever serial data arrives
void serialEvent(Serial p) {
int c = port.read();
if (c != lf && c != cr) {
buf += char(c);
}
if (c == lf) {
serialVal = int(buf);
println("potValue=" + serialVal);
buf = “”;
}
}
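One tweak I would like to try (this is just a sketch of the idea, not something I ran): the Arduino prints the raw photocell reading, roughly 0–1023, and the sketch uses it directly as pixel coordinates, so large readings can wander outside the 1280×720 window. Processing’s map() could rescale the reading into the window first, for example by swapping the draw() above for something like this while everything else in the sketch stays the same:

void draw() {
  // erase the screen
  background(0, 0, 0);

  // rescale the raw 0-1023 reading so the pattern always stays on screen
  float x = map(serialVal, 0, 1023, 0, width);
  float y = map(serialVal, 0, 1023, 0, height);

  noFill();
  stroke(200);
  strokeWeight(0.5);
  line(150, y, x, 100);
  line(100, y, x, 150);
  ellipse(x, y, 50 + x, 50 + y);

  strokeWeight(2);
  line(250, y, x, 55);
  line(55, y, x, 250);
  ellipse(x, y, x - 50, y - 50);
}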

 

 


Lab 4: FSR and Graphics

Description:

I thought there were two parts to the assignment:

1. making a graphic that reacts to the FSR
2. making a reactive graphic that coincides with a pressure-sensor application

So for part 2, I thought of making a graphic that can tell you when you are pressing on your cellphone and hopefully warn you before you crack it.

Components:

  • Arduino Uno
  • Breadboard
  • 1 force sensor
  • 1 220Ω resistor
  • 1 10Ω resistor
  • wires

The Part 1 graphic is a sort of abstract pattern of lines and circles. I thought it was interesting that the lines appear to rotate when you feed serialVal into the variables; a small variation on this idea is sketched after the code below.

— processing code —

import processing.serial.*; // needed for the Serial class
String portname = "/dev/cu.usbmodem1421"; // or "COM5"
Serial port;
String buf = "";
int cr = 13; // ASCII return == 13
int lf = 10; // ASCII linefeed == 10

int serialVal = 0;

void setup() {
size(640,640);
frameRate(10);
smooth();
background(40,40,40);
noStroke();
port = new Serial(this, portname, 9600);
}

void draw() {
// erase the screen
background(100, 40, 40);

// draw the line-and-circle pattern
noFill();
stroke(200);
strokeWeight(.5);
line(150,serialVal,serialVal,100);
line(100,serialVal,serialVal,150);
ellipse(serialVal, serialVal, 50 + serialVal, 50 + serialVal);

strokeWeight(1);
line(200,serialVal,serialVal,75);
line(75,serialVal,serialVal,200);
ellipse(serialVal, serialVal, serialVal, serialVal);

strokeWeight(2);
line(250,serialVal,serialVal,55);
line(55,serialVal,serialVal,250);
ellipse(serialVal, serialVal, serialVal - 50, serialVal - 50);

strokeWeight(2.5);
line(300,serialVal,serialVal,35);
line(35,serialVal,serialVal,300);
ellipse(serialVal, serialVal, serialVal - 150, serialVal - 150);

strokeWeight(3.5);
line(500,serialVal,serialVal,15);
line(15,serialVal,serialVal,500);

}

// called whenever serial data arrives
void serialEvent(Serial p) {
int c = port.read();
if (c != lf && c != cr) {
buf += char(c);
}
if (c == lf) {
serialVal = int(buf);
println("potValue=" + serialVal);
buf = “”;
}
}
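The rotation effect happens because serialVal sits in both endpoints of every line, so the lines pivot as the reading changes. A more explicit way to get a similar feel (just a sketch of an alternative I didn’t build, using the same serialVal arriving over serial) is to convert the reading into an angle and rotate one fixed pattern around the center of the window:

// alternative draw() for the Part 1 sketch; setup() and serialEvent() stay the same
void draw() {
  background(100, 40, 40);
  noFill();
  stroke(200);
  strokeWeight(1);

  // the 0-1023 sensor reading becomes a full turn of rotation
  float angle = map(serialVal, 0, 1023, 0, TWO_PI);

  pushMatrix();
  translate(width/2, height/2); // rotate around the center, not the corner
  rotate(angle);
  for (int i = 0; i < 6; i++) {
    rotate(TWO_PI / 6.0); // six evenly spaced spokes
    line(0, 0, 200, 0);
    ellipse(200, 0, 40, 40);
  }
  popMatrix();
}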

+ + + + +

For the Part 2 graphic, I can’t really create an actual connection between the cellphone and the graphic, so I made a visual mockup on the computer monitor where I overlaid the Processing sketch window on a photo of a cellphone.

If you press on the center of the phone, the screen turns from black to red (the universal warning color) and the text “—hey— watch it.” comes up.

—processing code—

import processing.serial.*; // needed for the Serial class
String portname = "/dev/cu.usbmodem1421"; // or "COM5"
Serial port;
String buf = "";
int cr = 13; // ASCII return == 13
int lf = 10; // ASCII linefeed == 10

int serialVal = 0;

void setup() {
size(325,560);
frameRate(10);
smooth();
background(40,40,40);
noStroke();
port = new Serial(this, portname, 9600);
}

void draw() {

background(serialVal, serialVal-155, serialVal-155);
stroke(255,0,0);
fill(serialVal, serialVal, serialVal);
textSize(15);
text("—hey—", 130, 260);
text("watch it.", 130, 300);

}

// called whenever serial data arrives
void serialEvent(Serial p) {
int c = port.read();
if (c != lf && c != cr) {
buf += char(c);
}
if (c == lf) {
serialVal = int(buf);
println("potValue=" + serialVal);
buf = “”;
}
}
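A variation I considered but didn’t build (the cutoff value here is just a guess): instead of letting the background fade continuously with serialVal, compare the reading against a threshold so the red warning only flips on when the press is actually hard. Only draw() changes; setup() and serialEvent() stay as above.

int pressLimit = 600; // guessed cutoff for "pressing too hard" on the 0-1023 scale

void draw() {
  if (serialVal > pressLimit) {
    background(255, 0, 0); // red, the universal warning color
    fill(255);
    textSize(15);
    text("—hey—", 130, 260);
    text("watch it.", 130, 300);
  } else {
    background(40, 40, 40); // stay dark while the pressure is light
  }
}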

 

 

 


Between the Cracks of Time

The description of calm technology as an engagement that alternates between the user’s needs for “center” and “periphery” is also an analogy for how memory functions. The most popular conception of memory is that it exists as a linear sequence of happenings, then sits in the back of the mind like a dusty VHS tape waiting to be discovered and played in its entirety. Charles Fernyhough, a British developmental psychologist and author of the book “Pieces of Light,” writes of research that increasingly points toward memories being anything but neatly packaged, linear strands of data. If anything, they exist as fragments of details hovering at the fringe of consciousness, assembled by the mind when the situation requires. In that light, pun unintended, memories are notoriously unreliable, as suggestions inserted at the time of recollection can easily be woven into the “memory.”

I’m interested in creating an interactive scenario in which fragments of a particular environment are recreated as a projection. I find that the more time one spends in a space, the less one “sees” of the details. The mundane breeds indifference. One may pass a metal ashtray with a little ceramic frog hanging off its side, on a table at the corner of a long hallway, ten times a day: to use the bathroom, to find a colleague, to go to the kitchen, and so on. And then one day someone shows this person the ashtray with the frog, and he sees it for the first time: one of many fragments of objects floating in the periphery of his memory of this hallway.

To another person, this ashtray may be a portal to another time, another world. Person B, as opposed to Person A to whom the frog ashtray is invisible, had traveled to Prague five years ago and gotten lost in the winding old city streets looking for the rumored former residence of Franz Kafka. Frustrated, she found a cafe and flopped down to collect her bearings over a coffee. And on this cafe table was an odd-looking ashtray with a plastic toy frog that a child who sat there before her had perhaps forgotten. The tablecloth under the ashtray was checkered red and white, and the chairs were brown wood, worn in the seats where countless patrons had sat. So when Person B, who works in the same hallway as Person A, passes this ashtray in the hallway, it jumps out at her like a beacon.

To these two persons, “center” and “periphery” are the same object: the froggy ashtray. One sees and does not see simultaneously.

The space around them can also be a periphery, and be subject to fragmented shifts. My visual work collects fragmented information about a particular space. Layers of time can be overlapped and interwoven within these fragments. The ambient layer of information could be the quiet shifting of these layers as they merge between the present and the past. Should a viewer become more invested in exploring the scene, the viewer’s proximity, or interaction with an object related to the ambient scene, may trigger another level of information: the periphery becomes the center, a portal. The oxymoronic osmosis between these two boundaries creates a sense of journey and discovery within the limbo between the cracks of time.


Need a team!

Hi folks,

My name is Olivia, and my partner just dropped the TUI class, so I don’t have a team anymore. My interest is in cross-sensorial perceptions (i.e., “seeing” sounds or “hearing” visuals) in any context. Any teams out there that could use a third person?

Thanks! You can email me at oting1@nullgmail.com

Lab 3: 3 Potentiometers for 3 LEDs

In this lab, the brightness of each of the three LEDs can be independently controlled by its own corresponding potentiometer.

 

components

  • 1 Arduino
  • 1 breadboard
  • 1 red LED
  • 1 green LED
  • 1 blue LED
  • 3 potentiometers
  • 3 220-ohm resistors
  • wires

 

code

int potPinB = 0; // Analog input pin that the potentiometer is attached to
int potValueB = 0; // value read from the pot
int ledB = 9; // PWM pin that the LED is on. n.b. PWM 0 is on digital pin 9
int potPinG = 1; // Analog input pin that the potentiometer is attached to
int potValueG = 0; // value read from the pot
int ledG = 10; // PWM pin that the LED is on. n.b. PWM 0 is on digital pin 10
int potPinR = 2; // Analog input pin that the potentiometer is attached to
int potValueR = 0; // value read from the pot
int ledR = 11; // PWM pin that the LED is on. n.b. PWM 0 is on digital pin 11
void setup() {
// initialize serial communications at 9600 bps:
Serial.begin(9600);
// declare the led pin as an output:
pinMode(ledB, OUTPUT);
pinMode(ledG, OUTPUT);
pinMode(ledR, OUTPUT);
}

void loop() {
potValueB = analogRead(potPinB); // read the pot value
potValueG = analogRead(potPinG); // read the pot value
potValueR = analogRead(potPinR); // read the pot value
analogWrite(ledB, potValueB/4); // PWM the LED with the pot value (divided by 4 to fit in a byte)
analogWrite(ledG, potValueG/4); // PWM the LED with the pot value (divided by 4 to fit in a byte)
analogWrite(ledR, potValueR/4); // PWM the LED with the pot value (divided by 4 to fit in a byte)
}

 

 

 


The Rattling Chair

I’m interested in an interface that does not involve a user interacting with a physical object to induce an output. What if the user’s movements within a given perimeter of space were the input? The user’s input could be detected by sensors that don’t require contact with a physical object, such as sonar, infrared, or motion sensors. I am interested in the ability of the human body to retain a memory of its position within an environment, and in how it navigates itself through that environment. What does the body remember through its senses?

In a way this scenario might also be defined as a tangible user interface because it does take place in a physical space.

A blind person navigates through touch and sound. A deaf person navigates through sight and proportion. Each is an example of a person who, lacking one sense, utilizes the available senses to make sense of feedback. A blind person may move toward sensations that evoke a pleasing emotional response, such as a sound, a warmth, or a smell. This would be somewhat of an example of environmental embodiment, where the output (as in “Toon Town”) is ambient, but the input is the user rather than an avatar. Chris Downey, a blind architect who lost his sight in the aftermath of successful surgery to remove a brain tumor, speaks of discovering a world of information, garnered from his other senses, that defines his relationship to his environment.

Here is a story from his TED Talk that illuminates the range of emotion that can come with discovering one’s surroundings:

“So, stepping down out of the bus, I headed back to the corner to head west en route to a braille training session. It was the winter of 2009, and I had been blind for about a year. Things were going pretty well. Safely reaching the other side, I turned to the left, pushed the auto-button for the audible pedestrian signal, and waited my turn. As it went off, I took off and safely got to the other side. Stepping onto the sidewalk, I then heard the sound of a steel chair slide across the concrete sidewalk in front of me. I know there’s a cafe on the corner, and they have chairs out in front, so I just adjusted to the left to get closer to the street. As I did, so slid the chair. I just figured I’d made a mistake, and went back to the right, and so slid the chair in perfect synchronicity. Now I was getting a little anxious. I went back to the left, and so slid the chair, blocking my path of travel. Now, I was officially freaking out. So I yelled, “Who the hell’s out there? What’s going on?” Just then, over my shout, I heard something else, a familiar rattle. It sounded familiar, and I quickly considered another possibility, and I reached out with my left hand, as my fingers brushed against something fuzzy, and I came across an ear, the ear of a dog, perhaps a golden retriever. Its leash had been tied to the chair as her master went in for coffee, and she was just persistent in her efforts to greet me, perhaps get a scratch behind the ear. Who knows, maybe she was volunteering for service.”

In a designed scenario, a seeing person may remember where they are if they are able to see the objects that anchor their position within a space. Suppose they are in a dark, blank room with no objects or markers. They could be disoriented and emotionally unmoored. Suppose they were to move toward one corner and find that gentle lights illuminate that corner, and that in tandem with that light source a warmth also emanates (the presence of a heater); the combination of these two senses could suggest being outdoors, in the presence of sunlight. If this person moves away from this area, the light and warmth diminish. That particular corner would take on an emotional value. Conversely, another area in that blank room could trigger displeasing effects; the person would avoid that area. In short, what ultimately happens is that the user develops a spatial map of the environment based on their physical position within that space. It is also about discovering the relationship between their body’s physicality and an environment, such as an urban community.

Lab 2

Part 1:  LIGHT DIFFUSER

I made an origami flower out of waxed paper. I was curious how the wax coating on the paper would affect the light quality. The origami flower is folded in a way that the center is a hollow sphere, with the petals attached to this sphere. I like how the light reaches beyond the sphere and is reflected on the petals.

Part 2: MANUAL LED OPERATION

I wanted to set up the LEDs so each could be individually brightened in increments of 50 each time the first letter of its color is typed in. When the sketch first loads, the lights are all off, but when the first letter of a color (e.g. “r” for red) is typed into the serial monitor, that color is brightened by an increment of 50. So the first “r” brings the light to 50, the second “r” brings it to 100, the third to 150, and so on. When the brightness reaches its maximum (250), the next “r” brings it back to 0, or off.

This control is repeated for the other two lights: “g” for green and “b” for blue.

materials:
—1 Arduino

—3 LEDs

—3 220 ohm resistors

—1 breadboard

—7 wires

 

CODE

char serInString[100]; // array that will hold the different bytes of the string. 100 = 100 characters;
// -> you must state how long the array will be, else it won't work properly
char colorCode;
int colorVal;

int redPin = 9; // Red LED, connected to digital pin 9
int greenPin = 10; // Green LED, connected to digital pin 10
int bluePin = 11; // Blue LED, connected to digital pin 11
int redVal = 0; // brightness value for red LED, must be between 0 and 255
int greenVal = 0; // brightness value for green LED, must be between 0 and 255
int blueVal = 0; // brightness value for blue LED, must be between 0 and 255

void setup() {
pinMode(redPin, OUTPUT); // sets the pins as output
pinMode(greenPin, OUTPUT);
pinMode(bluePin, OUTPUT);
Serial.begin(9600);
analogWrite(redPin, 0); // start with all LEDs off
analogWrite(greenPin, 0); // start with all LEDs off
analogWrite(bluePin, 0); // start with all LEDs off
Serial.println("enter color command (e.g. 'r43') :");
}

void loop () {
// clear the string
memset(serInString, 0, 100);
//read the serial port and create a string out of what you read
readSerialString(serInString);

colorCode = serInString[0];
if( colorCode == 'r' || colorCode == 'g' || colorCode == 'b' ) {
colorVal = atoi(serInString+1);

serInString[0] = 0; // indicates we've used this string
if(colorCode == 'r'){
redVal = redVal + 50;
if (redVal > 255){
redVal = 0;
}
Serial.print("setting color ");
Serial.print(colorCode);
Serial.print(" to ");
Serial.print(redVal);
Serial.println();

analogWrite(redPin, redVal);
}
else if(colorCode == 'g'){
greenVal = greenVal + 50;
if (greenVal > 255){
greenVal = 0;
}
Serial.print("setting color ");
Serial.print(colorCode);
Serial.print(" to ");
Serial.print(greenVal);
Serial.println();

analogWrite(greenPin, greenVal);
}
else if(colorCode == 'b'){
blueVal = blueVal + 50;
if (blueVal > 255){
blueVal = 0;
}
Serial.print("setting color ");
Serial.print(colorCode);
Serial.print(" to ");
Serial.print(blueVal);
Serial.println();

analogWrite(bluePin, blueVal); // write the blue value inside its own branch, matching red and green
}
}

delay(100); // wait a bit, for serial data
}

//read a string from the serial and store it in an array
//you must supply the array variable
void readSerialString (char *strArray) {
int i = 0;
if(!Serial.available()) {
return;
}
while (Serial.available()) {
strArray[i] = Serial.read();
i++;
}
}

 

 

 


David Hockney’s iPad drawings

In my personal experience with drawing apps for tablets, the response to touch on the surface of the tablet (the would-be stand-in for a traditional drawing substrate) is quite impressive. The drawn markings reflect the amount of pressure I use: darker, more opaque, and thicker than in areas where I use a lighter touch. I can also layer colors at varying transparencies, mimicking different materials such as watercolor versus pastels or markers. The result is not too unlike using actual materials to make a drawing.

Drawing is an exercise in coordinating the hand to the eye. The first sketch tends to be less dependent on pressure, being mostly lines, but in order to develop the story within the sketch, emotions need to be evoked through color or a sense of the physical evidence left by the drawer.

Although I have not personally used a drawing app for tablets to any great extent, I had the opportunity to witness the result of an extensive exploration of the medium at an exhibit of David Hockney’s work at San Francisco’s de Young Museum a few years back. What distinguishes this body of work, titled “A Bigger Exhibition,” is that several of the pieces were created entirely digitally, using a humble drawing app on his iPhone and, later, his iPad.

I refer here to David Hockney’s work because his time investment in the medium reveals a unique level of craftsmanship. He started doodling with the Brushes app in 2008, when he discovered it on his iPhone. He has since graduated to using an iPad, which has a larger surface area and allows him to use “more of his fingers.”

This is an example of a medium changing the end product of a craft and altering the possibilities of the storytelling experience. The artwork can be presented in two forms: digitally, or traditionally printed as ink on paper. These two forms differ greatly in their experiential quality; the colors are different because they are based on different technologies: RGB for light, and CMYK for ink on paper. The works shown digitally have an inherent luminosity because of the nature of the medium; the screens themselves are backlit.

The other experiential quality of the works is the ease with which their scale of presentation can be enlarged. They can be enlarged digitally on a computer and printed on large-scale fabrics, or mapped onto several monitors mounted on a wall. With traditional substrates such as paper or canvas, the physicality of the object does not allow for a change in scale unless the drawing itself is painstakingly replicated onto another object using the grid method, and that object would have to be the size of the intended final presentation.

The digital medium has its own limitation: the pixel, the fundamental unit of any visual information. Any scaling of the work depends on the finite resolution of the file itself; the more the work is enlarged, the more visible these pixels become. The experience of the viewer varies according to the obviousness of these building-block units. A viewer standing at a greater distance has a greater illusion of continuity of form and color, whereas standing up close, the viewer sees the evidence of this illusion created with blocks of solid color. There is a degree of revelation of the medium itself involved in the viewing experience. The viewer is reminded of the medium, but this is not much different from witnessing gobs of paint on a canvas surface up close. The medium may have changed, and the process of craftsmanship may have changed as well, but the larger concept of crafting a story with the available means remains the same.

One of the digital qualities that Hockney chose to embrace is the playback function of the Brushes app. Each drawing stroke is recorded as a separate layer of information, so the iPad is able to replay every layer individually, revealing an animation of time and process. With traditional substrates, this replay of time would be impossible. The viewer, and the artist himself, are able to “travel back in time,” so to speak.

While my experience with the tablet recreates a remarkable mimicry of the visual possibilities achievable with physical media, what is conspicuously absent is the friction of contact between actual objects. Hockney evidently noticed this as well; in an interview he said: “You miss the resistance of paper a little, but you can get a marvellous flow. So much variety is possible. You can’t overwork this, because it’s not a real surface. In watercolour, for instance, about three layers are the maximum. Beyond that it starts to get muddy. Here you can put anything on anything. You can put a bright, bright blue on top of an intense yellow.”

In response to the present limitations of recreating the physical world, companies such as Fujitsu have developed a prototype “haptic sensory tablet” that can convey a sense of contrasting textures such as slipperiness or roughness. While simulated texture had previously been achieved with existing technology by generating static electricity, Fujitsu’s haptic sensory technology uses ultrasonic vibrations to vary the friction between the touchscreen display and the user’s fingertip.

People from the village come up and tease me: “We hear you’ve started drawing on your telephone.” And I tell them, “Well, no, actually, it’s just that occasionally I speak on my sketch pad.”—David Hockney

How to enlarge a picture using a homemade drawing grid or viewfinder.