Archive for the ‘presentations’ Category

This blog post is a version of a talk given at the 2018 ACM Computer Supported Cooperative Work and Social Computing (CSCW) Conference based on a paper written by Richmond Wong, Deirdre Mulligan, Ellen Van Wyk, John Chuang, and James Pierce, entitled Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks, which was honored with a best paper award. Find out more on our project page, our summary blog post, or download the paper: [PDF link] [ACM link]

In the work described in our paper, we created a set of conceptual speculative designs to explore privacy issues around emerging biosensing technologies, technologies that sense human bodies. We then used these designs to help elicit discussions about privacy with students training to be technologists. We argue that this approach can be useful for Values in Design and Privacy by Design research and practice.


Image from publicintelligence.net. Note the middle bullet point in the middle column – “avoids all privacy issues.”

 

Let me start with a motivating example, which I’ve discussed in previous talks. In 2007, the US Department of Homeland Security proposed a program to try to predict criminal behavior in advance of the crime itself, using thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. And supposedly it would “avoid all privacy issues.” But it seems pretty clear that privacy was not fully thought through in this project. Now, Homeland Security projects do go through privacy impact assessments, and I would guess that in this case the assessment would find that the system doesn’t store the biosensed data, and conclude that privacy is protected. But while that might address one conception of privacy related to storing data, there are other conceptions of privacy at play. There are still questions here about consent and movement in public space, about data use and collection, and about fairness and privacy from algorithmic bias.

While that particular imagined future hasn’t come to fruition, many of these types of sensors are now becoming available as consumer devices, used in applications ranging from health and the quantified self, to interpersonal interactions, to tracking and monitoring. And it often seems like privacy isn’t fully thought through before new sensing devices and services are publicly announced or released.

A lot of existing privacy approaches, like privacy impact assessments, are deductive and checklist-based, or assume that privacy problems are already known and well-defined in advance, which often isn’t the case. Furthermore, the term “design” in discussions of Privacy by Design is often seen as a way of providing solutions to problems identified by law, rather than as a generative set of practices useful for understanding what privacy issues might need to be considered in the first place. We argue that speculative design-inspired approaches can help explore and define problem spaces of privacy in inductive, situated, and contextual ways.

Design and Research Approach

We created a design workbook of speculative designs. Workbooks are collections of conceptual designs drawn together to allow designers to explore and reflect on a design space. Speculative design is a practice of using design to ask social questions, by creating conceptual designs or artifacts that help create or suggest a fictional world. We can create speculative designs to explore different configurations of the world, and to imagine and understand possible alternative futures, which helps us think through issues that have relevance in the present. So rather than starting by trying to find design solutions for privacy, we used design workbooks and speculative design together to create a collection of designs that helped us explore what the problem space of privacy might look like with emerging biosensing technologies.


A sampling of the conceptual designs we created as part of our design workbook

In our prior work, we created a design workbook to do this exploration and reflection. Inspired by recent research, science fiction, and trends in the technology industry, we created a couple dozen fictional products, interfaces, and webpages of biosensing technologies. These included smart-camera-enabled neighborhood watch systems, advanced surveillance systems, implantable tracking devices, and non-contact remote sensors that detect people’s heart rates. This process is documented in a paper from Designing Interactive Systems. These were created as part of a self-reflective exercise, for us as design researchers to explore the problem space of privacy. However, we wanted to know how non-researchers, particularly technology practitioners, might discuss privacy in relation to these conceptual designs.

A note on how we’re approaching privacy and values. Following other Values in Design work and privacy research, we want to avoid providing a single universalizing definition of privacy as a social value. We recognize privacy as inherently multiple: something that is situated and differs across contexts and situations.

Our goal was to use our workbook as a way to elicit values reflections and discussion about privacy from our participants – rather than looking for “stakeholder values” to generate design requirements for privacy solutions. In other words, we were interested in how technologists-in-training would use privacy and other values to make sense of the designs.

Growing regulatory calls for “Privacy by Design” suggest that privacy should be embedded into all aspects of the design process, and at least partially done by designers and engineers. Because of this, the ability of technology professionals to surface, discuss, and address privacy and related values is vital. We wanted to know how people training for those jobs might use privacy to discuss their reactions to these designs. We conducted an interview study, recruiting 10 graduate students from a West Coast US university who are training to go into technology professions, most of whom had prior tech industry experience through jobs or internships. At the start of each interview, we gave participants a physical copy of the designs and explained that the designs were conceptual, but didn’t tell them that the designs were initially made to think about privacy issues. In the following slides, I’ll show a few examples of the speculative design concepts we showed (you can see more of them in the paper), and then discuss the ways in which participants used values to make sense of or react to some of the designs.

Design Examples

This design depicts an imagined surveillance system for public spaces, like airports, that automatically assigns threat statuses to people by color-coding them. We intentionally left ambiguous how the design makes its color-coding determinations, to invite questions about how the system classifies people.


Conceptual TruWork design – “An integrated solution for your office or workplace!”

In our designs, we also began to iterate on ideas relating to tracking implants and the different types of social contexts in which they could be used. Here’s a scenario advertising a workplace implantable tracking device called TruWork. Employers can subscribe to the service and make their employees implant these devices to keep track of their whereabouts and work activities, in order to improve efficiency.


Conceptual CoupleTrack infographic depicting an implantable tracking chip for couples

We also re-imagined the implant as “CoupleTrack,” an implantable tracking chip for couples to use, as shown in this infographic.

Findings

We found that participants centered values in their discussions when looking at the designs – predominantly privacy, but also related values such as trust, fairness, security, and due process. We found eight themes of how participants interacted with the designs in ways that surfaced discussion of values; I’ll highlight three here: imagining the designs as real, seeing one’s self as multiple users, and seeing one’s self as a technology professional. The rest are discussed in more detail in the paper.

Imagining the Designs as Real


Conceptual product page for a small, hidden, wearable camera

Even though participants were aware that the designs were imagined, some imagined the designs as seemingly real by thinking about long-term effects in the fictional world of the design. The design pictured above is an easily hideable, wearable, live-streaming HD camera. One participant imagined what could happen to social norms if these became widely adopted, saying, “If anyone can do it, then the definition of wrong-doing would be questioned, would be scrutinized.” He suggested that previously unmonitored activities would become open to surveillance and tracking, like “are the nannies picking up my children at the right time or not? The definition of wrong-doing will be challenged.” Participants became actively involved in fleshing out and creating the worlds in which these designs might exist. This reflection is also interesting because it begins to consider secondary implications of widespread adoption, highlighting potential changes in social norms with increasing data collection.

Seeing One’s Self as Multiple Users

Second, participants took multiple user subject positions in relation to the designs. One participant read the webpage for TruWork and laughed at the design’s claim to create a “happier, more efficient workplace,” saying, “This is again, positioned to the person who would be doing the tracking, not the person who would be tracked.” She noted that the website is really aimed at the employer. She then imagined herself as an employee using the system, saying:

If I called in sick to work, it shouldn’t actually matter if I’m really sick. […] There’s lots of reasons why I might not wanna say, “This is why I’m not coming to work.” The idea that someone can check up on what I said—it’s not fair.

This participant put herself in the viewpoints of both an employer using the system and an employee being tracked by it, bringing up issues of workplace surveillance and fairness. Taking multiple subject positions like this allowed participants to see the values implications of the designs from different stakeholder viewpoints.

Seeing One’s Self as a Technology Professional

Third, participants also looked at the designs through the lens of being a technology practitioner, relating the designs to their own professional practices. Looking at the design that automatically detects and flags supposedly suspicious people, one participant reflected on his self-identification as a data scientist and on the values implications of predicting criminal behavior with data, saying:

the creepy thing, the bad thing is, like—and I am a data scientist, so it’s probably bad for me too, but—the data science is predicting, like Minority Report… [and then half-jokingly says] …Basically, you don’t hire data scientists.

Here he began to reflect on how his own practices as a data scientist might be implicated in this product’s creepiness: that his initial propensity to want to use the data to predict whether subjects are criminals might not be a good way to approach the problem, and might have implications for due process.

Another participant compared the CoupleTrack design to a project he was working on. He said:

[CoupleTrack] is very similar to our idea. […] except ours is not embedded in your skin. It’s like an IOT charm which people [in relationships] carry around. […] It’s voluntary, and that makes all the difference. You can choose to keep it or not to keep it.

In comparing the fictional CoupleTrack product to the product he was working on in his own technical practice, the value of consent, and how one might revoke consent, became very clear to this participant. Again, we found it compelling that the designs led some participants to begin reflecting on the privacy implications of their own technical practices.

Reflections and Takeaways

Given the workbooks’ ability to help elicit reflections on and discussion of privacy in multiple ways, we see this approach as useful for future Values in Design and Privacy by Design work.

The speculative workbooks helped open up discussions about values, similar to what Katie Shilton identifies as “values levers”: activities that foreground values and cause them to be viewed as relevant and useful to design. Participants seeing themselves as users in order to reflect on privacy harms is similar to prior work showing how self-testing can lead to discussion of values. Participants looking at the designs from multiple subject positions evokes value sensitive design’s foregrounding of multiple stakeholder perspectives. Participants reflected on the designs both from stakeholder subject positions and through the lenses of their professional practices as technology practitioners in training.

While Shilton identifies a range of people who might surface values discussions, we see the workbook itself as an actor that can help surface them. By depicting provocative designs that elicited visceral and affective reactions, the workbooks brought attention to questions about potential sociotechnical configurations of biosensing technologies. Future values in design work might consider creating and sharing speculative design workbooks for eliciting values reflections with experts and technology practitioners.

More specifically, given this project’s focus on privacy, we think this approach might be useful for “Privacy by Design,” particularly for technologists trying to surface discussions about the nature of the privacy problem at play for an emerging technology. We analyzed participants’ responses using Mulligan et al.’s privacy analytic framework. The paper discusses this in more detail, but the important point is that participants went beyond just saying that privacy and other values are important to think about. They began to grapple with specific, situated, and contextual aspects of privacy, such as considering different ways to consent to data collection, or noting the different types of harms that might emerge when the same technology is used in a workplace setting compared to an intimate relationship. Privacy professionals are looking for tools to help them “look around corners,” to understand what new types of privacy problems might occur in emerging technologies and contexts. This provides a potential new tool for privacy professionals, in addition to many of the current top-down, checklist approaches, which assume that the concepts of privacy at play are well known in advance. Speculative design practices can be particularly useful here: not to predict the future, but to help open up and explore the space of possibilities.

Thank you to my collaborators, our participants, and the anonymous reviewers.

Paper citation: Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 111 (December 2017), 26 pages. DOI: https://doi.org/10.1145/3134746

This post summarizes a research paper, Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks, co-authored with Deirdre Mulligan, Ellen Van Wyk, John Chuang, and James Pierce. The paper will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on Monday November 5th (in the afternoon Privacy in Social Media session). Full paper available here.

Recent wearable and sensing devices, such as Google Glass, Strava, and internet-connected toys, have raised questions about the ways in which privacy and other social values might be implicated by their development, use, and adoption. At the same time, legal, policy, and technical advocates for “privacy by design” have suggested that privacy should be embedded into all aspects of the design process, rather than being addressed after a product is released, or treated as just a legal issue. If privacy is to be addressed through technical design processes, the ability of technology professionals to surface, discuss, and address privacy and other social values becomes vital.

Companies and technologists already use a range of tools and practices to help address privacy, including privacy engineering practices and making privacy policies more readable and usable. But many existing privacy mitigation tools are either deductive or assume that privacy problems are already known and well-defined in advance. In practice, we often don’t have privacy concerns well conceptualized in advance when creating systems. Our research shows that design approaches (drawing on a set of techniques called speculative design and design fiction) can help better explore, define, and perhaps even anticipate what we mean by “privacy” in a given situation. Rather than trying to apply a single, abstract, universal definition of privacy, these methods help us think about privacy as relations among people, technologies, and institutions in different types of contexts and situations.

Creating Design Workbooks

We created a set of design workbooks — collections of design proposals or conceptual designs, drawn together to allow designers to investigate, explore, reflect on, and expand a design space. We drew on speculative design practices: in brief, our goal was to create a set of slightly provocative conceptual designs to help engage people in reflections or discussions about privacy (rather than propose specific solutions to problems posed by privacy).

A set of sketches that comprise the design workbook

Inspired by science fiction, technology research, and trends in the technology industry, we created a couple dozen fictional products, interfaces, and webpages of biosensing technologies, or technologies that sense people. These included smart-camera-enabled neighborhood watch systems, advanced surveillance systems, implantable tracking devices, and non-contact remote sensors that detect people’s heart rates. In earlier design work, we reflected on how putting the same technologies into different situations, scenarios, and social contexts would vary the types of privacy concerns that emerged (such as the different concerns that would arise if advanced miniature cameras were used by the police, by political advocates, or by the general public). However, we wanted to see how non-researchers might react to and discuss the conceptual designs.

How Did Technologists-In-Training View the Designs?

Through a series of interviews, we shared our workbook of designs with master’s students in an information technology program who were training to go into the tech industry. We found several ways in which they brought up privacy-related issues while interacting with the workbooks, and highlight three of those ways here.


TruWork — A product webpage for a fictional system that uses an implanted chip allowing employers to keep track of employees’ location, activities, and health, 24/7.

First, our interviewees discussed privacy by taking on multiple user subject positions in relation to the designs. For instance, one participant looked at the fictional TruWork workplace implant design by imagining herself in the positions of both an employer using the system and an employee using the system, noting how the product’s claim of creating a “happier, more efficient workplace” was a value proposition aimed at the employer rather than the employee. While the system promises to tell employers whether or not their employees are lying about why they need a sick day, the participant noted that there might be many reasons why an employee might need to take a sick day, and that those reasons should be private from their employer. These reflections are valuable, as prior work has documented how considering the viewpoints of direct and indirect stakeholders is important for addressing social values in design practices.


CoupleTrack — an advertising graphic for a fictional system that uses an implanted chip that people in a relationship wear in order to keep track of each other’s location and activities.

A second way privacy reflections emerged was when participants discussed the designs in relation to their professional technical practices. One participant compared the fictional CoupleTrack implant to a wearable device for couples that he was building, in order to discuss different ways in which consent to data collection can be obtained and revoked. CoupleTrack’s embedded nature makes it much more difficult to revoke consent, while a wearable device can be more easily removed. This is useful because we’re looking for ways workbooks of speculative designs can help technologists discuss privacy in ways that they can relate back to their own technical practices.


Airport Tracking System — a sketch of an interface for a fictional system that automatically detects and flags “suspicious people” by color-coding people in surveillance camera footage.

A third theme we found was that participants discussed and compared multiple ways in which a design could be configured or implemented. Our designs tend to describe products’ functions but do not specify technical implementation details, allowing participants to imagine multiple implementations. For example, a participant looking at the fictional automatic airport tracking and flagging system discussed the privacy implications of two possible implementations: one where the system only identifies and flags people with a prior criminal history (which might create extra burdens for people who have already served their time for a crime and have been released from prison), and one where the system uses behavioral predictors to try to identify “suspicious” behavior (which might go against the notion of “innocent until proven guilty”). The designs were useful for provoking conversations about the privacy and values implications of different design decisions.

Thinking About Privacy and Social Values Implications of Technologies

This work provides a case study showing how design workbooks and speculative design can be useful for thinking about the social values implications of technology, particularly privacy. In the time since we’ve made these designs, some (sometimes eerily) similar technologies have been developed or released, such as workers at a Swedish company embedding RFID chips in their hands, or Logitech’s Circle Camera.

But our design work isn’t meant to predict the future. Instead, we tried to take some technologies that are emerging or on the near horizon, and think seriously about the ways in which they might get adopted, used and misused, or interact with existing social systems — such as the workplace, government surveillance, or school systems. How might privacy and other values be at stake in those contexts and situations? We aim for these designs to help shed light on the space of possibilities, in an effort to help technologists make more socially informed design decisions in the present.

We find it compelling that our design workbooks helped technologists-in-training discuss emerging technologies in relation to everyday, situated contexts. These workbooks don’t depict far-off speculative science fiction with flying cars and spaceships. Rather, they imagine future uses of technologies by having someone look at a product website, an amazon.com page, or an interface, and think about the real and diverse ways in which people might experience those technology products. Focusing on the potential adoption and use of emerging technologies in everyday contexts helps raise issues that might not be immediately obvious if we only think about the positive social implications of technologies, and also helps surface issues that we might miss if we only think about social implications in terms of “worst case scenarios” or dystopias.

Paper Citation:

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 111 (December 2017), 26 pages. DOI: https://doi.org/10.1145/3134746


This post is crossposted with the ACM CSCW Blog

This blog post is a version of a talk I gave at the 2018 ACM Designing Interactive Systems (DIS) Conference based on a paper written with Nick Merrill and John Chuang, entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. Find out more on our project page, or download the paper: [PDF link] [ACM link]

In recent years, brain computer interfaces, or BCIs, have shifted from far-off science fiction, to medical research, to the realm of consumer-grade devices that can sense brainwaves and EEG signals. Brain computer interfaces have also featured more prominently in corporate and public imaginations, such as Elon Musk’s project that has been said to create a global shared brain, or fears that BCIs will result in thought control.

Most of these narratives and imaginings about BCIs tend to be utopian, or dystopian, imagining radical technological or social change. However, we instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain computer interfaces onto the everyday and mundane worlds we already live in? How can we explore how BCI uses, benefits, and labor practices may not be evenly distributed when they get adopted?

Brain-computer interfaces allow the control of a computer from neural output. In recent years, several consumer-grade brain-computer interface devices have come to market. One example is the Neurable, a headset used as an input device for virtual reality systems. It detects when a user recognizes an object that they want to select, using a phenomenon called the P300: when a person either recognizes a stimulus or receives a stimulus they are not expecting, electrical activity in their brain spikes approximately 300 milliseconds after the stimulus. This spike can be detected with EEG, including by several consumer BCI devices such as the Neurable. Applications utilizing the P300 phenomenon include hands-free ways to type or click.

Demo video of a text entry system using the P300

Neurable demonstration video
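To make the mechanics concrete, here is a minimal sketch of how a P300-style selection could be scored from a single EEG channel. This is my own illustration, not code from Neurable or any paper mentioned here; the sampling rate, analysis window, baseline length, and synthetic data are all assumptions.

    import numpy as np

    FS = 250  # assumed EEG sampling rate (Hz)

    def p300_scores(eeg, stim_onsets, fs=FS):
        """Crudely score each stimulus onset for a P300-like response.

        eeg: 1-D array of samples from one channel (e.g., electrode Pz).
        stim_onsets: sample indices at which items were flashed.
        For each onset, average the signal 250-400 ms post-stimulus and
        subtract a 100 ms pre-stimulus baseline; a large positive
        deflection suggests the wearer recognized that stimulus.
        """
        scores = []
        for onset in stim_onsets:
            baseline = eeg[onset - int(0.1 * fs):onset].mean()
            window = eeg[onset + int(0.25 * fs):onset + int(0.40 * fs)]
            scores.append(window.mean() - baseline)
        return np.array(scores)

    def select_item(eeg, onsets_by_item, fs=FS):
        """Speller-style selection: pick the item whose flashes evoked
        the strongest average P300-like score."""
        return max(onsets_by_item,
                   key=lambda item: p300_scores(eeg, onsets_by_item[item], fs).mean())

    # Synthetic demo: item "B" gets an injected post-stimulus deflection.
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=10 * FS)
    onsets = {"A": [500, 1200], "B": [800, 1600]}
    for t in onsets["B"]:
        eeg[t + int(0.28 * FS):t + int(0.36 * FS)] += 3.0  # bump ~300 ms after flash
    print(select_item(eeg, onsets))  # expected to print "B"

In a real system, single-trial P300 detection is of course much noisier; devices typically average over repeated flashes and use trained classifiers rather than a fixed window average.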

We base our analysis on this already-existing capability of brain computer interfaces, rather than the more fantastical narratives (at least for now) of computers being able to clearly read humans’ inner thoughts and emotions. Instead, we create a set of scenarios that makes use of the P300 phenomenon in new applications, combined with the adoption of consumer-grade BCIs by new groups and social systems.

Stories about BCIs’ hypothetical future as a device to make life easier for “everyone” abound, particularly in Silicon Valley, as shown in recent research. These tend to be very totalizing accounts, neglecting the nuance of multiple everyday experiences. However, past research shows that the introduction of new digital technologies ends up unevenly shaping practices and arrangements of power and work – from the introduction of computers into workplaces in the 1980s, to the introduction of email, to the forms of labor enabled by algorithms and digital platforms. We use a set of design fictions to interrogate these potential arrangements in BCI systems, situated in different types of workers’ everyday experiences.

Design Fictions

Design fiction is a practice of creating conceptual designs or artifacts that help create a fictional reality. We can use design fiction to ask questions about possible configurations of the world and to think through issues that have relevance and implications for present realities. (I’ve written more about design fiction in prior blog posts).

We build on Lindley et al.’s proposal to use design fiction to study the “implications for adoption” of emerging technologies. They argue that design fiction can “create plausible, mundane, and speculative futures, within which today’s emerging technologies may be prototyped as if they are domesticated and situated,” which we can then analyze with a range of lenses, such as those from science and technology studies. For us, this lets us think about technologies beyond ideal use cases. It lets us attend to the power dynamics and inequalities that people experience today, and interrogate how emerging technologies might get taken up, reused, and reinterpreted in a variety of existing social relations and systems of power.

To explore this, we created a set of interconnected design fictions that exist within the same fictional universe, showing different sites of adoption and interaction. We build on Coulton et al.’s insight that design fiction can be a “world-building” exercise; design fictions can simultaneously exist in the same imagined world and provide multiple “entry points” into that world.

We created four design fictions that exist in the same world: (1) a README for a fictional BCI API, (2) a StackOverflow question from a programmer working with the API, (3) an internal business memo from an online dating company, and (4) a set of forum posts by crowdworkers who use BCIs to do content moderation tasks. These are downloadable at our project page if you want to see them in more detail. (I’ll also note that we conducted our work in the United States, and that our authorship of these fictions, as well as our interpretations and analysis, are informed by this sociocultural context.)

Design Fiction 1: README documentation of an API for identifying P300 spikes in a stream of EEG signals

 

First, this is README documentation of an API for identifying P300 spikes in a stream of EEG signals. The P300, or “oddball” response, is a real phenomenon: a spike in brain activity when a person is either surprised or sees something that they’re looking for. This fictional API helps identify those spikes in EEG data. We made this fiction in the form of a GitHub page to emphasize the everyday nature of this documentation from the viewpoint of a software developer. In the fiction, the algorithms underlying this API come from a specific set of training data collected in a controlled environment in a university research lab. The API discloses and openly links to the data that its algorithms were trained on.

For us, creating and analyzing this fiction surfaced an ambiguity and tension about how generalizable the system’s model of the brain is. Presenting the API with a README implies that the system is meant to be general-purpose, despite indications from its training dataset that it might be more limited. This fiction also gestures more broadly toward the involvement of academic research in larger technical infrastructures. The documentation notes that the API started as a research project by a professor at a university before becoming hosted and maintained by a large tech company. For us, this highlights how collaborations between research and industry may produce artifacts that move into broader contexts, even though researchers may not be thinking about the potential effects or implications of their technical systems in those contexts.
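To give a feel for the developer’s-eye view this fiction takes, below is a small sketch of what client code against such a cloud-hosted P300 service might look like. To be clear, this is my own hypothetical illustration: the endpoint, field names, and response format are invented for this post and are not taken from the workbook artifact itself.

    # Hypothetical client code for the *fictional* P300 API in this design
    # fiction; the endpoint and all field names are invented for this post.
    import numpy as np
    import requests

    FS = 250
    eeg = np.random.randn(5 * FS)  # stand-in for 5 seconds of one-channel EEG

    resp = requests.post(
        "https://api.example.com/v1/p300/detect",  # invented endpoint
        json={"samples": eeg.tolist(), "sampling_rate_hz": FS},
        timeout=10,
    )
    resp.raise_for_status()

    # In the fiction, the service returns timestamps of likely P300 spikes,
    # produced by a model trained (as the README discloses) on lab data
    # from a single university study: the source of the generalizability
    # questions that surface in the next fiction.
    for spike in resp.json().get("spikes", []):
        print(spike["time_s"], spike["confidence"])

Part of what the fiction highlights is how little of this provenance a developer calling the API would ever see; the training data’s origins live in the README, not in the interface.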

Design Fiction 2: A question on StackOverflow

 

Second, a developer, Jay, is working with the BCI API to develop a tool for content moderation. He asks a question on Stack Overflow, a real website for developers to ask and answer technical questions. He questions the API’s applicability beyond lab-based stimuli, asking “do these ‘lab’ P300 responses really apply to other things? If you are looking over messages to see if any of them are abusive, will we really see the ‘same’ P300 response?” The answers from other developers suggest that they predominantly believe the API is generalizable to a broader class of tasks, with the most agreed-upon answer saying “The P300 is a general response, and should apply perfectly well to your problem.”

This fiction helps us explore how and where contestation may occur in technical communities, and where discussion of social values or social implications could arise. We imagine the first developer, Jay, as someone who is sensitive to the way the API was trained and questions its applicability to a new domain. However, he encounters commenters who believe that physiological signals are always generalizable, and who don’t engage with questions of broader applicability. The community’s answers reinforce notions not just of what the technical artifact can do, but of what the human brain can do. The Stack Overflow answers draw on a popular, though critiqued, notion of the “brain-as-computer,” framing the brain as a processing unit with generic processes that take inputs and produce outputs. Here, this notion is reinforced in the social realm on Stack Overflow.

Design Fiction 3: An internal business memo for a fictional online dating company

 

Meanwhile, SparkTheMatch.com, a fictional online dating service, is struggling to moderate and manage inappropriate user content on their platform. SparkTheMatch wants to utilize the P300 signal to tap into people’s tacit “gut feelings” to recognize inappropriate content. They are planning to implement a content moderation process using crowdsourced workers wearing BCIs.

In creating this fiction, we use the memo to provide insight into some of the practices and labor supporting the BCI-assisted review process from the company’s perspective. The memo suggests that the use of BCIs with Mechanical Turk will “help increase efficiency” for crowdworkers while still giving them a fair wage. The crowdworkers sit and watch a stream of flashing content while wearing a BCI, and the P300 response will subconsciously identify when workers recognize supposedly abnormal content. Yet we find it debatable whether this process improves the material conditions of the Turk workers. The amount of content they must look at in order to make the supposedly fair wage may not actually be reasonable.

SparkTheMatch employees creating the Mechanical Turk tasks don’t directly interact with the BCI API. Instead, they use predefined templates created by the company’s IT staff, a much more mediated interaction compared to the programmers and developers reading documentation and posting on Stack Overflow. By this point, the research lab origins of the P300 API underlying the service, and the questions about its broader applicability, are hidden. From the viewpoint of SparkTheMatch staff, the BCI aspects of their service just “work,” allowing managers to design their workflows around them, obfuscating the inner workings of the P300 API.

Design Fiction 4: A crowdworker forum for workers who use BCIs

 

Fourth, the Mechanical Turk workers who do the SparkTheMatch content moderation work share their experiences on a crowdworker forum. These crowdworkers’ experiences of and relationships to the P300 API are strikingly different from those of the people and organizations described in the other fictions—notably, the API is something they never get to explicitly see. Aspects of the system are blackboxed or hidden away. While one poster discusses some errors that occurred, there’s ambiguity about whether the fault lies with the BCI device or with the data processing. EEG signals are not easily human-comprehensible, making feedback mechanisms difficult. Other posters blame the user for the errors, which is problematic given the precariousness of these workers’ positions, as crowdworkers tend to have few forms of recourse when encountering problems with tasks.

For us, these forum accounts are interesting because they describe a situation in which the BCI user is not the person who obtains the real benefits of its use. It’s the company SparkTheMatch, not the BCI end users, that obtains the most benefit from BCIs.

Some Emergent Themes and Reflections

From these design fictions, several salient themes arose for us. By looking at BCIs from the perspective of several everyday experiences, we can see the different types of work done in relation to BCIs – whether that’s doing software development, being a client of a BCI service, or using a BCI to conduct work. Our fictions are inspired by others’ research on existing labor relationships and power dynamics in crowdwork and distributed content moderation (in particular, work by scholars Lilly Irani and Sarah T. Roberts). Here we also critique utopian narratives of brain-controlled computing that suggest BCIs will create new efficiencies, seamless interactions, and increased productivity. We investigate a set of questions about the role of technology in shaping and reproducing social and economic inequalities.

Second, we use the design fictions to surface questions about the situatedness of brain sensing, questioning how generalizable and universal physiological signals are. Building on prior accounts of situated action and extended cognition, we note that the specific and the particular should be taken into account in the design of supposedly generalizable BCI systems.

These themes arose iteratively, and were somewhat surprising to us, particularly just how different the BCI system looks from each of the different perspectives in the fictions. We initially set out to create a rather mundane fictional platform or infrastructure: an API for BCIs. From this starting point, we brainstormed other types of direct and indirect relationships people might have with our BCI API, to create multiple “entry points” into the API’s world. We iterated on various types of relationships and artifacts—there are end users, but also clients, software engineers, and app developers, each of whom might interact with an API in different ways, directly or indirectly. Through iterations on different scenarios (a BCI-assisted tax filing service was considered at one point), and through discussions with our colleagues (some of whom posed questions about what labor in higher education might look like with BCIs), we slowly came to think that looking at the work practices implicated in these different relationships and artifacts would be a fruitful way to focus our designs.

Toward “Platform Fictions”

In part, we think that creating design fictions in mundane technical forms like documentation or Stack Overflow posts might help the artifacts be legible to software engineers and technical researchers. More generally, this leads us to think about what it might mean to put platforms and infrastructures at the center of design fiction (and to build on some of the insights from platform studies and infrastructure studies). Adoption and use do not occur in a vacuum. Rather, technologies get adopted into and by existing sociotechnical systems. We can use design fiction to open the “black boxes” of emerging sociotechnical systems. Given that infrastructures are often relegated to the background in everyday use, surfacing and focusing on an infrastructure helps us situate our design fictions in the everyday and mundane, rather than in dystopia or utopia.

We find that using a digital infrastructure as a starting point helps surface multiple subject positions in relation to the system at different sites of interaction, beyond those of end users. From each of these subject positions, we can see where contestation may occur, and how the system looks different. We can also see how assumptions, values, and practices surrounding the system at a particular place and time can be hidden, adapted, or changed by the time the system reaches others. Importantly, we also try to surface how the system gets used in potentially unintended ways – we don’t think that the academic researchers who developed the API to detect brain signal spikes imagined that it would be used in a system of arguably exploitative crowd labor for content moderation.

Our fictions try to blur distinctions that might suggest that what happens in “labs” is separate from “the outside world,” instead highlighting their entanglements. Given that much BCI research currently takes place in research labs, we raise this point to argue that BCI researchers and designers should also be concerned about the implications of adoption and application. This helps give us insight into the responsibilities (and complicity) of researchers and builders of technical systems. Some of the recent controversies around Cambridge Analytica’s use of Facebook’s API point to ways in which the building of platforms and infrastructures isn’t neutral, and that it’s incumbent upon designers, developers, and researchers to raise issues related to social concerns and potential inequalities arising from adoption and appropriation by others.

Concluding Thoughts

This work isn’t meant to be predictive. The fictions and analysis present our specific viewpoints by focusing on several types of everyday experiences. One can read many themes into our fictions, and we encourage others to do so. But we find that focusing on potential adoptions of an emerging technology in the everyday and mundane helps surface contours of debates that might occur, which might not be immediately obvious when thinking about BCIs – and might not be immediately obvious if we think about social implications in terms of “worst case scenarios” or dystopias. We hope that this work can raise awareness among BCI researchers and designers about social responsibilities they may have for their technology’s adoption and use. In future work, we plan to use these fictions as research probes to understand how technical researchers envision BCI adoptions and their social responsibilities, building on some of our prior projects. And for design researchers, we show that using a fictional platform in design fiction can help raise important social issues about technology adoption and use from multiple perspectives beyond those of end-users, and help surface issues that might arise from unintended or unexpected adoption and use. Using design fiction to interrogate sociotechnical issues present in the everyday can better help us think about the futures we desire.


Crossposted with Richmond’s blog, The Bytegeist

 

BioSENSE researchers will be presenting two papers at the 2018 ACM Designing Interactive Systems (DIS) Conference next week.

A paper by Richmond Wong, Nick Merrill, and John Chuang entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption uses design fiction methods to think about the potential social implications of brain-computer interfaces as they are integrated into existing technical infrastructures, and systems of work and labor. [Read a pre-print version here]

ABSTRACT: In this paper, we use design fiction to explore the social implications for adoption of brain-computer interfaces (BCI). We argue that existing speculations about BCIs are incomplete: they discuss fears about radical changes in types of control, at the expense of discussing more traditional types of power that emerge in everyday experience, particularly via labor. We present a design fiction in which a BCI technology creates a new type of menial labor, using workers’ unconscious reactions to assist algorithms in performing a sorting task. We describe how such a scenario could unfold through multiple sites of interaction: the design of an API, a programmer’s question on StackOverflow, an internal memo from a dating company, and a set of forum posts about laborers’ experience using the designed system. Through these fictions, we deepen and expand conversations around what kinds of (everyday) futures BCIs could create.

A second paper co-authored by BioSENSE researchers Nick Merrill and Richmond Wong, with collaborators James Pierce, Sarah Fox, and Carl DiSalvo, entitled An Interface without A User: An Exploratory Design Study of Online Privacy Policies and Digital Legalese investigates long textual privacy policies as a type of interface in order to probe people’s perceptions of privacy policies, and to investigate the potential for other forms of notice. [Download a pre-print version here]

ABSTRACT: Privacy policies are critical to understanding one’s rights on online platforms, yet few users read them. In this pictorial, we approach this as a systemic issue that is in part a failure of interaction design. We provided a variety of people with printed packets of privacy policies, aiming to tease out this form’s capabilities and limitations as a design interface, to understand people’s perception and uses, and to critically imagine pragmatic revisions and creative alternatives to existing privacy policies.

Authors

Noura Howell, Laura Devendorf, Tomás Alfonso Vega Gálvez, Rundong Tian, Kimiko Ryokai

Abstract

Biosensing displays, increasingly enrolled in emotional reflection, promise authoritative insight by presenting users’ emotions as discrete categories. Rather than machines interpreting emotions, we sought to explore an alternative with emotional biosensing displays in which users formed their own interpretations and felt comfortable critiquing the display. So, we designed, implemented, and deployed, as a technology probe, an emotional biosensory display: Ripple is a shirt whose pattern changes color responding to the wearer’s skin conductance, which is associated with excitement. 17 participants wore Ripple over 2 days of daily life. While some participants appreciated the ‘physical connection’ Ripple provided between body and emotion, for others Ripple fostered insecurities about ‘how much’ feeling they had. Despite our design intentions, we found participants rarely questioned the display’s relation to their feelings. Using biopolitics to speculate on Ripple’s surprising authority, we highlight ethical stakes of biosensory representations for sense of self and ways of feeling.

 

Nick Merrill, Richmond Wong, Noura Howell, Luke Stark, Lucian Leahu, and Dawn Nafus hosted a workshop on Biosensing in Everyday Life at the ACM Designing Interactive Systems conference (DIS 2017).

From the workshop website:

Biosensing, by which we mean sensors measuring human physiological and behavioral data, is becoming pervasive throughout daily life: beyond wristwatches that measure heartrate and skin conductance, to clothing, furniture, cars, personal robots, ingestibles, virtual reality headsets, as well as visual and wireless sensors that can collect bodily data at a distance.

Biosensing brings with it new challenges (and opportunities) for the design of interactive systems, such as supporting social and emotional interpretations of biosensory data; implications for how people construct themselves and are constructed through data; and what privacy means in such contexts.

This workshop seeks to engage researchers in exploring these themes in light of the emerging ubiquity of biosensors in everyday life. We welcome participants whose work covers a variety of different topics, including but not limited to:

  • Self-tracking practices
  • Privacy and surveillance
  • Critical and speculative design
  • Infrastructure studies
  • Affective systems
  • Design for reflection

We welcome work from a variety of methodologies, such as design research, anthropology, STS, ethnographic studies, user studies, art practice, systems building, and critical or speculative design. Submissions may take the form of essays, arguments, empirical work, pictorials, video, portfolios or artifacts.

Workshop proposal on ACM Digital Library

Full text workshop proposal

 

This post is a version of a talk given at the 2017 ACM Designing Interactive Systems (DIS) Conference on a paper by Richmond Wong, Ellen Van Wyk, and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies, in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy. This blog post is also cross-posted on Richmond’s blog, The Bytegeist. Find out more on our project page, or download the paper: [PDF link] [ACM link]

Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance, new wireless smart-home security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.

Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior before the crime itself, like in Minority Report, using thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. Supposedly, it would “avoid all privacy issues.” But it’s pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.

 


Image from publicintelligence.net. Note the middle bullet point in the middle column – “avoids all privacy issues.”

A lot of these types of products or ideas are proposed or publicly released – but somehow it seems like privacy hasn’t been adequately thought through beforehand. However, parallel to this, we see works of science fiction that often imagine social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for this work:

perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.

Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.
