Archive for October, 2018

Halloween Cuckoo Clock

October 30th, 2018

Team: Azin Mirzaagha, Patrick Barin, Yunjie Yao

 

Description

Create a “cuckoo clock” mechanism. A cuckoo clock typically has an automaton of a bird that appears through a small trap door while the clock is striking.

 

Components Used

  • Plywood
  • Wood beams
  • Wood pieces
  • Rubber bands
  • Small hinge
  • Wood squares
  • Tape
  • Glue
  • Scissors

 

Halloween Cuckoo Clock

 

We created a Halloween-themed cuckoo clock because this Wednesday is Halloween!!

We made a house, and when the door of the house is opened, a pumpkin pops out with the message “trick or treat”!

 

Is the GDPR’s Bark Bigger than its Bite?
by Zach Day on 10/21/2018

The landmark EU regulation, formally called the General Data Protection Regulation or GDPR, took effect on May 25, 2018. Among other protections, GDPR grants “data subjects” a bundle of new rights and places increased obligations on companies that collect and use their data. Firms were given two years’ notice to implement changes that would bring them into compliance with GDPR by May 2018.


Image Credit: https://www.itrw.net/2018/03/22/what-you-need-to-know-about-general-data-protection-regulation-gdpr/

I don’t think I’m reaching too far to make this claim: some for-profit enterprises won’t do the right thing just because it’s the right thing, especially when the right thing is costly. Do the EU member countries’ respective Data Protection Authorities, also called DPAs, have enforcement tools that are powerful enough to motivate firms to invest in the systems and processes required for compliance?

Let’s compare two primary enforcement tools/consequences, monetary fines and bad press coverage.

Monetary Fines

When the UK Information Commissioner’s Office released its findings on Facebook’s role in the Cambridge Analytica scandal, the fine was capped at 500,000 pounds, or about $661,000. This is because Facebook’s transgressions occurred before the GDPR took effect and were therefore subject to the UK Data Protection Act of 1998, the UK’s GDPR precursor, which specifies a maximum administrative fine of 500,000 pounds. How painful do you think a sub-million dollar fine is for a company that generated $40B of revenue in 2017?

GDPR vastly increases the maximum monetary fine to €20M or 4% of the company’s global turnover, whichever is greater. For Facebook, this would have amounted to a fine of roughly $1.6B. That’s more like it.
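As a back-of-the-envelope illustration of that cap (a minimal sketch using the post’s $40B revenue figure and ignoring currency conversion):

```python
def gdpr_max_fine(global_turnover, flat_cap=20_000_000, pct_cap=0.04):
    """GDPR administrative fine ceiling: the greater of a flat 20M
    or 4% of worldwide annual turnover (currency handling omitted)."""
    return max(flat_cap, pct_cap * global_turnover)

# Facebook's 2017 revenue was roughly $40B, so the ceiling is ~$1.6B.
print(gdpr_max_fine(40_000_000_000))  # 1600000000.0
```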

But how effectively can EU countries enforce the GDPR? GDPR enforcement occurs at the national level, with each member country possessing its own Data Protection Authority. Each nation’s DPA has full enforcement discretion. Because of this, there will inevitably be variation in enforcement trends from country to country. Countries like Germany, with a strong cultural value of protecting individual privacy, may enforce the GDPR with far more gusto than a country like Malta or Cyprus.

Monetary fines are not going to be the go-to tool for every enforcement case brought under the GDPR. DPAs have vast investigative powers, such as carrying out audits, obtaining access to stored personal data, and accessing the facilities of the data controller or processor, and they can issue warnings, reprimands, orders, and bans on processing. It’s likely that these methods will be used with much more frequency. The first few cases will be anomalies, though, since (a) media outlets are chomping at the bit to report on the first enforcement actions taken under the GDPR and (b) DPAs will be trying to send a message.

PR Damage

What do you think stung Facebook more: a $661,000 fine, or the front page of every international media outlet running the story for hundreds of millions of readers to see (imagine for how many this was the last straw, causing them to deactivate their Facebook accounts)? I would argue that the most powerful tool in the GDPR regulators’ toolbox is the bad press associated with a GDPR violation brought against a company, especially in the early years of the regulation when the topic is still fraught.

Mark Zuckerberg testifying before a joint hearing of the Senate Judiciary and Senate Commerce Committees, April 10, 2018. Image Credit: https://variety.com/2018/digital/news/zuckerberg-congress-testimony-1202749461/

A report published in July by TrustArc outlining estimated GDPR compliance rates across the US, UK, and EU noted that 57% of firms are motivated to comply with GDPR by ‘customer satisfaction’, whereas only 39% are motivated by fines. Of course a small business with 100 employees in a suburb of London is chiefly concerned with a potential €20,000,000 fine; it would simply be out of business. On the other hand, large Silicon Valley-based tech firms, with armies of experienced attorneys (Facebook’s attorneys have plenty of litigation experience in this area by now), have much more to lose from more bad press than from a fine of any amount allowed under GDPR.

Path Forward

Firms are going to pursue any path that leads to maximum revenue growth and profitability, even if it means operating in ethical/legal grey areas. If GDPR regulators plan to effectively motivate compliance from companies, they need to focus on the most sensitive pressure points. For some, it’s the threat of monetary penalty. For the tech behemoths, it’s the threat of another negative front-page headline. Regulators will be at a strategic disadvantage if they don’t acknowledge this fact and master their PR strategies.

Ethical Issues of the Healthcare Internet of Things
By Osmar Coronel  | October 21, 2018


Tracking vital signs on the fly

Very likely, you are already using an Internet of Things (IoT) product for your healthcare.


Our new connected world

IoT devices are small computing objects that constantly collect information and send it to the cloud, or that automatically turn something on or off. This post is mainly focused on the ethical risks of IoT devices applied to improving your health.

IoT devices are predicted to expand in the healthcare industry due to the many benefits they provide. However, our FIPP (Fair Information Practice Principles) framework might not be able to protect the consumer against all the new ethical risks associated with upcoming healthcare IoT applications.

IoT devices have several applications in healthcare. For instance, insulin pumps and blood-pressure cuffs can connect to a mobile app that tracks and monitors readings over time. The power of technology allows people to take control of their health, and with IoT devices people can be more engaged with it: a patient with a connected insulin pump can keep better control of their blood glucose levels, which gives them more control over their diabetes. Healthcare IoT devices can monitor and collect information such as heart rate and skin temperature. The data captured from the consumer can be transmitted, stored, and analyzed, which creates opportunities for research.

The use of IoT devices in healthcare is expanding. According to a MarketWatch article, the healthcare IoT market is expected to be worth $158 billion by 2022. The Consumer Electronics Show (CES) in 2018 showcased several companies’ IoT products created to diagnose, monitor, and treat illnesses.

Under the lens of the Federal Trade Commission (“FTC”), the FIPP focus is on notice, access, accuracy, data minimization, security, and accountability. The most relevant recommendations for IoT are security, data minimization, notice, and choice.

Following the FIPP security recommendation, companies should implement “security by design” from the very beginning. They should also train their employees and retain service providers that are able to enforce security in their services.


New Healthcare

Another risk of healthcare IoT devices is that they collect a large amount of consumer data over long periods. The FIPP data minimization recommendation proposes that companies limit the data they collect to what is needed, and keep it only for a limited time. Companies should develop and apply best practices and policies that balance business needs against reasonable limits on the collection and retention of consumer data. Security and data minimization have more explicit initiatives to help minimize the ethical risks of IoT devices.

On the other hand, notice and choice could be a challenge. In general, there is a high risk that IoT companies do not provide notice or choice to the customer. Providing notice or choice is challenging since IoT devices are used in everyday life and typically lack a user interface. Furthermore, some people think that the benefits of IoT devices outweigh the cost of not giving the consumer notice and choice.

It is challenging to provide a choice when there is no user interface. However, according to the FIPP recommendations, there are still suitable alternatives, such as video tutorials and QR codes on the devices. Also, in many cases the data use might be within consumers’ expectations, which means that not every data collection needs explicit consent. Companies should offer opt-in choices at the point of sale, when the consumer is acquiring the device, in easy-to-understand language.

New technological advances in healthcare IoT devices offer a large number of benefits, and they will expand considerably in the healthcare sector. Nonetheless, they will require careful implementation. The expansion of healthcare IoT will come with a surge of new ethical problems and conflicts.

Unknown Knowns

October 24th, 2018

Unknown Knowns
by Anonymous on 10/21/2018


Image Credit: https://www.azquotes.com/quote/254214
Donald Rumsfeld during Department of Defense News Briefing, archive.defense.gov. February 12, 2002.

The taxonomy of knowledge laid out by Rumsfeld in his much-quoted news briefing conspicuously omits a fourth category: unknown knowns. In his critique of Rumsfeld’s analysis, the philosopher Slavoj Žižek defines the unknown knowns as “the disavowed beliefs and suppositions we are not even aware of adhering to ourselves, but which nonetheless determine our acts and feelings.” While this may seem like the realm of psychoanalysis, it’s a term that could also be applied to two of the most important topics in machine learning today: bias and interpretability.

The battle against bias, especially illegal biases that discriminate against protected classes, is a strong focus for both academia and industry. Simply testing an algorithm’s outputs for statistical differences across categories of people can reveal things about the decision-making process that were previously unknown, flipping things from the “unknown known” state to “known known.” More advanced interpretability tools, like LIME, are able to reveal even more subtle relationships between inputs and outputs.

While swaths of “unknown knowns” are being converted to “known knowns” with new techniques and attention, there’s still a huge amount of “unknown knowns” that we will miss forever. Explicitly called-out protected classes are becoming easier to measure, but it’s rare to check all possible intersections of protected classes. For example, there may be no measurable bias in some task when comparing across gender or race separately, but there may be bias when looking at the combinations. The fundamental nature of intersections is that their populations become smaller as more dimensions are considered, so the statistical tests become less powerful and it’s harder for automated tools to identify bias with certainty.
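As a concrete illustration of what such a check might look like, here is a minimal sketch (column names like “gender”, “race”, and “approved” are assumed, not taken from any real system) that cross-tabulates a model’s decisions against intersections of protected attributes and runs a chi-square test. Thinly populated intersection cells are exactly what weakens the test:

```python
import pandas as pd
from scipy.stats import chi2_contingency

def intersectional_bias_check(df, protected_cols, outcome_col):
    """Cross-tabulate the outcome against every combination of the protected
    attributes and test for independence. Small intersection cells mean low
    statistical power, so a non-significant result is weak evidence of fairness."""
    groups = df[protected_cols].astype(str).agg(" & ".join, axis=1)
    table = pd.crosstab(groups, df[outcome_col])
    chi2, p, dof, _ = chi2_contingency(table)
    return table, p

# Usage with hypothetical columns:
# table, p = intersectional_bias_check(preds, ["gender", "race"], "approved")
# print(table); print(f"p-value: {p:.3f}")
```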


Image Credit: https://www.ywboston.org/2017/03/what-is-intersectionality-and-what-does-it-have-to-do-with-me/

There are also many sub-classes that we don’t even know to look for bias against and have to rely on chance to discover. For example, in 2012 Target was called out for predicting pregnancies based on shopping patterns. Their marketing analytics team had a hypothesis that they could target pregnant women and made the explicit choice to single out this population, but with modern unsupervised learning techniques it could just as easily have been an automatically deployed campaign where no human had ever seen the description of the target audience.

“Pregnant women” as a category is easy to describe, and the concerns about such targeting made it easy to stir up controversy and change corporate behaviour, but more niche groups that algorithms may be biased against may never be noticed. It’s also troubling that there may be classes discovered by unsupervised learning algorithms that have no obvious description yet, but would be controversial if given a name.

So what can be done? It may seem like a contradiction to try and address unknown knowns, given that they’re unknown, but new interpretability tools are changing what can be known. Practitioners could also start dedicating more of their model validation time to exploring the full set of combinations of protected classes, rooting out the subtle biases that might be missed with separate analysis of each category. A less technical but more ambitious solution is for organizations and practitioners to start sharing the biases they’ve discovered in their models and to contribute to some sort of central repository that others can learn from.

GDPR Will Transform Insurance Industry’s Business Model
By Amit Tyagi | October 21, 2018

The European Union wide General Data Protection Regulation, or GDPR, came into force on May 25, 2018, with significant penalties for non-compliance. In one sweep, GDPR harmonizes data protection rules across the EU and gives greater rights to individuals over how their data is used. GDPR will radically reshape how companies can collect, use and store personal information, giving people the right to know how their data are used, and to decide whether it is shared or deleted. Companies face fines of up to 4 per cent of global turnover or €20m, whichever is greater.

To comply with GDPR, companies across various industries are strengthening their data usage and protection policy and procedures, revamping old IT systems to ensure that they have the functionality to comply with GDPR requirements, and reaching out to customers to get required consents.

However, GDPR will also require a fundamental rethink of business models for some industries, especially those that rely heavily on personal data to make business and pricing decisions. A case in point is the insurance industry. Insurers manage and underwrite risks. Collecting, storing, processing, and analyzing data is central to their business model, and the data insurers collect go beyond personal information: they collect sensitive information such as health records, genetic history of illnesses, criminal records, accident-related information, and much more.

GDPR is going to affect insurance companies in many ways. Start with pricing. Setting the right price for underwriting risks relies heavily on data. With GDPR’s data protection and usage restrictions, insurers will have to revisit their pricing models. This may have an inflationary effect on insurance prices: not a good thing for consumers. It will be further compounded by ‘data minimization’, a core GDPR principle that limits the amount of data companies can lawfully collect.

Insurance companies typically store their data for long periods, which aids them in pricing analytics and customer segmentation. With the right to erasure, customers can ask insurers to erase their personal data and claims history. These requests might come from customers with an unfavorable claims history, leading to adverse selection due to information asymmetry.

Insurance fraud is another area that will be impacted by GDPR. Insurance companies protect themselves from fraudulent claims by analyzing myriad data points, including criminal convictions. With limits on the types of data they can lawfully use, fraud may well spike.

Insurance companies will also have to rethink their internal processes and IT systems, which were built for a pre-GDPR era. Most decisions in the insurance industry are automated, including, inter alia, whether to issue a policy, how much premium to charge, and whether to process a claim fully or partially. Now, with GDPR, customers can lawfully request human intervention in decision making.

GDPR gives customers the right to receive their personal data held by an insurer, or to have it transmitted to another insurer in a structured, commonly used and machine-readable format. This will be a challenge, as insurers will have to maintain interoperable data formats across disparate legacy IT systems. Further, this has to be done free of charge. This will surely lead to lower profitability as competition among insurers increases.

GDPR mandates that data be retained only as long as necessary for the purpose for which it was collected, after which it needs to be deleted or anonymized; if stored for a longer duration, the data should be pseudonymized. This will require significant system changes, which will be a huge challenge for insurance companies as they rely on disparate systems and data sources, all of which will have to change to meet GDPR requirements.

Though insurers may be acutely impacted by GDPR, their path to compliance should be a disciplined approach: revisiting systems and processes to assess readiness for this regulation and investing in filling gaps. Some changes may be big, such as data retention and privacy by design, while some may be more straightforward, such as providing privacy notices. In all cases, effective change management is the key.

Making AI “Explainable”: Easier Said than… Explained
By Julia Buffinton | October 21, 2018

Technological advances in machine learning and artificial intelligence (AI) have opened up many applications for these computational techniques in a range of industries. AI is now used in facial recognition for TSA pre-check, reviewing resumes for large companies, and determining criminal sentencing. In all of these examples, however, these algorithms have received attention for being biased. Biased predictions can have grave consequences, and determining how biases end up in the algorithm is key to preventing them.

Understanding how algorithms reach their conclusions is crucial to their adoption in industries such as insurance, medicine, finance, security, legal, and military. Unfortunately, the majority of the population is not trained to understand these models, viewing them as opaque and non-intuitive. This is challenging when accounting for the ethical considerations that surround these algorithms – it’s difficult to understand their bias if we don’t understand how they work in general. Thus, seeking AI solutions that are explainable is key to making sure that end users of these approaches will “understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”

How can we do that?

Developing AI and ML systems is resource-intensive, and being thorough in managing ethical implications and safety adds to that. The federal government has recognized the importance not only of AI but also of ethical AI, and has increased its attention and budget for both.

In 2016, the Obama administration formed the new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence to coordinate federal activity relating to AI. Additionally, the Subcommittee on Networking and Information Technology Research and Development (NITRD) created a National Artificial Intelligence Research and Development Strategic Plan to recommend roadmaps for AI research and development investments by the federal government.

This report identifies seven priorities:

  1. Make long-term investments in AI research
  2. Develop effective methods for human-AI collaboration
  3. Understand and address the ethical, legal, and societal implications of AI
  4. Ensure the safety and security of AI systems
  5. Develop shared public datasets and environments for AI training and testing
  6. Measure and evaluate AI technologies through standards and benchmarks
  7. Better understand the national AI R&D workforce needs

Three of the seven priorities focus on minimizing the negative impacts of AI. The plan indicates that “researchers must strive to develop algorithms and architectures that are verifiably consistent with, or conform to, existing laws, social norms and ethics,” and to achieve this, they must “develop systems that are transparent, and intrinsically capable of explaining the reasons for their results to users.”

Is this actually happening?

Even though topics related to the security, ethics, and policy of AI comprise almost half of the federal government’s funding priorities for AI, this has not translated directly into funding levels for programs. A brief survey of the budget for the Defense Advanced Research Projects Agency (DARPA) shows an overall increase in funding for 18 existing programs that focus on advancing basic and applied AI research, almost doubling each year. However, only one of these programs fits the ethical and security priorities.


Funding Levels for DARPA AI Programs

The yellow bar on the chart represents the Explainable AI program, which aims to generate machine learning techniques that produce more explainable yet still accurate models and enable human users to understand, trust, and manage these models. Target results from this program include a library of “machine learning and human-computer interface software modules that could be used to develop future explainable AI systems” that would be available for “refinement and transition into defense or commercial applications.” While funding for Explainable AI is increasing, it is not increasing at a rate proportional to the overall spending increases for DARPA AI programs.

What’s the big deal?

This issue will become more prevalent as the national investment in AI grows. Recently, predictions have been made that China will close the AI gap by the end of this year. As US organizations in industry and academia strive to compete with their international counterparts, careful consideration must be given not only to improving technical capabilities but also developing an ethical framework to evaluate these approaches. This not only affects US industry and economy, but it has big consequences for national security. Reps. Will Hurd (TX) and Robin Kelly (IL) argue that, “The loss of American leadership in AI could also pose a risk to ensuring any potential use of AI in weapons systems by nation-states comports with international humanitarian laws. In general, authoritarian regimes like Russia and China have not been focused on the ethical implications of AI in warfare.” These AI tools give us great power, but with great power comes great responsibility, and we have a responsibility to ensure that the systems we build are fair and ethical.

What do GDPR and the #MeToo Movement Have in Common?
By Asher Sered | October 21, 2018

At first glance it might be hard to see what the #MeToo movement has in common with the General Data Protection Regulation (GDPR), the monumental new European regulation that governs the collection and analysis of commercially collected data. One is a 261-page document composed by a regulatory body while the other is a grassroots movement largely facilitated by social media. However, both are attempts to protect against commonplace injustices that are just now starting to be recognized for what they are. And, fascinatingly, both have brought the issue of consent to the forefront of public consciousness. In the rest of this post, I examine the issue of consent from the perspectives of sexual assault prevention and data privacy and lay out what I believe to be a major issue with both consent frameworks.

Consent and Coercion

Feminists and advocates who work on confronting sexual violence have pointed out several issues with the consent framework, including the fact that consent is treated as a static and binary state, rather than an evolving ‘spectrum of conflicting desires’[1]. For our purposes, I focus on the issue of ‘freely given’ consent and coercion. Most legal definitions require that, for an agreement to count as genuine consent, the affirmation must be given freely and without undue outside influence. However, drawing a line between permissible attempts to achieve a desired outcome and unacceptable coercion can be quite difficult in theory and in practice.

Consider a young man out on a date where both parties seem to be hitting it off. He asks his date if she is interested in having sex with him and she says ‘yes’. Surely this counts as consent, and we would be quick to laud the young man for doing the right thing. Now, how would we regard the situation if the date was going poorly and the man shrugged off repeated ‘nos’ and continued asking for consent before his date finally acquiesced? What if his date feared retribution if she were to continue saying ‘no’? What if the man was a celebrity? A police officer? The woman’s supervisor? At some point a ‘yes’ stops being consent and starts being a coerced response. But where that line is drawn is both practically and conceptually difficult to disentangle.

Consent Under GDPR

The authors of GDPR are aware that consent can also be a moving target in the realm of data privacy, and have gone to some lengths to try to articulate under what conditions an affirmation qualifies as consent. The Regulation spends many pages laying out the specifics of what is required from a business trying to procure consent from a customer, and attempts to build in consumer protections that shield individuals from unfair coercion. GDPR emphasizes 8 primary principles of consent, including that consent be easy to withdraw, free of coercion, and given with no imbalance in the relationship.

How Common is Coercion?

Just because the line between consent and coercion is difficult to draw doesn’t necessarily mean that the concept of consent isn’t ethically and legally sound. After all, our legal system rests on ideas such as intention and premeditation that are similarly difficult to disentangle. Fair point. But the question remains, in our society how often is consent actually coerced?

Michal Buchhandler-Raphael, a professor of law at Washington and Lee University, argues that problems with legal frameworks in which the definition of rape is built around non-consensual sex ‘[are] most noticeable in the context of sexual abuse of power stemming from professional and institutional relationships.’[2] She cites numerous cases in which a supervisor or someone in an otherwise privileged position managed to extract consent from a subordinate, and was therefore unpunished by the legal system. Since 70% of rapes are committed by someone known to the victim, and presumably an even larger percentage of sexual interactions take place between parties who know each other, we can expect that some amount of coercion occurs in numerous day-to-day sexual interactions, especially given that we continue to live in a patriarchal society where men are much more likely than women to be in positions of power[3].

This observation about power imbalances in sexual interactions neatly parallels a major concern with consent under GDPR. While GDPR requires that data subjects have a ‘genuine or free choice’ about whether to give consent, it fails to adequately account for the fact that there is always a power differential between a major corporation and a data subject. Perhaps I could decide to live without email, a smartphone, social networks or search engines, but giving up those technologies would have a major impact on my social, political and economic life. It matters much more to me that I have access to a Facebook account than it does to Facebook that they have access to my data. If I opt out, they can sell ads to their other 2 billion customers.

Conclusion

I should be clear that I do not intend to suggest that companies stop offering Terms and Conditions to potential data subjects or that prospective sexual partners stop seeking affirmative consent. But we do need to realize that consent is only part of the equation for healthy sexual relationships and just data practices. The next step is to think about what a world would look like where people are not constantly pressured to give things away, but instead are empowered to pursue their own ends.

Notes

[1] See, https://economictimes.indiatimes.com/news/politics-and-nation/thoughts-on-metoo-why-cant-men-understand-the-concept-of-consent-a-flimmaker-explains/articleshow/66198444.cms?from=mdr&fbclid=IwAR0O8fzj4cQ4d68nwqWciQPSLrepZIV00RJKAnUmsC0id6JBqnNb4CR69WQ for a fascinating take on the topic

[2] https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1014&context=mjgl

[3] Of course, coercion can be used by people of all genders to convince potential partners to agree to have sex.

This post summarizes a research paper, Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks, co-authored with Deirdre Mulligan, Ellen Van Wyk, John Chuang, and James Pierce. The paper will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on Monday November 5th (in the afternoon Privacy in Social Media session). Full paper available here.

Recent wearable and sensing devices, such as Google Glass, Strava, and internet-connected toys, have raised questions about ways in which privacy and other social values might be implicated by their development, use, and adoption. At the same time, legal, policy, and technical advocates for “privacy by design” have suggested that privacy should be embedded into all aspects of the design process, rather than being addressed after a product is released, or treated as just a legal issue. By advocating that privacy be addressed through technical design processes, the ability of technology professionals to surface, discuss, and address privacy and other social values becomes vital.

Companies and technologists already use a range of tools and practices to help address privacy, including privacy engineering practices or making privacy policies more readable and usable. But many existing privacy mitigation tools are either deductive or assume that privacy problems are already known and well-defined in advance. However, we often don’t have privacy concerns well-conceptualized in advance when creating systems. Our research shows that design approaches (drawing on a set of techniques called speculative design and design fiction) can help better explore, define, and perhaps even anticipate what we mean by “privacy” in a given situation. Rather than trying to look at a single, abstract, universal definition of privacy, these methods help us think about privacy as relations among people, technologies, and institutions in different types of contexts and situations.

Creating Design Workbooks

We created a set of design workbooks — collections of design proposals or conceptual designs, drawn together to allow designers to investigate, explore, reflect on, and expand a design space. We drew on speculative design practices: in brief, our goal was to create a set of slightly provocative conceptual designs to help engage people in reflections or discussions about privacy (rather than propose specific solutions to problems posed by privacy).

A set of sketches that comprise the design workbook

Inspired by science fiction, technology research, and trends from the technology industry, we created a couple dozen fictional products, interfaces, and webpages of biosensing technologies, or technologies that sense people. These included smart-camera-enabled neighborhood watch systems, advanced surveillance systems, implantable tracking devices, and non-contact remote sensors that detect people’s heart rates. In earlier design work, we reflected on how putting the same technologies into different types of situations, scenarios, and social contexts would vary the types of privacy concerns that emerged (such as the different concerns that would emerge if advanced miniature cameras were used by the police, by political advocates, or by the general public). However, we wanted to see how non-researchers might react to and discuss the conceptual designs.

How Did Technologists-In-Training View the Designs?

Through a series of interviews, we shared our workbook of designs with masters students in an information technology program who were training to go into the tech industry. We found several ways in which they brought up privacy-related issues while interacting with the workbooks, and highlight three of those ways here.

fictional truwork design

TruWork — A product webpage for a fictional system that uses an implanted chip allowing employers to keep track of employees’ location, activities, and health, 24/7.

First, our interviewees discussed privacy by taking on multiple user subject positions in relation to the designs. For instance, one participant looked at the fictional TruWork workplace implant design by imagining herself in the positions of an employer using the system and an employee using the system, noting how the product’s claim of creating a “happier, more efficient workplace,” was a value proposition aimed at the employer rather than the employee. While the system promises to tell employers whether or not their employees are lying about why they need a sick day, the participant noted that there might be many reasons why an employee might need to take a sick day, and those reasons should be private from their employer. These reflections are valuable, as prior work has documented how considering the viewpoints of direct and indirect stakeholders is important for considering social values in design practices.

fictional couple track design

CoupleTrack — an advertising graphic for a fictional system that uses an implanted chip that people in a relationship wear in order to keep track of each other’s location and activities.

A second way privacy reflections emerged was when participants discussed the designs in relation to their professional technical practices. One participant compared the fictional CoupleTrack implant to a wearable device for couples that he was building, in order to discuss different ways in which consent to data collection can be obtained and revoked. CoupleTrack’s embedded nature makes it much more difficult to revoke consent, while a wearable device can be more easily removed. This is useful because we’re looking for ways workbooks of speculative designs can help technologists discuss privacy in ways that they can relate back to their own technical practices.

fictional airport tracking system

Airport Tracking System — a sketch of an interface for a fictional system that automatically detects and flags “suspicious people” by color-coding people in surveillance camera footage.

A third theme that we found was that participants discussed and compared multiple ways in which a design could be configured or implemented. Our designs tend to describe products’ functions but do not specify technical implementation details, allowing participants to imagine multiple implementations. For example, a participant looking at the fictional automatic airport tracking and flagging system discussed the privacy implication of two possible implementations: one where the system only identifies and flags people with a prior criminal history (which might create extra burdens for people who have already served their time for a crime and have been released from prison); and one where the system uses behavioral predictors to try to identify “suspicious” behavior (which might go against a notion of “innocent until proven guilty”). The designs were useful at provoking conversations about the privacy and values implications of different design decisions.

Thinking About Privacy and Social Values Implications of Technologies

This work provides a case study showing how design workbooks and speculative design can be useful for thinking about the social values implications of technology, particularly privacy. In the time since we’ve made these designs, some (sometimes eerily) similar technologies have been developed or released, such as workers at a Swedish company embedding RFID chips in their hands, or Logitech’s Circle Camera.

But our design work isn’t meant to predict the future. Instead, what we tried to do is take some technologies that are emerging or on the near horizon, and think seriously about ways in which they might get adopted, used and misused, or interact with existing social systems — such as the workplace, government surveillance, or school systems. How might privacy and other values be at stake in those contexts and situations? We aim for these designs to help shed light on the space of possibilities, in an effort to help technologists make more socially informed design decisions in the present.

We find it compelling that our design workbooks helped technologists-in-training discuss emerging technologies in relation to everyday, situated contexts. These workbooks don’t depict far-off speculative science fiction with flying cars and spaceships. Rather, they imagine future uses of technologies by having someone look at a product website, an amazon.com page, or an interface, and think about the real and diverse ways in which people might experience those technology products. Using these techniques, which focus on the potential adoptions and uses of emerging technologies in everyday contexts, helps raise issues that might not be immediately obvious if we only think about positive social implications of technologies, and it also helps surface issues that we might not see if we only think about social implications in terms of “worst case scenarios” or dystopias.

Paper Citation:

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 111 (December 2017), 26 pages. DOI: https://doi.org/10.1145/3134746


This post is crossposted with the ACM CSCW Blog

Workplace monitoring

October 17th, 2018

Workplace monitoring
by Anonymous on September 30, 2018

We never intended to build a pervasive workplace monitoring system – we just wanted to replace a clunky system of shared spreadsheets and email.

Our new system was intended to track customer orders as they moved through the factory. Workers finding a problem in the production process could quickly check order history and take corrective action. As a bonus, the system would also record newly-mandatory safety-related records. And it also appealed to the notion of “Democratization of Data,” giving workers direct access to customer orders. No more emails or phone-tag with production planners.

It goes live

The system was well received, and we started collecting a lot of data: a log of every action performed on each sales order, with user IDs and timestamps. Workers could see all the log entries for each order they processed. And the log entries became invisible to workers once an order was closed.

Invisible, but not deleted.

Two years later, during a time of cost-cutting, it came to the attention of management that the logs could be consolidated and sorted by *any* field. Like this:
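(A minimal, hypothetical sketch of that kind of consolidation, with made-up column names; this is not the actual system’s code.)

```python
import pandas as pd

# The order-history log: one row per action, with worker IDs and order IDs.
log = pd.DataFrame({
    "order_id":  [101, 101, 102, 103, 103, 103],
    "worker_id": ["W1", "W2", "W1", "W3", "W3", "W3"],
    "action":    ["cut", "weld", "cut", "inspect", "pack", "ship"],
})

# One line turns it into a per-worker activity report: raw event counts,
# with no sense of how long each task actually took.
report = log.groupby("worker_id")["action"].count().sort_values(ascending=False)
print(report)
```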

A new report was soon generated; logs sorted by worker ID. It didn’t seem like such a major request. After all, the data was already there. No notice was given to workers about the new report, or its potential use as a worker performance evaluation tool.

Re-purposed data

The personally identifiable information was re-purposed without notice or consent. The privacy issue may be intangible now to the workers, but could one day become very tangible as a factor in pay or layoff decisions. There is also potential for misinterpretation of the data. A worker doing many small tasks could be seen as doing far more than a worker doing a few time-consuming tasks.

Protection for workers’ information

California employers may monitor workers’ computer usage. The California Consumer Privacy Act of 2018 covers consumers, not workers.

However, the European Union’s General Data Protection Regulation (GDPR) addresses this directly, and some related portions of the system operate in the EU.

GDPR’s scope is broad, covering personally identifiable information in all walks of life (e.g., as a worker, as a consumer, as a citizen). Recital 26 makes clear: “The principles of data protection should apply to any information concerning an identified or identifiable natural person.” Other recitals cover consent, re-purposing, fairness/transparency, and erasability (Recitals 32, 60, 65, and 66).

Most particularly, Article 88 requires that the collection and use of personally identifiable information in the workplace be subject to particularly high standards of transparency.

Failings

It’s easy in hindsight to find multiple points at which this might have been avoided. Mulligan’s and Solove’s frameworks suggest looking at “actors” and causes.

  • Software Developer’s action (harm during data collection): There could have been a login at the level of a processing station ID, rather than a worker’s personal ID.
  • Software Developer’s action (and timescale considerations, yielding harm during data processing): The developer could have completely deleted the worker IDs once the order closed (see the sketch after this list).
  • Software Developer’s action (harm during data processing and dissemination: increased Accessibility, Distortion): The developer could have written the report to show that simple “event counting” wasn’t a reliable way of measuring worker contribution.
  • Management’s action (harm during data gathering and processing): the secondary use of the data resulted in intrusive surveillance. Businesses have responsibility (particularly under GDPR) to be transparent with respect to workplace data. Due concern for the control over personal information was not shown.
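As a rough, hypothetical sketch of the second point above (the log structure and function below are invented for illustration, not taken from the real system):

```python
import hashlib

def close_order(log_entries, order_id, salt="per-deployment-secret"):
    """When an order closes, replace its worker IDs with a salted hash
    (or drop them entirely) so the audit trail survives but the log can no
    longer double as a per-worker performance report."""
    for entry in log_entries:
        if entry["order_id"] == order_id:
            entry["worker_id"] = hashlib.sha256(
                (salt + entry["worker_id"]).encode()
            ).hexdigest()[:12]
    return log_entries

log = [
    {"order_id": 101, "worker_id": "W1", "action": "cut"},
    {"order_id": 102, "worker_id": "W2", "action": "weld"},
]
print(close_order(log, 101))  # order 101's worker ID is now pseudonymized
```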

Prevention

One way forward, when working on systems which evolve over time: Consider Mulligan’s contrast concept dimension of privacy analysis, applied with Solove’s meta-harm categories. Over the full evolutionary life of a software system, we can ask: “What is private and what is not?” If the actors – developer, manager, and even workers – had asked, as each new feature was requested: “What is being newly exposed, and what is made private,” they might not have drifted into the current state. It’s a question which could readily be built into formal software development processes.

But in an era of “Data Democratization,” when data mining may be more broadly available within organizations, such checklists might be necessary but not sufficient. Organizations will likely need broad-based internal training on the protection of personal information.

May we recommend…some Ethics?

October 17th, 2018

May we recommend…some Ethics?
by Jessica Hays | September 30, 2018

The internet today is awash with recommender systems. Some pop to mind quickly – like Netflix’s suggestions of shows to binge, or Amazon’s nudges toward products you may like. These tools use your personal history on their site, along with troves of data from other users, to predict which items or videos are most likely to tempt you. Other examples of recommender systems include social media (“people you may know!”), online dating apps, news aggregators, search engines, restaurant finders, and music or video streaming services.
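Under the hood, many of these systems are some flavor of collaborative filtering: score the items you haven’t seen by how often similar users interacted with them. The toy sketch below uses an entirely made-up interaction matrix to show the basic idea:

```python
import numpy as np

# Rows = users, columns = items; 1 = purchased/watched.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
])

def recommend(user_idx, interactions, top_k=2):
    """Score unseen items by how often similar users interacted with them."""
    user = interactions[user_idx]
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(interactions, axis=1) * np.linalg.norm(user) + 1e-9
    sims = interactions @ user / norms
    sims[user_idx] = 0.0              # ignore self-similarity
    scores = sims @ interactions      # weight other users' items by similarity
    scores[user > 0] = -np.inf        # don't re-recommend items already seen
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0, interactions))  # indices of the top recommended items
```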


Screenshots of recommendations from LinkedIn, Netflix

Recommender systems have proliferated because there are benefits to be shared on both sides of the coin. Data-driven recommendations mean customers have to spend less time digging for the perfect product themselves. The algorithm does the heavy lifting – once they set off on the trail of whatever they’re seeking, they are guided to things they may have otherwise only found after hours of searching (or not at all). Reaping even more rewards, however, are the companies using the hook of an initial search to draw users further and further into their platform, increasing their revenue potential (the whole point!) with every click.

Not that innocent

In the last year, however, some of these recommender giants (YouTube, Facebook, Twitter) have gotten attention for the ways in which their algorithms have been unwitting enablers of political radicalization and the proliferation of conspiratorial content. It’s not surprising, in truth, that machine learning quickly discovered that humans are drawn to drama and tempted by content more extreme than what they originally set out to find. And if drama means clicks and clicks mean revenue, that algorithm has accomplished its task! Fortunately, research and methods are underway to redirect and limit radicalizing behavior.

However, dangers need not be as extreme as ISIS sympathizing to merit notice. Take e-commerce. With over 70% of the US population projected to make an online purchase this year, behind-the-scenes algorithms could be influencing the purchasing habits of a healthy majority of the population. The sheer volume of people impacted by recommender systems, then, is cause for a closer look.

It doesn’t take long to think up recommendation scenarios that could raise an eyebrow. While humans fold ethical considerations into their recommendations, algorithms programmed to drive revenue do not. For example, imagine an online grocery service, like Instacart, whose recommender notices a customer’s taste for junk food and dutifully suggests even more of it.


Junk food -> more junk food!

Can systems be held accountable?

While this may be great for retailers’ bottom line, it’s clearly not for our country’s growing waistlines. Some might argue that nothing has fundamentally changed from the advertising and marketing schemes of yore. Aren’t companies merely responding to the same old pushes and pulls of supply and demand – supplying what they know users would ask for if they had the knowledge/chance? Legally, of course, they’re right – nothing new is required of companies to suggest and sell ice cream, just because they now know intimately that a customer has a weakness for it.

Ethics, however, points the other way. Increased access to millions of users’ preferences and habits, and the opportunity to influence their behavior, are not negligible. Between the power of suggestion, knowledge of users’ tastes, and the lack of barriers between hitting “purchase” and having the treat delivered – what role should ethically responsible retailers play in helping their users avoid decisions that could negatively impact their well-being?

Unfortunately, there’s unlikely to be a one-size-fits-all approach across sectors and systems. However, it would be advantageous to see companies start integrating approaches that mitigate potential harm to users. While following the principles of respect for persons, beneficence, and justice laid out in the Belmont Report is always a good place to start, some specific approaches could include:

  • Providing users more transparency and access to the algorithm (e.g. being able to turn recommendations on or off for certain items; see the sketch after this list)
  • Maintaining manual oversight of sensitive topics where there is potential for harm
  • Allowing users to flag and provide feedback when they encounter a detrimental recommendation
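A minimal, hypothetical sketch of what the first and third ideas could look like in code (all class and field names here are invented, not an existing API):

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationFilter:
    opted_out_categories: set = field(default_factory=set)  # user-controlled toggles
    flagged_item_ids: set = field(default_factory=set)      # user feedback

    def opt_out(self, category: str) -> None:
        self.opted_out_categories.add(category)

    def flag(self, item_id: str) -> None:
        self.flagged_item_ids.add(item_id)

    def apply(self, recommendations: list) -> list:
        """Drop items in opted-out categories or previously flagged by the user."""
        return [
            r for r in recommendations
            if r["category"] not in self.opted_out_categories
            and r["id"] not in self.flagged_item_ids
        ]

# A user who has opted out of "junk_food" never sees those suggestions.
f = RecommendationFilter()
f.opt_out("junk_food")
print(f.apply([{"id": "a1", "category": "junk_food"},
               {"id": "b2", "category": "produce"}]))
```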

As users become more savvy and aware of the ways in which recommender systems influence their interactions online, it is likely that the demand for ethical platforms will only rise. Companies would be wise to take measures to get out ahead of these ethical concerns – for both their and their users’ sakes.