Archive for July, 2018

What happened to the Green Supply Chain? by Dave Owen

In 2012 I wrote a whitepaper on the growing opportunity, and in some cases requirement, for companies to build a green supply chain. The premise was that customers increasingly wanted to know where their products came from, and that some leading companies were responding by turning their best practices into an advertising opportunity.

The market moved quickly at the start of the decade, promoting sustainable sourcing and efficient transportation. Take, for example, these three apparel brands' PR moves from 2010-2012:

  • H&M Conscious Collection
  • Timberland’s Green Index
  • Patagonia’s Footprint Chronicles

Considering these moves today, they seem to have met very different results. H&M's brand would likely receive low marks for sustainability given recent news. Timberland may come in quite neutral if customers were asked. Patagonia, on the other hand, has a brand synonymous with sustainability. That said, its Footprint Chronicles had very limited success and impact.

The speed at which the market was changing at the turn of the decade was quite remarkable. There was a window where eco branding was going mainstream. So what happened? While many eco-brands have taken hold as alternatives to the most popular brands, in most segments the eco option has not won the market as many predicted.

Business trends are definitely part of the story. I'd argue much of the shift has to do with the rise of our Amazon culture. The equation changes when products are delivered in ubiquitous brown boxes rather than carried out of the store in green packaging. The price spread between the most generic product and the eco product is also greater in the online environment. And fads change: the 100-mile diet was a thing in 2010. Not so much anymore.


Google Trends chart

However, there have been market shocks that have made consumers wary. And those shocks relate more to data than to product. Take, for example, Volkswagen. The emissions scandal of 2015 was all about data: in effect, the company manipulated engine test data to cheat pollutant requirements. Nor is the story a unique one. In fact, Nissan admitted this month that it had faked emissions data as well (1).

Back at the time of the rise of green supply chain data there was a highly publicized event: a garment factory fire in Bangladesh that killed 112 workers in 2012. That factory manufactured for Walmart (2) and others; the collapse of a factory building the following year was similarly linked to Benetton (3). I can recall at the same time hearing stories about conditions at the factories my new iPhone came from.


Aftermath of the Tazreen Fashions Ltd. fire in Dhaka, Bangladesh, on November 25th, 2012. Source: AP/Polash Khan (2)

The data component of this story is quite interesting to consider. I believe many things were at play in the evolution of sustainable supply chain reporting five-plus years ago. Part of it had to do with companies realizing that the financial benefits weren't equal to the costs, and that the reporting was best limited to annual sustainability reports. From a consumer perspective, however, I am compelled to believe that a few events changed public perception of trust in the system. As customers became wary of the information and attention moved to other areas, the ability to build a lasting system also changed.

These hypotheses are difficult to prove out. I believe further investigation into the limited impact of green supply chain thinking will help inform how green supply chain 2.0 emerges (which I believe it will). The lesson is relevant to any data-oriented system: trust in the system is paramount, and designing with this trust in mind is an important job of the data scientist.

References:

1. (10 July 2018). “Nissan says emissions data for 19 models had been faked”. The Straits Times. Retrieved 25 July 2018.

2. (26 November 2012). “Wal-Mart: Bangladesh factory in deadly fire made clothes without our knowledge”. CBS News. Retrieved 25 July 2018.

3. Smithers, Rebecca (29 April 2013). “Benetton admits link with firm in collapsed Bangladesh building”. The Guardian. London. Retrieved 25 July 2018.

Privacy and Anti-Trust by Todd Young

I have long been concerned about what mergers and acquisitions mean for privacy agreements. Beyond my concerns about the notice-and-consent framework[1], it seemed to me that a change of ownership of the firm, as in the case of a merger or acquisition, is particularly problematic for individual privacy. After all, people agree to share their personal information in a particular context, and a change of ownership of the firm can radically change that context.

I could think of particularly sensational hypothetical mergers to elicit lively discussion with friends and colleagues. What if Facebook bought 23andMe and used the genetic information to map out families and proneness to cancer? (Facebook has shown interest in medical information[2], and a recent study showed how a large DNA database can be used to identify individuals.[3]) What if BlueCross signed an information sharing agreement with Ancestry? (Insurance companies are terrified of cheap DNA tests for consumers.[4]) Or what if Google bought FedEx and made business document shipping free, but you had to agree to let them scan the documents as part of their effort to organize the world's information? (OK, there's no evidence I could find of this, but it's basically the same deal you agree to with your Gmail account.)

How should we view such proposed mergers?  How should we decide if such a merger was harmful to consumers?  If such a merger was indeed likely to be harmful, who could we rely on to stop it from happening?  The Federal Trade Commission?  What monopoly would they be preventing?  What pricing power would the new company have if its apps were free all along?  Could it be considered a harm to accumulate too much personal information about people?

I will discuss each of these issues and then look at a real case from 2014, when Facebook purchased WhatsApp for $19B. I will examine the reactions to this merger in the EU and in the U.S. and offer my opinion on it.

Let's start with the legal framework of antitrust and its enforcers.[5] The FTC is a bipartisan federal agency with a unique dual mission to protect consumers and promote competition. The FTC enforces antitrust laws and will challenge anti-competitive mergers.[6] Section 7 of the Clayton Act prohibits mergers that "may substantially lessen competition or tend to create a monopoly."[7] The process of evaluation is to define the relevant market, test theories of harm, and examine efficiencies created by the merger.

Relevant Market

The FTC Horizontal Merger Guidelines[8], Section 4 (Market Definition), specify the line of commerce and allow for the identification of the market participants, so that market share can be considered. I propose to use Jaron Lanier's characterization of the social media companies as ones that collect personal information about consumers for the purpose of selling to other firms (chiefly advertisers) the promise of behavior modification of those same users.[9] The behavior modification could be to make a specific purchase of a product, to "like" a particular tweet, to read a certain news story, or even to attend a political rally.

To me, what is interesting about this market view is that it allows us to consider the actual harms that consumers are worried about[10], as well as the harms that typically get reviewed by the FTC – price and market power.  Let’s take a deeper look at price.

Price, in an economic sense, is the factor that balances supply and demand. In a simpler way of thinking, price is what you give up for what you get, and Facebook users give up their personal information to use the platform 'for free.' According to Thomas Lenard, senior fellow and president emeritus at the Technology Policy Institute, "price is a proxy for a whole bunch of attributes." In Jaron Lanier's description: "Let us spy on you and in return you'll get free services." If data is the new oil, personal data is the jet fuel that powered Google, Amazon, and Facebook into the top ten of the world's most valuable companies.[11] The price users pay is to allow Facebook to use their personal information.

In other words, the "price" consumers pay for social networking apps is the privacy of their personal information. Hence, we can look at the provision of personal information as the "price" of social networking, and at the same time look at harms caused by the loss of privacy. This interpretation is not mine alone. Wired magazine's June 2017 article "Digital Privacy is Making Antitrust Exciting Again" quotes Andreas Mundt, president of Germany's antitrust agency, the Bundeskartellamt, saying he was "deeply concerned privacy is a competitive issue."[12]

The FTC Horizontal Merger Guidelines clarify that non-price terms and conditions can also adversely affect consumers, and that non-price attributes can be considered as price. Further, if the market is one of buyer power, the same analysis can be conducted for the 'monopsony' situation. So we can stay within the existing framework of evaluating mergers as anti-competitive or not.

An anti-competitive merger would result in market power for the merged company, typically in terms of pricing power. However, as 'price' in this case contains the notion of private personal information, we can describe Facebook's information and money flows as in Figure 1 below.


Figure 1. Information and Money Flows for Facebook

The figure above illustrates that Facebook is the purchaser of personal information from users, who, in exchange, get otherwise free services from Facebook: a platform on which to build community and share (with that community, and with Facebook). Facebook rents this information to its customers, who sell ads or otherwise attempt to modify the user's behavior through a personalized environment.


The determination of whether a merger of another company with Facebook would be anti-competitive would thus include the effect of the price of service on consumers. Or, in the economics of social media: how would the required information sharing change for customers of the acquired company under the Facebook merger? Here are the pre-merger and post-merger information and money flows.


Figure 2. Pre-Merger Information and Money Flows for Facebook and WhatsApp


Figure 3. Post-Merger Information and Money Flows for Facebook and WhatsApp

I contend that through the merger, Facebook forced WhatsApp out of its high-privacy, paid business model into a low-privacy "free" market, despite WhatsApp's history of insisting on privacy for its customers. In the words of Tom Grossman, a senior branding consultant at Brand Union, speaking to the Guardian and quoted in Wired[13]:

“One of the reasons why so many millions have flocked to WhatsApp is the added level of privacy the brand provides. In a world where your every word echoes endlessly across the internet it was a communication channel where sharing could take place on a more contained level. However, much like Google’s acquisition of Nest and Facebook’s of Instagram, with this purchase consumers are suddenly associated with, and have their information accessible by a brand that they didn’t buy into. It’s this intrusion that can make it feel uncomfortable, as both you and your data are seized without your say-so.”

Theories of Harm

As I pointed out in the previous section, there was no financial harm to WhatsApp users: their paid service became free over time as it was integrated[14] into the Facebook family. But since price includes a component of privacy, we need to look beyond monetary harm, and Daniel Solove's Taxonomy of Privacy[15] is instructive in evaluating post-merger privacy harms. The taxonomy describes information flows and the issues related to them.


Figure 4. Solove’s Taxonomy of Privacy

In comparing the two apps and their business/privacy models, the areas of significant difference include:

1) Surveillance – WhatsApp's Koum said, "Respect for your privacy is coded into our DNA…  We don't know your likes, what you search for on the internet or collect your GPS location…[16]" Facebook's business is built around 'likes' and tracking you online.

2) Aggregation – the power of adding another layer of data, another network, into the Facebook graph must have been a key driver of the decision to acquire WhatsApp.

3) Secondary use – this is the crux of the outrage by WhatsApp users. They used the WhatsApp platform based on a user agreement with a strong focus on privacy and security, and then all that data got sucked into the Facebook graph.

4) Breach of confidentiality – I could not reasonably include this as a harm had Facebook actually gotten the affirmative consent of WhatsApp users, but their insistence on defaulting users into the agreement requires that I consider this a breach of confidentiality for WhatsApp users.

5) Invasions of privacy – it's hard not to include this as well, given the way WhatsApp consumers were assimilated into the Facebook graph.

Consumers are much more willing to share personal data with the promise of anonymization.[17] However, as personal information about individuals accumulates in the hands of large corporations, promises of anonymization become less viable, and consumers' trust becomes misplaced. Narayanan and Shmatikov showed that they could successfully de-anonymize social network data based purely on topology.[18] Michael Zimmer similarly showed how difficult it is to anonymize Facebook data used in research: despite researchers' attempts to anonymize Facebook data for a college student population that was the subject of their study, the school and students were identified within days of the study's release, putting the students' privacy at risk.[19] Also recall the DNA study referenced above (reference 3).
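To make the topology point concrete, here is a toy sketch of the general idea: even with every name stripped, the structure of a social graph can act as a fingerprint that links "anonymized" nodes back to an auxiliary network where identities are known. This is not Narayanan and Shmatikov's actual algorithm (theirs handles large, noisy, partially overlapping graphs); the graphs, names, and matching rule below are invented purely for illustration.

```python
# Toy illustration of topology-based re-identification (NOT the
# Narayanan-Shmatikov algorithm). All graphs and names are invented.

def signature(graph, node):
    """Structural fingerprint: a node's degree plus the sorted degrees of its neighbors."""
    return (len(graph[node]), tuple(sorted(len(graph[n]) for n in graph[node])))

# Auxiliary graph with known identities (e.g., a public social network).
aux = {
    "alice": {"bob", "carol", "dave", "erin"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice", "erin"},
    "erin":  {"alice", "dave", "frank"},
    "frank": {"erin"},
}

# "Anonymized" release of the same network with identifiers removed.
anon = {
    "u1": {"u2", "u3", "u4", "u5"},
    "u2": {"u1", "u3"},
    "u3": {"u1", "u2"},
    "u4": {"u1", "u5"},
    "u5": {"u1", "u4", "u6"},
    "u6": {"u5"},
}

# Link anonymized nodes to named nodes whenever the fingerprint is unique.
matches = {}
for u in anon:
    candidates = [v for v in aux if signature(aux, v) == signature(anon, u)]
    if len(candidates) == 1:
        matches[u] = candidates[0]

print(matches)  # {'u1': 'alice', 'u4': 'dave', 'u5': 'erin', 'u6': 'frank'}
```

In this toy network, four of the six "anonymous" users are re-identified from structure alone; real attacks exploit much richer structural signals, which is why graph topology should itself be treated as identifying information.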

Real Harm and a Slap on the Wrist

The FTC did not block the merger but focused on WhatsApp’s privacy promises to its users: “We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers. Further, if the acquisition is completed and WhatsApp fails to honor these promises, both companies could be in violation of Section 5 of the Federal Trade Commission (FTC) Act and, potentially, the FTC’s order against Facebook,” the letter states.[20]

In 2014, Mark Zuckerberg was quoted as saying, "We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously."[21] But all that changed, and consumers were left with a choice: leave, or stay and share their personal information. Even worse, despite the FTC's admonition to "obtain consumers' affirmative consent before doing so," Facebook took the opposite approach and opted consumers in by default.

The FTC also used the Facebook/WhatsApp merger as a backdrop to provide guidance to other companies about keeping their pre-merger privacy promises post-merger.[22]

In the Facebook-WhatsApp merger, both the EU and the U.S. criticized Facebook, and the EU fined it, but the FTC did not prevent the merger. The fines ($112M by the EU; both the EU and the U.S. ultimately focused on keeping privacy promises) were tiny compared with the $19B Facebook paid for WhatsApp.

However, in the end, it was not theoretical harms that led to the criticism and fines from the EU and the FTC. It was lies. Despite both companies being adamant about the security of WhatsApp consumer information, they went back on their promise. Apparently, the financial reward of merging the two data sets was just too much to resist, and the expected backlash, loss of customers, and fines were not enough to sway the final decision.

Efficiencies

The efficiencies gained by Facebook were 1) the repurposing of private data from WhatsApp into its own social network 'graph', and 2) the elimination of a paid-subscription, high-privacy competitor from the social networking landscape. Both of these efficiencies hurt consumers to the benefit of Facebook.

Conclusion

Mergers and acquisitions change the context of privacy for the user and can threaten the privacy that users originally agreed to.

I think the FTC got it wrong – I think WhatsApp was a "maverick" firm per the FTC Horizontal Merger Guidelines. From that document (emphasis mine): [my comments included like this]

2.1.5 Disruptive Role of a Merging Party

The Agencies consider whether a merger may lessen competition by eliminating a "maverick" firm, i.e., a firm that plays a disruptive role in the market to the benefit of customers. For example, if one of the merging firms [Facebook] has a strong incumbency position and the other merging firm [WhatsApp] threatens to disrupt market conditions with a new technology or business model [high-privacy], their merger can involve the loss of actual or potential competition. Likewise, one of the merging firms may have the incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices. A firm that may discipline prices [sharing of personal information] based on its ability and incentive to expand production rapidly [because of consumer interest in high-privacy apps] using available capacity also can be a maverick, as can a firm that has often resisted otherwise prevailing industry norms to cooperate on price setting or other terms of competition.

Their maverick-ness was providing consumers with a paid $1-per-year service with the promise of high privacy.  Facebook’s merger forced them to adopt the no-privacy model, essentially a much higher price for using that platform.  Users and critics wrote that WhatsApp had betrayed them, and I agree.

NOTES

[1] Notice-and-consent relies upon the informed consent of the user, who often acknowledges the agreement with a simple click. However, there are many well-documented reasons to believe that the consent is neither informed nor undertaken with complete free will: 1) agreements are notoriously long documents, 2) agreements are seldom written in clear, accessible language, 3) they can be changed at will by the company with notice, 4) the company often decides which "material" changes merit a notice, and 5) the company essentially holds all the power in the relationship, as switching costs for the consumer are very high given the utility-like status of many social networking apps.

[2] https://www.cnbc.com/2018/04/05/facebook-building-8-explored-data-sharing-agreement-with-hospitals.html

[3] https://www.biorxiv.org/content/early/2018/06/19/350231

[4] https://www.fastcompany.com/3022224/why-23andme-terrifies-health-insurance-companies

[5] https://www.ftc.gov/tips-advice/competition-guidance/guide-antitrust-laws/enforcers

Both the FTC and the U.S. Department of Justice (DOJ) Antitrust Division enforce the federal antitrust laws. In some respects, their authorities overlap, but in practice the two agencies complement each other. Over the years, the agencies have developed expertise in particular industries or markets. For example, the FTC devotes most of its resources to certain segments of the economy, including those where consumer spending is high: health care, pharmaceuticals, professional services, food, energy, and certain high-tech industries like computer technology and Internet services.

[6] https://www.ftc.gov/about-ftc/what-we-do

[7] https://www.ftc.gov/sites/default/files/attachments/merger-review/100819hmg.pdf

[8] https://www.ftc.gov/sites/default/files/attachments/merger-review/100819hmg.pdf

[9] Jaron Lanier, 2018, Ten Arguments for Deleting Your Social Media Accounts Right Now.

[10] https://www.ipsos.com/ipsos-mori/en-uk/personalisation-vs-privacy

[11] https://www.wired.com/2017/06/ntitrust-watchdogs-eye-big-techs-monopoly-data/

[12] https://www.wired.com/2017/06/ntitrust-watchdogs-eye-big-techs-monopoly-data/

[13] https://www.epic.org/privacy/internet/ftc/whatsapp/

[14] I think the verb assimilated is actually more apropos, especially because of the opt-in-by-default strategy of gained consent that earned them fines around the world.

[15] Solove.  A Taxonomy of Privacy.  https://www.law.upenn.edu/journals/lawreview/articles/volume154/issue3/Solove154U.Pa.L.Rev.477(2006).pdf

[16] https://www.epic.org/privacy/internet/ftc/whatsapp/

[17] https://www.ipsos.com/ipsos-mori/en-uk/personalisation-vs-privacy

[18] De-Anonymizing Social Networks. Arvind Narayanan and Vitaly Shmatikov. The University of Texas at Austin.

[19] http://www.sfu.ca/~palys/Zimmer-2010-EthicsOfResearchFromFacebook.pdf

[20] https://www.ftc.gov/news-events/press-releases/2014/04/ftc-notifies-facebook-whatsapp-privacy-obligations-light-proposed

[21] https://www.epic.org/privacy/internet/ftc/whatsapp/

[22] https://www.ftc.gov/news-events/blogs/business-blog/2015/03/mergers-privacy-promises

From Safety to Surveillance: When is it Okay to Spy on Your Kids? by Elizabeth Shulok

Imagine hiding a webcam in your teenager’s bedroom and recording them unaware. Most of us would recognize this as an invasion of privacy, and potentially child pornography if the camera records the child in a state of undress.

But install a webcam in your toddler’s bedroom, and it is an acceptable safety measure to make sure your child is safe when they are alone in their room.

At what point does recording your child transition from responsible parenting to an invasion of privacy?


Hello Barbie WiFi enabled doll

Nearly 3 years ago, parents filed suit against the makers of Hello Barbie, a WiFi enabled doll that uses voice recognition technology to allow a child to have a “conversation” with the doll. Although Mattel claims the doll complies with COPPA, or the Children’s Online Privacy Protection Act, the plaintiffs claim that Mattel did not do enough to protect the privacy of playmates of the child who owns the doll.

COPPA was created to protect the privacy of children under 13. One of the provisions required by COPPA is that any website or service geared towards children under 13 must get parental consent before collecting data on a child. And indeed, Hello Barbie does require a parent to set up an account and consent to allowing the toy to record their child's speech.

However, as the lawsuit contends, friends of the doll’s owner may also be recorded, despite their own parents not having consented to the collection of this personal data.

The debate on Hello Barbie has centered around this main privacy flaw. But does the toy, when used as intended, violate wiretapping laws?

To comply with COPPA, the makers of Hello Barbie must make the personal data collected from their child available for the parent to review. In this case, that data are the voice recordings. In essence, this toy allows a parent to eavesdrop on their child’s private conversations with the doll when they are not in the room.

Federally, a conversation can be recorded with the consent of only one party. Essentially, it is legal to record a conversation as long as you are a party to the conversation, even if others are unaware of the recording. In California and some other states, however, recording a conversation is illegal unless all parties are aware of and consent to the recording.


Recording consent laws by state
(http://www.golocalprov.com/news/anyone-can-tape-you)

Although recording another adult without their knowledge would clearly violate federal wiretapping laws, it is unclear if this applies to your own child. (Since the conversation is with a doll, this is essentially akin to recording someone talking to themselves, which would be illegal under federal law without their knowledge and consent.)

Looking to the law on recording children's conversations, much of the legal precedent involves child custody or abuse cases, in which one parent surreptitiously records a phone conversation between their child and another adult (often the other parent). In most cases the courts have found that, under one-party consent laws, a parent can vicariously consent on behalf of their child to the recording of the conversation. And in the case of two-party consent laws, the courts have ruled that recordings which would otherwise have been illegal under state law are admissible as evidence in court if the parent recorded the conversation in the child's best interests.

In other words, recording your child's conversation when you are not a party to the conversation is illegal, unless the recordings collect evidence of abuse. If you record the conversation and there is no evidence of abuse, then those recordings are illegal, although as long as you do not use them, it is unlikely to become an issue.

Looking at the Hello Barbie case, the parent consents to have recordings of their child collected by a third party. However, the doll can also be seen as a recording device by which the parent is listening in on their child’s private conversations. If the parent was a party to the conversation and included in the recordings, this would be legal under one-party consent laws, and likely legal in other cases as the parent is consenting on behalf of the child and is present during the conversation.

However, if the parent uses the recordings from Hello Barbie as a way to listen in on their child’s conversations when the child believes they are alone, this seems to amount to wiretapping. That could still be acceptable if the recording is done in the child’s best interests, but that would be a tough case to argue under normal circumstances.

As more toys become WiFi enabled and capture data on children, we need to consider what is really in the best interests of the child and whether some toys simply do not belong on the market.

REFERENCES

Neil, M. (2015, December 9). Moms sue Mattel, saying “Hello Barbie” doll violates privacy. Retrieved July 29, 2018, from http://www.abajournal.com/news/article/hello_barbie_violates_privacy_of_doll_owners_playmates_moms_say_in_lawsuit/

Dinger, Daniel R (2005). Should Parents Be Allowed to Record a Child’s Telephone Conversations When They Believe the Child Is in Danger?: An Examination of the Federal Wiretap Statute and the Doctrine of Vicarious Consent in the Context of a Criminal Prosecution. Seattle University Law Review, 28(4), 955-1027.

Adams, Allison B. (Fall 2013). War of the Wiretaps: Serving the Best Interests of the Children? Family Law Quarterly, 47(3), 485-504.

Children's Online Privacy Protection Rule ("COPPA"). Retrieved July 29, 2018, from https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule

China’s Social Credit System: Using Data Science to Rebuild Trust in Chinese Society by Jason Hunsberger

From 1966-1976, Mao Zedong and the Chinese Communist Party (CCP) waged an ideological war on their own citizens. Seeking to purge the country of "bourgeois" and "insufficiently revolutionary" elements, Mao closed the schools and set an army of high school and university students on the populace. The ensuing cultural strife pitted neighbor against neighbor and young against old, and destroyed families. Hundreds of thousands died.

Forty years later, China is still trying to recover from the social damage. Facing widespread government and corporate corruption and a lack of respect for the rule of law, the national party is seeking to transform Chinese society to be more "sincere", "moral", and "trustworthy." The means by which the CCP seeks to do this is a nationwide social credit system. Formally launched in 2014 after two decades of research and development, this system's stated goal is to:

“allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”
NOTE: Under heaven is a rough translation of one of the historical names of China.


Image 1: Goals of China’s social credit system as found in the Chinese news media (Source: MERICS)

By 2020, China hopes to have the system fully deployed across the entire country.

How will this nationwide social credit system help the nation rebuild trust in its public and private institutions and its people? By using data science to analyze all aspects of public life and distilling the results into a social credit score that represents a citizen's or a company's contribution to society. If your actions are a detriment to society, your social credit score will go down. If your actions are beneficial to society, your score will go up. Depending upon your score, you will either be restricted from aspects of society or be granted access to certain benefits.

The social credit system is deeply integrated into local government databases and large technology companies. It collects data from automatic facial recognition cameras deployed at city intersections, video game systems, search engines, financial systems, social media, political and religious groups you engage in, and more. With all this data, the social credit system creates a universal social credit score which can be integrated into all aspects of Chinese life.

Far from being a mere pipe dream, 36 pilot programs are currently deployed in some of China's largest cities.


Image 2: A map of the social credit pilot cities in China (Source: MERICS)

Here are some examples of the social credit system at work, collected from these pilot programs:

  • you wake up in the morning to find your face, name, and home address being placed on a billboard in your local neighborhood saying that you have a negative impact on society.
  • the ringtone on your phone is automatically changed so that all callers know that you have not paid a bill.
  • your girlfriend/boyfriend dumps you because your social credit score is integrated into your dating app.


Image 3: A cartoon on Credit China’s website on the impact of the social credit system on one’s dating life (Source: MERICS)

  • your company is restricted from public procurement systems and access to government land
  • you fly to a destination for business, but are unable to take your return flight because your social credit score decreased
  • you work for a business that gets punished for unscrupulous behavior and you are unable to move to another company because you are held responsible for the company’s behavior.
  • your company is blocked from having access to various government subsidies
  • you are able to rent apartments with no deposit, or rent bikes for 1.5 hours for free, while others have to pay extra.
  • you try to get away for that long awaited vacation only to find that you cannot fly or take the high-speed train to any of your desired destinations.
  • your company may not be allowed to participate on social media platforms
  • your academic performance, including whether you cheated on an exam or plagiarized a paper, can affect your social credit score

If that is not enough, anyone who runs afoul of the credit system will have their personal information added to a blacklist which is published in a searchable public database and on major news websites. This public shaming is considered a feature rather than a flaw of the system.


Image 4: A public billboard in Rongcheng showing citizens being shamed as a result of their social credit scores (SOURCE: Foreign Policy)

Obviously, the entire social credit system raises many issues. China is making a big bet that the citizenry will view this data collection, analysis, and scoring positively. To help, the Chinese government and national media have been actively promoting "big data-driven technological monitoring as providing objective, irrefutable measures of reality." This approach seems to ignore the many issues present in information systems regarding the bias of categories and of the algorithms used to analyze the data these systems contain. Additionally, it fails to address the problem of erroneous data falsely rendering damaging reputational judgements against people.

But putting aside whether or not these systems can reliably measure what they seek to measure, does creating a vast technological data collection system that is deeply integrated into all aspects of people's lives, and that is used to calculate a single 'trustworthiness' score displayed publicly if the score is too low, sound like an action intended to build trust amongst people? On its face, it does not. It sounds more like a system built to control people. And systems that are built to control people, at their heart, do not trust the people they are trying to control. For if the people could be trusted to make the decisions that were good for society, why would such a system be needed in the first place? So, fundamentally, can the CCP build trust with its citizens by taking an action that loudly tells them it does not trust them? Will a system built on distrust foster trust amongst the populace? Or will it signal to the entire populace that their fellow citizens, neighbors, friends, and family members might not be trustworthy? Instead of promoting trust within society, it is very possible that China's social credit system will actually further erode it.

References

Ohlberg, Mareike, Shazeda Ahmed, Bertram Lang. “Central Planning, Local Experiments: The complex implementation of China’s Social Credit System.” Mercator Institute for China Studies, December 12, 2017: https://www.merics.org/sites/default/files/2017-12/171212_China_Monitor_43_Social_Credit_System_Implementation.pdf

Mistreanu, Simina. “Life Inside China’s Social Credit Laboratory.” Foreign Policy, April 3, 2018: https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory/

Greenfeld, Adam. “China’s Dystopian Tech Could be Contagious.” The Atlantic, February 14, 2018: https://www.theatlantic.com/technology/archive/2018/02/chinas-dangerous-dream-of-urban-control/553097/

Bad Blood: Trusting Numbers with a Grain of Salt by Amy Lai

Digital health may be well on its way toward becoming the next "it" trend in technology. Over the past few years, the presence of consumer health technology companies has boomed. In 2010, digital health companies received roughly $1 billion in total investment funding, a less than hefty amount compared to other sectors (1). Fast-forward just six years, however, and that investment had grown more than eightfold (1). That's right: in 2016, digital health companies received nearly $8.1 billion in investment funding (1), with significant investments in wearable and biosensing technology (2), a move that perhaps echoes the increasing promise of digital healthcare.


Health investment categories

Indeed, the time seems ripe for a long-overdue revolution of traditional healthcare. With an ever-growing pool of data about our lifestyles captured through our smartphones, social media accounts, and even online shopping preferences, coupled with rapid advances in computing power and recommendation systems, it seems like technology is at the cusp of transforming how we think, perceive, and quantify our health. And we’re just starting to see its effects…and consequences.

Fitness trackers such as Fitbit and health-tracking apps like Apple Health Kit quantify an impressive range of our physical health. From our weight, to the number of steps we take, flights of stairs we climb, calories we burn, and to the duration and quality of our sleep, it appears that there are increasingly more tools to track nearly every aspect of our lives (3). Anyone else also slept for 7 hours, 18 minutes last night? As we curiously go through the colorful line graphs and bar charts that show our activity levels, have you ever wondered whether we can fully trust these metrics? How accurate are the numbers?

If a fitness app recorded that you burned 100 calories when you actually burned 90, how upset would you be? Probably not too upset, because mistakes happen. However, if you learned that a medical device determined that you had diabetes when you really didn't, how distraught would you be? Most likely more than a little. Notice the difference? Depending on context, consumers have different expectations of health-related product efficacy and tend to place greater trust in certain types of products, such as medical devices. Although somewhat anticlimactic, results from medical devices also warrant some skepticism, as they can (and do) have measurement error, and in some cases that error goes very wrong.

Founded in 2003, Theranos was touted as a revolutionary breakthrough in the blood-testing market. The company reinvented how blood-testing worked by introducing a “proprietary technology” that purportedly could detect numerous medical conditions from high cholesterol to cancer using a finger pinprick that only needed 1/100 to 1/1,000 of the amount of blood required by current standard blood-testing procedures (4). Theranos seemed unstoppable. Valued at $10 billion, the company raised more than $700 million in venture capital and partnered with national pharmacy chains including Walgreens and Safeway to open testing clinics for consumers and patients (4). However, the company quickly unraveled as its product turned out to be nothing more than a facade. After probing by the US Food and Drug Administration, Securities and Exchange Commission, and Centers for Medicare and Medicaid Services, the “proprietary technology” was found to be underdeveloped and inaccurate, reporting erroneous blood-test results with marked error (4). Consumers worried about supposed new tumors while others celebrated their allegedly improved cardiac health by stopping medications (5). Theranos fooled us and we (just might have) helped them do that.


Theranos

Theranos teaches us a subtle yet important lesson about privacy as contextual integrity. Because consumers don't often question the efficacy of health-related products, it behooves corporate executives to scientifically and ethically validate their products. It's important that such integrity plays a key role in organizational culture and is embedded at all management levels, to keep business leaders in check and minimize consumer harm. Doing so helps prevent violations of consumer expectations and gives consumers a reason to continue placing their trust in these products. However, health-related products are not perfect or infallible. Because products inevitably have some margin of error, it also behooves consumers to understand that product metrics may not represent the whole truth and nothing but the truth. Those numbers aren't likely to be wholly correct. It's essential that we adopt a more realistic set of expectations about health-related products, as well as a healthier level of skepticism, the next time we're told we only burned 10 calories or that only a few droplets of blood are needed to detect cancer.

These shifts in the mindset and expectations of businesses and consumers may be needed to help keep both sides accountable to each other.

References:
1. https://www.forbes.com/sites/forbestechcouncil/2017/05/05/why-digital-health-startups-have-yet-to-reach-unicorn-status/#3b5f23188cdb
2. https://rockhealth.com/reports/q1-2017-business-as-usual-for-digital-health/
3. https://www.nytimes.com/2017/12/26/technology/big-tech-health-care.html
4. https://www.vanityfair.com/news/2016/09/elizabeth-holmes-theranos-exclusive
5. https://www.wsj.com/articles/the-patients-hurt-by-theranos-1476973026

Social Credit: a Chinese experiment by Yang Yang Qian

Imagine applying for a loan, but first the bank must check your Facebook profile for a credit report. As odd as it feels for consumers in the United States, for consumers in China, this is already part of an experiment with social credit.

The Chinese government has plans to implement a Social Credit System by 2020: a big data approach to regulating the behavior of individuals, companies, and other institutions such as NGOs. Essentially, under the Social Credit System, a company or individual would be given a set of ratings that summarizes how well behaved they are in various categories, such as propensity for major credit offenses. The platform is intended to aggregate a huge amount of data about companies and individuals, including general information, compliance records, and even real-time data where possible. Eventually, the system will span government data sources and incorporate commercial sources. If the platform can be implemented successfully, it should strengthen the Chinese government's ability to enforce regulations and policies. For now, the system is not yet in place. Instead, the government has licensed private companies and some municipal governments to build their own social credit systems as pilot programs. One of the higher profile projects is Alibaba's Sesame Credit.

As individual consumers in the United States, many of us are used to having personal credit scores. The Social Credit System, however, looks to be much more comprehensive. One key difference is that the scope of the system is intended to cover all "market participants": both individuals and companies are subject to it. For instance, some of the more ambitious objectives aim to track polluting companies through their real-time emissions records. Moreover, the stated focus of the system is to promote best practices in the marketplace. Proponents argue that such a system will help China overcome a multitude of societal ills: food safety scandals, public corruption, and tax evasion.

But on the other side of the coin, there are fears that such a system could be used as a mass disciplinary machine targeted at the citizenry. A good rating might allow users to borrow favorably on credit or find a good deal through Alibaba's hotel partners. A bad rating might bar them from traveling; for instance, nine million low-score users were barred from buying domestic plane tickets. With these risks of material harm in mind, some have voiced fears that certain activities might be promoted or punished, a sort of subtle social coercion. Part of the problem is that Alibaba isn't too clear about which specific actions will be punished. On the one hand, they've released some high-level descriptions of the categories they score: credit history, online behavior, ability to fulfill contracts, personal profile completeness, and interpersonal relationships. On the other hand, the BBC reported that Sesame Credit makes no secret that it will punish specific online behaviors:

“Someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility,” Li Yingyun, Sesame’s technology director told Caixin, a Chinese magazine, in February.

Perhaps Sesame Credit just used this as an evocative example, or perhaps they meant it in all earnestness. In any case, the fact that a large private conglomerate, with encouragement from a government, is essentially piloting an opaque algorithm to enforce good behavior did not sit well with some human rights watch groups. And rather alarmingly, some of the current scoring systems supposedly also adjust an individual's score based on the behavior of their social circle. This might encourage the use of social pressure to turn discontents into compliant citizens. Are we looking at the prototype for a future government social credit system that will leverage social pressure for mass citizen surveillance? Some sort of Scarlet Letter meets Orwellian dystopia?

Wait. There is probably too much alarmist speculation about the Social Credit System in Western media right now. As usual, there is a lot of nuance and context surrounding this experiment. After all, the large central system envisioned by Beijing is not yet implemented. The social credit platforms that do exist are either separate pilots run by local municipal governments, or run by private companies like Alibaba or Tencent. We should also keep in mind that the current Sesame Credit system, along with its peculiarities, is designed to reward loyal Alipay users rather than measure some abstract "citizen trustworthiness." In Chinese media, citizens generally seem to see the need for a social credit system. Additionally, there is an active media discussion within China about specific concerns, such as the risk of privacy invasions by the companies that host the data, or opinions on what kinds of data should be used to calculate the scores. It remains to be seen whether the central government system will adopt any of the features of these pilot programs, and how much leeway it will allow those companies to continue this experiment.

Alternative measures of credit risk  by Simon Hodgkinson

People in developing economies can increasingly use their private information as a way to secure credit, but is this a good thing?

Easy access to credit is essential to the proper functioning of many high-income economies. Governments, corporations, and individuals all rely on the ability to borrow money. Lenders offer credit based upon verifiable income, expenses, and a person’s pattern of previous loan repayments.

Unfortunately, this system doesn’t fit the circumstances of many people in developing economies, who tend not to have bank accounts or a history of formal borrowing. This means that they are effectively excluded from getting credit, and may miss out on economic opportunities as a result.

In an attempt to address this, recent research has focused on identifying behaviors (other than loan repayment) that might provide an alternative way to predict someone’s creditworthiness.


“You call every day to check up on me. You are so responsible….”

A technological solution

People in developing economies may have had limited interactions with the formal banking system, but they generally have a long history with their cell phone operators. For example, in Ghana, only 40% of people have a bank account, but 83% of adults own a cell phone.

This provides a useful opportunity, because cell phone operators collect a rich data set that provides remarkable insight into many aspects of our lives. They know all the locations that we visit, when we go there and for how long, who we communicate with, how quickly those people respond to our messages, and so on.

Armed with this data, machine learning researchers have built models that can outperform traditional credit scores. For example, it turns out that the size and strength of someone's social network (as indicated by call records) is a good predictor of how likely they are to repay a loan. Another strong indicator is mobility: people who visit three or more locations every day have better repayment rates than those who stay at home or visit only one other location.
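As a rough sketch of how such features might be derived, consider the following toy example. It is not any actual lender's model; the call log, location pings, names, and the rule of thumb at the end are all invented for illustration, loosely mirroring the two signals described above (the size of someone's calling network, and the number of distinct places they visit per day).

```python
# Toy sketch: turning hypothetical phone metadata into credit features.
# All records, names, and thresholds are invented for illustration only.
from collections import defaultdict
from datetime import date

# Hypothetical call log: (caller, callee, date)
calls = [
    ("ama", "kofi", date(2018, 7, 1)),
    ("ama", "esi",  date(2018, 7, 1)),
    ("ama", "kofi", date(2018, 7, 2)),
    ("ama", "yaw",  date(2018, 7, 3)),
]

# Hypothetical location pings: (person, cell_tower_id, date)
pings = [
    ("ama", "tower_12", date(2018, 7, 1)),
    ("ama", "tower_38", date(2018, 7, 1)),
    ("ama", "tower_07", date(2018, 7, 1)),
    ("ama", "tower_12", date(2018, 7, 2)),
]

def features(person):
    # Social network size: number of distinct contacts called.
    contacts = {callee for caller, callee, _ in calls if caller == person}
    # Mobility: average number of distinct places visited per active day.
    places_per_day = defaultdict(set)
    for p, tower, day in pings:
        if p == person:
            places_per_day[day].add(tower)
    mobility = sum(len(s) for s in places_per_day.values()) / max(len(places_per_day), 1)
    return {"network_size": len(contacts), "avg_places_per_day": mobility}

feats = features("ama")
# A real lender would feed features like these into a trained model;
# here we just apply an invented rule of thumb echoing the article's
# observation that visiting three or more places a day correlates with repayment.
likely_to_repay = feats["network_size"] >= 3 and feats["avg_places_per_day"] >= 3
print(feats, likely_to_repay)
```

A real system would feed hundreds or thousands of such features into a trained classifier rather than a hand-written threshold, which is part of why these models are so hard for borrowers to inspect or contest.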

These new models of behavior have given some lenders the confidence to offer credit based upon access to the borrower’s cell phone data. This is an avenue that didn’t exist before, and it can be transformational for those who benefit from it.

There is also a benefit to the wider population. The money that people borrow usually supports their local economy. In addition, if lenders can accurately identify people who are unlikely to repay their loans, they are able to cut their overall costs and can afford to charge lower rates to the remaining pool of borrowers.

Paying with privacy?

Although there are clear benefits to these advances, borrowers should also think about the costs and potential risks.

People who want to borrow in this way must submit to extensive and potentially invasive collections of data. By installing a tracking app on their phones, they allow lenders to see not only where they work, but what time they show up, when they leave, and where they go afterwards. Lenders can track where they shop, where their kids go to school, and which of their contacts they are closest to.


“You can trust us with your information…”

Of course, cell phone providers already collect this data and therefore have the same insights. The question is whether people still view it as appropriate in the context of lending rather than the provision of cell phone service.

It is possible that customers are inured to widespread data collection, or that they view it as reasonable when compared to the benefits they gain by being able to borrow money. They may be assuming that their data is secure, and won’t be sold or misused.

Another drawback is that while machine learning techniques are very good at making predictions, they can be complex and suffer from a lack of transparency. This makes it difficult to challenge the outcome. One particular algorithm used by lenders takes account of over 5,000 distinct data points, which means that borrowers are unlikely to be able to identify and correct errors, or to understand exactly how their data is being used to arrive at a decision.

Finally, relying on cell phone data may increase the overall pool of potential borrowers, but it does so unequally. Going back to the example of Ghana, there is a gender gap of approximately 16% in cell phone ownership, so this method of lending may embed or reinforce other inequalities.

In summary, advances in technology are helping people gain access to credit in new ways. While this is a positive development, it comes with potential privacy risks, and further work is required to ensure that benefits are extended fairly to all groups.

This blog post is a version of a talk I gave at the 2018 ACM Designing Interactive Systems (DIS) Conference based on a paper written with Nick Merrill and John Chuang, entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. Find out more on our project page, or download the paper: [PDF link] [ACM link]

In recent years, brain computer interfaces, or BCIs, have shifted from far-off science fiction, to medical research, to the realm of consumer-grade devices that can sense brainwaves and EEG signals. Brain computer interfaces have also featured more prominently in corporate and public imaginations, such as Elon Musk’s project that has been said to create a global shared brain, or fears that BCIs will result in thought control.

Most of these narratives and imaginings about BCIs tend to be utopian, or dystopian, imagining radical technological or social change. However, we instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain computer interfaces onto the everyday and mundane worlds we already live in? How can we explore how BCI uses, benefits, and labor practices may not be evenly distributed when they get adopted?

Brain computer interfaces allow the control of a computer from neural output. In recent years, several consumer-grade brain-computer interface devices have come to market. One example is the Neurable – it’s a headset used as an input device for virtual reality systems. It detects when a user recognizes an object that they want to select. It uses a phenomenon called the P300 – when a person either recognizes a stimulus, or receives a stimulus they are not expecting, electrical activity in their brain spikes approximately 300 milliseconds after the stimulus. This electrical spike can be detected by an EEG, and by several consumer BCI devices such as the Neurable. Applications utilizing the P300 phenomenon include hands-free ways to type or click.

Demo video of a text entry system using the P300

Neurable demonstration video
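To make the P300 mechanism more concrete, here is a minimal sketch of the classic epoch-and-average approach: cut the EEG into windows around each stimulus, baseline-correct, average across trials, and look for a positive deflection roughly 300 ms after stimulus onset. The sampling rate, window boundaries, function names, and synthetic data are all assumptions made for illustration; this is not Neurable's algorithm, nor the fictional API discussed below.

```python
# Minimal sketch of P300-style detection on a single EEG channel.
# Sampling rate, windows, and data are assumed/synthetic for illustration.
import numpy as np

FS = 256                      # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.6          # epoch: 200 ms before to 600 ms after each stimulus
P300_WINDOW = (0.25, 0.45)    # look for the positive deflection ~300 ms post-stimulus

def p300_score(eeg, stim_samples, fs=FS):
    """Average epochs around stimulus onsets and return the mean amplitude
    in the P300 window, relative to the pre-stimulus baseline."""
    pre, post = int(PRE * fs), int(POST * fs)
    epochs = []
    for s in stim_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue                          # skip epochs that run off the recording
        epoch = eeg[s - pre: s + post].astype(float)
        epoch -= epoch[:pre].mean()           # baseline-correct with the pre-stimulus mean
        epochs.append(epoch)
    erp = np.mean(epochs, axis=0)             # average over trials to suppress noise
    lo = pre + int(P300_WINDOW[0] * fs)
    hi = pre + int(P300_WINDOW[1] * fs)
    return erp[lo:hi].mean()                  # larger value -> stronger P300-like response

# Toy usage with synthetic data: noise plus an injected bump after each "target" stimulus.
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, 60 * FS)
targets = [5 * FS, 15 * FS, 25 * FS]
for s in targets:
    eeg[s + int(0.3 * FS): s + int(0.4 * FS)] += 3.0   # simulated P300 bump
print(p300_score(eeg, targets))               # noticeably above the noise floor
```

Even this simplified version hints at why per-trial detection is hard: the response emerges clearly only after averaging over repeated presentations, which has real consequences for how, and how fast, P300-based interfaces can be used.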

We base our analysis on this already-existing capability of brain computer interfaces, rather than the more fantastical narratives (at least for now) of computers being able to clearly read humans’ inner thoughts and emotions. Instead, we create a set of scenarios that makes use of the P300 phenomenon in new applications, combined with the adoption of consumer-grade BCIs by new groups and social systems.

Stories about BCIs' hypothetical future as devices that make life easier for "everyone" abound, particularly in Silicon Valley, as shown in recent research. These tend to be very totalizing accounts, neglecting the nuance of multiple everyday experiences. However, past research shows that the introduction of new digital technologies ends up unevenly shaping practices and arrangements of power and work – from the introduction of computers in workplaces in the 1980s, to the introduction of email, to forms of labor enabled by algorithms and digital platforms. We use a set of design fictions to interrogate these potential arrangements in BCI systems, situated in different types of workers' everyday experiences.

Design Fictions

Design fiction is a practice of creating conceptual designs or artifacts that help create a fictional reality. We can use design fiction to ask questions about possible configurations of the world and to think through issues that have relevance and implications for present realities. (I’ve written more about design fiction in prior blog posts).

We build on Lindley et al.'s proposal to use design fiction to study the "implications for adoption" of emerging technologies. They argue that design fiction can "create plausible, mundane, and speculative futures, within which today's emerging technologies may be prototyped as if they are domesticated and situated," which we can then analyze with a range of lenses, such as those from science and technology studies. For us, this lets us think about technologies beyond ideal use cases. It lets us attend to the power dynamics and inequalities that people experience today, and interrogate how emerging technologies might be taken up, reused, and reinterpreted in a variety of existing social relations and systems of power.

To explore this, we thus created a set of interconnected design fictions that exist within the same fictional universe, showing different sites of adoptions and interactions. We build on Coulton et al.’s insight that design fiction can be a “world-building” exercise; design fictions can simultaneously exist in the same imagined world and provide multiple “entry points” into that world.

We created 4 design fictions that exist in the same world: (1) a README for a fictional BCI API, (2) a Stack Overflow question from a programmer working with the API, (3) an internal business memo from an online dating company, and (4) a set of forum posts by crowdworkers who use BCIs to do content moderation tasks. These are downloadable at our project page if you want to see them in more detail. (I'll also note that we conducted our work in the United States, and that our authorship of these fictions, as well as our interpretations and analysis, is informed by this sociocultural context.)

Design Fiction 1: README documentation of an API for identifying P300 spikes in a stream of EEG signals


First, this is README documentation of an API for identifying P300 spikes in a stream of EEG signals. The P300, or "oddball," response is a real phenomenon: a spike in brain activity when a person is either surprised or sees something that they're looking for. This fictional API helps identify those spikes in EEG data. We made this fiction in the form of a GitHub page to emphasize the everyday nature of this documentation, from the viewpoint of a software developer. In the fiction, the algorithms underlying this API come from a specific set of training data gathered in a controlled environment in a university research lab. The API discloses and openly links to the data its algorithms were trained on.

In our creation and analysis of this fiction, for us it surfaces ambiguity and a tension about how generalizable the system’s model of the brain is. The API with a README implies that the system is meant to be generalizable, despite some indications based on its training dataset that it might be more limited. This fiction also gestures more broadly toward the involvement of academic research in larger technical infrastructures. The documentation notes that the API started as a research project by a professor at a University before becoming hosted and maintained by a large tech company. For us, this highlights how collaborations between research and industry may produce artifacts that move into broader contexts. Yet researchers may not be thinking about the potential effects or implications of their technical systems in these broader contexts.

Design Fiction 2: A question on StackOverflow


Second, a developer, Jay, is working with the BCI API to develop a tool for content moderation. He asks a question on Stack Overflow, a real website for developers to ask and answer technical questions. He questions the API’s applicability beyond lab-based stimuli, asking “do these ‘lab’ P300 responses really apply to other things? If you are looking over messages to see if any of them are abusive, will we really see the ‘same’ P300 response?” The answers from other developers suggest that they predominantly believe the API is generalizable to a broader class of tasks, with the most agreed-upon answer saying “The P300 is a general response, and should apply perfectly well to your problem.”

This fiction helps us explore how and where contestation may occur in technical communities, and where discussion of social values or social implications could arise. We imagine the first developer, Jay, as someone who is sensitive to the way the API was trained and who questions its applicability to a new domain. However, he encounters commenters who believe that physiological signals are always generalizable and who don’t engage with questions of broader applicability. The community’s answers reinforce notions not just of what the technical artifacts can do, but of what the human brain can do. The Stack Overflow answers draw on a popular, though critiqued, notion of the “brain-as-computer,” framing the brain as a processing unit with generic processes that take inputs and produce outputs. Here, that notion is reinforced in the social realm of Stack Overflow.

Design Fiction 3: An internal business memo for a fictional online dating company

 

Meanwhile, SparkTheMatch.com, a fictional online dating service, is struggling to moderate and manage inappropriate user content on its platform. SparkTheMatch wants to use the P300 signal to tap into people’s tacit “gut feelings” for recognizing inappropriate content, and is planning to implement a content moderation process using crowdsourced workers wearing BCIs.

In creating this fiction, we use the memo to provide insight into some of the practices and labor supporting the BCI-assisted review process from the company’s perspective. The memo suggests that using BCIs with Mechanical Turk will “help increase efficiency” for crowdworkers while still giving them a fair wage. The crowdworkers sit and watch a stream of flashing content while wearing a BCI, and the P300 response subconsciously identifies when workers recognize supposedly abnormal content. Yet we find it debatable whether this process actually improves the material conditions of the Turk workers: the amount of content they must review in order to earn that supposedly fair wage may not be reasonable.
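
To make the memo’s workflow and the wage question concrete, here is a deliberately minimal Python sketch. Everything in it – the `route_for_review` helper, the assumption that the P300 pipeline emits one boolean per presented item, and the pay and presentation-rate numbers – is our own invention for illustration, not a detail taken from the fiction.

```python
def route_for_review(items, p300_flags):
    """Route items whose presentation coincided with a detected P300
    response (one boolean per trial from the BCI pipeline) to a slower,
    explicit human-review pass."""
    return [item for item, flagged in zip(items, p300_flags) if flagged]

def implied_hourly_wage(pay_per_item_usd, items_per_minute):
    """Implied hourly earnings if a worker is paid per item reviewed and
    items are presented at a fixed rate by the BCI-assisted interface."""
    return pay_per_item_usd * items_per_minute * 60

# Hypothetical per-trial output of the fictional P300 API.
flagged = route_for_review(["profile_142", "profile_188", "profile_205"],
                           [False, True, False])
print(flagged)                        # ['profile_188']

# Whether the wage is "fair" hinges on the presentation rate the platform
# chooses: the same per-item pay yields very different hourly rates.
print(implied_hourly_wage(0.01, 20))  # 12.0 USD/hour at 20 items per minute
print(implied_hourly_wage(0.01, 5))   # 3.0 USD/hour at 5 items per minute
```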

SparkTheMatch employees creating the Mechanical Turk tasks don’t directly interact with the BCI API. Instead they use pre-defined templates created by the company’s IT staff, a much more mediated interaction compared to the programmers and developers reading documentation and posting on Stack Overflow. By this point, the research lab origins of the P300 API underlying the service, and the questions about its broader applicability, are hidden. From the viewpoint of SparkTheMatch staff, the BCI aspects of their service just “work,” allowing managers to design their workflows around them while the inner workings of the P300 API remain obfuscated.

Design Fiction 4: A crowdworker forum for workers who use BCIs

 

Fourth, the Mechanical Turk workers who do the SparkTheMatch content moderation work share their experiences on a crowdworker forum. These crowdworkers’ experiences of, and relationships to, the P300 API are strikingly different from those of the people and organizations in the other fictions – notably, the API is something they never get to see explicitly. Aspects of the system are blackboxed or hidden away. While one poster discusses some errors that occurred, there is ambiguity about whether the fault lies with the BCI device or with the data processing; EEG signals are not easily human-comprehensible, which makes feedback mechanisms difficult. Other posters blame the user for the errors. This is problematic given the precariousness of these workers’ positions, as crowdworkers tend to have few forms of recourse when they encounter problems with tasks.

For us, these forum accounts are interesting because they describe a situation in which the BCI user is not the person who obtains the real benefits of its use. It is the company, SparkTheMatch, not the BCI end-users, that gains the most from the BCIs.

Some Emergent Themes and Reflections

From these design fictions, several salient themes arose for us. First, by looking at BCIs from the perspective of several everyday experiences, we can see different types of work done in relation to BCIs – whether that is doing software development, being a client of a BCI service, or using a BCI to conduct work. Our fictions are inspired by others’ research on existing labor relationships and power dynamics in crowdwork and distributed content moderation (in particular, work by the scholars Lilly Irani and Sarah T. Roberts). Here we also critique utopian narratives of brain-controlled computing that suggest BCIs will create new efficiencies, seamless interactions, and increased productivity, and instead investigate questions about the role of technology in shaping and reproducing social and economic inequalities.

Second, we use the design fictions to surface questions about the situatedness of brain sensing, questioning how generalizable and universal physiological signals really are. Building on prior accounts of situated actions and extended cognition, we note that the specific and the particular should be taken into account in the design of supposedly generalizable BCI systems.

These themes arose iteratively and were somewhat surprising to us, particularly just how different the BCI system looks from each of the perspectives in the fictions. We initially set out to create a rather mundane fictional platform or infrastructure: an API for BCIs. From this starting point we brainstormed other types of direct and indirect relationships people might have with our BCI API, to create multiple “entry points” into its world. We iterated on various types of relationships and artifacts – there are end-users, but also clients, software engineers, and app developers, each of whom might interact with an API in different ways, directly or indirectly. Through iterations of different scenarios (a BCI-assisted tax filing service was considered at one point), and through discussions with colleagues (some of whom posed questions about what labor in higher education might look like with BCIs), we slowly came to think that looking at the work practices implicated in these different relationships and artifacts would be a fruitful way to focus our designs.

Toward “Platform Fictions”

In part, we think that creating design fictions in mundane technical forms like documentation or Stack Overflow posts might help make the artifacts legible to software engineers and technical researchers. More generally, this leads us to think about what it might mean to put platforms and infrastructures at the center of design fiction (and to build on insights from platform studies and infrastructure studies). Adoption and use do not occur in a vacuum; rather, technologies get adopted into and by existing sociotechnical systems. We can use design fiction to open the “black boxes” of emerging sociotechnical systems. And given that infrastructures are often relegated to the background in everyday use, surfacing and focusing on an infrastructure helps us situate our design fictions in the everyday and mundane, rather than in dystopia or utopia.

We find that using a digital infrastructure as a starting point helps surface multiple subject positions in relation to the system at different sites of interaction, beyond those of end-users. From each of these subject positions, we can see where contestation may occur, and how the system looks different. We can also see how assumptions, values, and practices surrounding the system at a particular place and time can be hidden, adapted, or changed by the time the system reaches others. Importantly, we also try to surface ways the system gets used in potentially unintended ways – we don’t think that the academic researchers who developed the API to detect brain signal spikes imagined that it would be used in a system of arguably exploitative crowd labor for content moderation.

Our fictions try to blur distinctions that suggest what happens in “labs” is separate from “the outside world,” instead highlighting their entanglements. Given that much of BCI research currently takes place in research labs, we raise this point to argue that BCI researchers and designers should also be concerned about the implications of adoption and application. This helps give us insight into the responsibilities (and complicity) of researchers and builders of technical systems. The recent controversies around Cambridge Analytica’s use of Facebook’s API point to ways in which the building of platforms and infrastructures isn’t neutral, and to why it is incumbent upon designers, developers, and researchers to raise social concerns and potential inequalities related to adoption and appropriation by others.

Concluding Thoughts

This work isn’t meant to be predictive. The fictions and analysis present our specific viewpoints by focusing on several types of everyday experiences. One can read many themes into our fictions, and we encourage others to do so. But we find that focusing on potential adoptions of an emerging technology in the everyday and mundane helps surface the contours of debates that might occur – debates that might not be immediately obvious when thinking about BCIs in the abstract, or when social implications are framed only in terms of “worst case scenarios” and dystopias. We hope that this work can raise awareness among BCI researchers and designers about the social responsibilities they may have for their technology’s adoption and use. In future work, we plan to use these fictions as research probes to understand how technical researchers envision BCI adoptions and their social responsibilities, building on some of our prior projects. And for design researchers, we show that using a fictional platform in design fiction can help raise important social issues about technology adoption and use from multiple perspectives beyond those of end-users, and can help surface issues that might arise from unintended or unexpected adoption and use. Using design fiction to interrogate sociotechnical issues present in the everyday can help us better think about the futures we desire.


Crossposted with Richmond’s blog, The Bytegeist

 

California’s Trailblazing Consumer Privacy Law by Anamika Sinha

I’m sure that when most of us stumble across a trivia question, the first answer key that comes to mind is Google, the ultimate search engine. But have you ever wondered what exactly Google does with the data it gathers from the billions of searches it handles per day (about 3.5 billion, to be precise)? That’s a question that not even the ultimate search engine will answer perfectly for you. In fact, Alastair Mactaggart, one of the main masterminds behind the new California privacy bill, once reminisced about bumping into a Google engineer at a cocktail party, where the engineer casually mentioned that if consumers had a glimpse into what the company knew about its users, they would be shocked. That ultimately gave Mactaggart a clear incentive to advocate for privacy rights for the general public, which resulted in a new piece of legislation known as the “California Consumer Privacy Act of 2018”.

Highlights of the Law: Passed on June 28, 2018, the law will be enforced starting January 1, 2020. It requires businesses to disclose all categories of data they have collected about a given user over the last twelve months, including the names of third-party entities with whom they have shared that data. It also requires businesses to offer users a simple, intuitive way to opt out of having their information stored and shared. Businesses may still monetize aggregate data, as long as individuals cannot be identified from it. Lastly, businesses may charge a different price to customers who have opted out only if they can prove that the difference is related to the value provided by the consumer’s data. Compared to the GDPR, which took effect in the European Union earlier this year, there are a lot of similarities. One key difference is that while the GDPR levies huge fines on companies for non-compliance, the California law falls short of that and instead gives sweeping enforcement powers to the state attorney general.

Reaction from Businesses: How do major businesses feel about stricter privacy guidelines? While Facebook supposedly supports this law (truth be told, mainly because the alternative measure headed for the November California ballot was more onerous – it was polling at an 80% approval rate, and ballot measures are much harder to change than legislation), most other businesses in Silicon Valley, and in the nation generally, seem opposed to the legislation.

Tech giants like Google and Uber were frustrated that they were not consulted and that such important legislation was passed in record time without proper deliberation on the pros and cons. Even if their concerns are somewhat valid, the reality is that they will have to make massive changes to support the law and risk a high percentage of their customers opting out of data collection and sharing. This puts their entire business model at risk.

Another major argument from opponents was that the privacy issue should be addressed by the US Congress, not by individual state governments. This leads to a pressing question: how will this law in the world’s fifth-largest economy affect the rest of the country? Due to many factors, it will likely push a significant number of companies to apply the same rules to all their customers. Expecting businesses to filter users by IP address so that the law applies only to their California users is quite unreasonable, which means that users worldwide will benefit.

What’s Next? I’m sure nobody imagined that a Google employee’s words at a cocktail party would have such a large domino effect on our tech world, just as we can’t predict how many people this law will ultimately affect. But it’s safe to say that businesses will use their money and influence to orchestrate some changes to the law. It’s hard to imagine that tech giants will sit still and allow the law to take effect in its current form. Regardless of what happens next, when you combine Europe’s GDPR with this California initiative, one can rest assured that the world wide web is about to undergo some major changes from the standpoint of privacy.

Venmo is Conditioning Your Expectation of Privacy by James Beck

Add to your ever-growing list of apps and services to pay attention to: Venmo.

Venmo is an interesting service. Its core ability is to quickly and conveniently facilitate transactions between you and your contacts. Need to pay your friends back for a night at the bar that got a little out of control? Splitting a check at a restaurant, but don’t want the awkward pain of asking the server to divide the bill into odd fractions? Or maybe you have more illicit purposes and you’d rather just not deal with cash.


Who carries cash anymore? Just Venmo me

Regardless of your usage, Venmo is a wildly convenient means of moving money around. Users are fairly easy to find and setting up your account with your bank account information or a credit card is fairly straightforward as well.

So what’s the catch?

Well, there doesn’t seem to be one – for now.

For a long time there has been something of an urban myth that Venmo makes money by micro-investing the cash that sits in its service while you wait for it to be transferred to your bank account (users must specifically request that Venmo transfer their balance, so significant sums of money can be stuck in Venmo limbo for long durations). In reality, Venmo long generated little income for itself beyond the usual and expected credit card transaction fees.

The catch is that Venmo is following a model paved by many services before it: attract a ludicrous volume of users, generate information, and figure out later how exactly to capitalize on those users and their information.

Venmo has now begun to partner with businesses to let users pay for real goods and services directly through the application, rather than just serving as a way to pay your friends back for that late-night burrito at the cash-only place you always forget is cash-only. Venmo plans to charge these businesses a transaction fee in exchange for the convenience of its service – the thought being that users have become so accustomed to paying their peers through Venmo that they will begin to expect the same payment convenience from businesses. This also feels fairly reasonable.

However, Venmo has another facet to its service that is worth stopping to consider.

In a way that feels oddly tacked on, Venmo also serves as a social media site of sorts. Transactions show up by default in a “news feed” style interface, along with all of your contacts’ transactions. The amounts are hidden, but the participants and the user-entered descriptions of the transactions are visible. What you’re left with is a running history of who is paying whom, and for what.


Venmo’s Social Media Feed

It’s a strange and mostly benign feature. Transactions can be set to private, and even if you don’t keep things private you still have the autonomy to write the description of your transaction and keep it fairly innocuous.

What we should be concerned with, though, is how the addition of this social media dimension to a service that is supposed to be just a tool for monetary transactions is conditioning users for the future of Venmo. By making the social feed the application’s default behavior, Venmo is slowly normalizing the public sharing of our transactions. This is not something that has traditionally been seen as “normal.”

Our credit card purchases have historically been seen as very private. However, now that we’ve normalized sharing payments between individuals, will there be any protest when we start sharing our transactions with businesses by default? Will there be protest when Venmo starts using our past transactions to serve ads to us and our contacts? Or will we shrug our shoulders because the new de facto business model is to attract users to a free or wildly inexpensive tool of convenience and then eventually introduce targeted advertisements based on our behavior with that service?

I fear it’s the latter, and you should too – we’ve normalized sharing so many details of our lives, and in doing so have gradually eroded our expectations of privacy. The way you move your money around is about to become the next pool of big data to analyze, and the only fanfare to mark the occasion will be an update to a privacy policy that we’ll all forget we agreed to.