What happened to the Green Supply Chain?

by Dave Owen

In 2012 I wrote a whitepaper on the growing opportunity, and in some cases requirement, for companies to build a green supply chain. The premise was based on growing customer desire to understand where their products came from – and the reaction by some leading companies to use their best practices as an advertising opportunity.

The market moved quickly at the start of the decade – promoting sustainable sourcing and efficient transportation. Take, for example, three apparel brands that made sustainability PR moves in 2010-2012:

  • H&M Conscious Collection
  • Timberland’s Green Index
  • Patagonia’s Footprint Chronicles

Considering these moves today, they seem to have met with very different results. On the H&M side, the company’s brand would likely receive low marks for sustainability given recent news. Timberland may come across as quite neutral if customers were asked. Patagonia, on the other hand, has a brand synonymous with sustainability. That said, its Footprint Chronicles had very limited success and impact.

The speed at which the market was changing at the turn of the decade was quite remarkable. There was a window when eco-branding was going mainstream. So what happened? While many eco-brands have taken hold as alternatives to the most popular brands, in most segments the eco option has not won the market as many predicted.

Business trends are definitely part of the story. I’d argue much of the shift has to do with the rise of our Amazon culture. The equation changes when products are delivered in ubiquitous brown boxes rather than carried out of the store in green packaging. The price spread between the most generic product and the eco-product is now greater in the online environment. And fads change: the 100-mile diet was a thing in 2010. Not so much anymore.


Google Trends
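For readers who want to check a fad’s trajectory themselves, here is a minimal sketch of pulling this kind of trend data, assuming the unofficial pytrends package (pip install pytrends); the search term and timeframe here are my guesses, not necessarily the original query behind the chart above.

```python
# Hypothetical reconstruction of a Google Trends chart via pytrends
# (an unofficial client); the term and timeframe are assumptions.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(kw_list=["100 mile diet"], timeframe="2008-01-01 2018-07-01")
interest = pytrends.interest_over_time()  # weekly search interest, scaled 0-100
print(interest["100 mile diet"].tail())
```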

However, there have been market shocks that have made consumers wary. And those shocks relate more to data than to product. Take, for example, Volkswagen. The emissions scandal of 2015 was all about data. In effect, the company manipulated engine test data to cheat emissions requirements. The story is not a unique one. In fact, Nissan admitted this month that it had faked emissions data as well(1).

Back at the time of the rise of green supply chain data there was a highly publicized event: a fire at a Bangladesh garment factory killed 112 workers in 2012. That factory manufactured for Walmart(2) and others; Benetton(3) was later linked to the 2013 Rana Plaza building collapse. I can recall at the same time hearing stories about conditions in the factories my new iPhone came from.


Aftermath of the Tazreen Fashions Ltd. fire in Dhaka, Bangladesh on November 25th, 2012. Source: AP/Polash Khan

The data component of this story is quite interesting to consider. I believe many things were at play in the evolution of sustainable supply chain reporting five-plus years ago. Part of it had to do with companies realizing that the financial benefits didn’t equal the costs, and that the reporting was best limited to annual sustainability reports. From a consumer perspective, however, I am compelled to believe that a few events changed the public’s trust in the system. As customers became wary of the information and attention moved to other areas, the ability to build a lasting system also changed.

These hypotheses are difficult to prove out. I believe further investigation into the limited impact of green supply chain thinking will help inform how green supply chain 2.0 emerges (and I believe it will). The lesson is relevant to data-oriented systems: trust in the system is paramount, and designing with this trust in mind is an important job of the data scientist.

References:

1. (10 July 2018). “Nissan says emissions data for 19 models had been faked”. The Straits Times. Retrieved 25 July 2018.

2. (26 November 2012). “Wal-Mart: Bangladesh factory in deadly fire made clothes without our knowledge”. CBS News. Retrieved 25 July 2018.

3. Smithers, Rebecca (29 April 2013). “Benetton admits link with firm in collapsed Bangladesh building”. The Guardian. London. Retrieved 25 July 2018.

Privacy and Anti-Trust

by Todd Young

I have long been concerned about what mergers and acquisitions mean for privacy agreements.  Beyond my concerns about the notice-and-consent framework[1], it seemed to me that a change of ownership of the firm, as in the case of a merger or acquisition, is particularly problematic for individual privacy.  After all, people agree to share the personal information in a particular context, and a change of ownership of the firm can radically change that context.

I could think of particularly sensational hypothetical mergers to elicit lively discussion with friends and colleagues:  What if Facebook bought 23andMe and used the genetic information to map out families and cancer-proneness?  (Facebook has shown interest in medical information[2]. A recent study showed how a large DNA database can be used to identify individuals.[3])   What if BlueCross signed an information sharing agreement with Ancestry?  (Insurance companies are terrified of cheap DNA tests for consumers.[4])  Or what if Google bought FedEx and made business document shipping free, but you had to agree to allow them to scan the documents as part of their effort to organize the world’s information?  (OK, there’s no evidence I could find of this, but it’s basically the same deal you agree to with your Gmail account.)

How should we view such proposed mergers?  How should we decide if such a merger was harmful to consumers?  If such a merger was indeed likely to be harmful, who could we rely on to stop it from happening?  The Federal Trade Commission?  What monopoly would they be preventing?  What pricing power would the new company have if its apps were free all along?  Could it be considered a harm to accumulate too much personal information about people?

I will discuss each of these issues and then look at a real case from 2014, when Facebook purchased WhatsApp for $19B. I will examine the reactions to this merger in the EU and in the U.S. and offer my opinion on it.

Let’s start with the legal framework of antitrust and its enforcers[5].  The FTC is a bipartisan federal agency with a unique dual mission to protect consumers and promote competition.  The FTC enforces antitrust laws and will challenge anti-competitive mergers.[6]  Section 7 of the Clayton Act prohibits mergers that “may substantially lessen competition or tend to create a monopoly.”[7]  The process of evaluation is to define the relevant market, test theories of harm, and examine efficiencies created by the merger.

Relevant Market

The FTC Horizontal Merger Guidelines[8], section 4, Market Definition, specifies the line of commerce and allows for the identification of the market participants, so that market share can be considered.  I propose to use Jaron Lanier’s characterization of the social media companies as ones that collect personal information about consumers for the purpose of selling to other firms (chiefly advertisers) the promise of behavior modification of those same users.[9]  The behavior modification could be to make a specific purchase of a product, or to “like” a particular tweet, or to read a certain news story, or even to attend a political rally.

To me, what is interesting about this market view is that it allows us to consider the actual harms that consumers are worried about[10], as well as the harms that typically get reviewed by the FTC – price and market power.  Let’s take a deeper look at price.

Price, in an economic sense, is the factor that balances supply and demand.  In a simpler way of thinking, price is what you give up for what you get, and Facebook users give up their personal information to use the platform ‘for free.’  According to Thomas Lenard, senior fellow and president emeritus at the Technology Policy Institute, “price is a proxy for a whole bunch of attributes.”  In Jaron Lanier’s description: “Let us spy on you and in return you’ll get free services.”  If data is the new oil, personal data is the jet fuel that powered Google, Amazon, and Facebook into the ranks of the world’s ten most valuable companies.[11]   The price users pay is to allow Facebook to use their personal information.

In other words, the “price” consumers pay for social networking apps is the privacy of their personal information.  Hence, we can look at the provision of personal information as the “price” of social networking, and at the same time look at harms caused by the loss of privacy.  This interpretation is not mine alone.  Wired magazine’s June 2017 article “Digital Privacy is Making Antitrust Exciting Again” quotes Andreas Mundt, president of Germany’s antitrust agency, the Bundeskartellamt, saying he was “deeply concerned privacy is a competitive issue.”[12]

The FTC Horizontal Merger Guidelines clarify that non-price terms and conditions can also adversely affect consumers, and that non-price attributes can be considered as price.  Further, if the market is one of buyer power, the same analysis can be conducted for the ‘monopsony’ situation.  So we can stay within the existing framework of evaluating mergers as anti-competitive or not.

An anti-competitive merger would result in market power for the merged company, typically in terms of pricing power.  However, as ‘price’ in this case contains the notion of private personal information, we can describe Facebook’s information and money flows as in Figure 1 below.


Figure 1. Information and Money Flows for Facebook

The figure above illustrates that Facebook is the purchaser of personal information from users, who, in exchange, get otherwise free services from Facebook – a platform on which to build community and share (with that community, and with Facebook).  Facebook rents this information to its customers, who sell ads or otherwise attempt to modify the user’s behavior based on a personalized environment.


The determination of whether a merger of another company with Facebook would be anti-competitive would thus include the effect of the price of service on consumers.  Or, in the economics of social media: how would the required information sharing for customers of the acquired company change under the Facebook merger?  Here are the pre-merger and post-merger information and money flows.


Figure 2. Pre-Merger Information and Money Flows for Facebook and WhatsApp


Figure 3. Post-Merger Information and Money Flows for Facebook and WhatsApp

I contend that through the merger, Facebook forced WhatsApp out of its high-privacy, paid business model into a low-privacy “free” market, despite WhatsApp’s history of insisting on privacy for its customers.  In the words of Tom Grossman, a senior branding consultant at Brand Union, from the Guardian, and quoted in Wired[13]:

“One of the reasons why so many millions have flocked to WhatsApp is the added level of privacy the brand provides. In a world where your every word echoes endlessly across the internet it was a communication channel where sharing could take place on a more contained level. However, much like Google’s acquisition of Nest and Facebook’s of Instagram, with this purchase consumers are suddenly associated with, and have their information accessible by a brand that they didn’t buy into. It’s this intrusion that can make it feel uncomfortable, as both you and your data are seized without your say-so.”

Theories of Harm

As I pointed out in the previous section, there was no financial harm to WhatsApp users: their paid service became free over time as it was integrated[14] into the Facebook family.  But since price includes a component of privacy, we need to look beyond monetary harm, and Daniel Solove’s Taxonomy of Privacy[15] is instructive in the evaluation of post-merger privacy harms. The Taxonomy describes information flows and the issues related to them.


Figure 4. Solove’s Taxonomy of Privacy

In comparing the two apps and their business/privacy models, areas of significant difference include:

1. Surveillance – WhatsApp’s Koum said “Respect for your privacy is coded into our DNA…   We don’t know your likes, what you search for on the internet or collect your GPS location…[16]”  Facebook’s business, by contrast, is built around ‘likes’ and tracking you online.
2. Aggregation – the power of adding another layer of data, another network, into the Facebook graph must have been a key driver of the decision to acquire WhatsApp.
3. Secondary use – this is the crux of the outrage by WhatsApp users.  They used the WhatsApp platform based on a user agreement with a strong focus on privacy and security, and then all that data got sucked into the Facebook graph.
4. Breach of confidentiality – I could not reasonably have included this as a harm had Facebook actually gotten the affirmative consent of WhatsApp users, but its insistence on defaulting users into the agreement requires that I consider this a breach of confidentiality for WhatsApp users.
5. Invasions of privacy – it is hard not to include this as well, given the way WhatsApp consumers were assimilated into the Facebook graph.

Consumers are much more willing to share personal data with the promise of anonymization[17].  However, as personal information about individuals accumulates in the hands of large corporations, promises of anonymization become less viable, and consumers’ trust becomes misplaced.  Narayanan and Shmatikov showed that they could successfully de-anonymize social network data based purely on topology.[18]  Michael Zimmer similarly showed how difficult it is to anonymize Facebook data used in research.  Despite researchers’ attempts to anonymize Facebook data for a college student population that was the subject of their research, the school and students were identified within days of the release of the study, putting the students’ privacy at risk.[19]  Also recall the DNA study referenced earlier (reference [3]).
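To make the topology point concrete, here is a toy sketch of structural re-identification. This is not Narayanan and Shmatikov’s actual algorithm (theirs is far more robust); it simply matches nodes whose structural fingerprint happens to be unique, which is enough to show how identity can leak from graph shape alone.

```python
# Toy structural re-identification: NOT Narayanan & Shmatikov's algorithm,
# just an illustration that graph shape alone can leak identity.
from collections import defaultdict

# Labeled auxiliary graph (e.g., scraped from a public social network).
aux = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave", "erin"},
    "dave":  {"alice", "carol"},
    "erin":  {"carol"},
}

# The same people, released "anonymized": names replaced with numbers,
# but the edges (the topology) left intact.
anon = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {1, 3}, 5: {3}}

def signature(graph, node):
    """Structural fingerprint: own degree plus sorted neighbor degrees."""
    return (len(graph[node]), tuple(sorted(len(graph[n]) for n in graph[node])))

aux_by_sig = defaultdict(list)
for name in aux:
    aux_by_sig[signature(aux, name)].append(name)

for node in anon:
    candidates = aux_by_sig[signature(anon, node)]
    if len(candidates) == 1:
        print(f"node {node} re-identified as {candidates[0]}")
    else:
        print(f"node {node} ambiguous among {sorted(candidates)}")
```

Even in this five-person network, structure alone pins down three of the five “anonymous” nodes; real attacks start from such seed matches and propagate outward through the rest of the graph.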

Real Harm and a Slap on the Wrist

The FTC did not block the merger but focused on WhatsApp’s privacy promises to its users: “We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers. Further, if the acquisition is completed and WhatsApp fails to honor these promises, both companies could be in violation of Section 5 of the Federal Trade Commission (FTC) Act and, potentially, the FTC’s order against Facebook,” the letter states.[20]

In 2014, Mark Zuckerberg was quoted as saying “We are absolutely not going to change plans around WhatsApp and the way it uses user data.  WhatsApp is going to operate completely autonomously.”[21]  But all that changed, and consumers were left with a choice: leave, or stay and share their personal information.  Even worse, despite the FTC’s admonition to “obtain consumers’ affirmative consent before doing so”, Facebook took the opposite approach and opted consumers in by default.

The FTC also used the Facebook/WhatsApp merger as a backdrop to provide guidance to other companies about keeping their pre-merger privacy promises post-merger.[22]

In the Facebook-WhatsApp merger, the EU and US both criticized and fined Facebook, but the FTC did not prevent the merger.  The fines ($112M by the EU, while the US ultimately focused on keeping privacy promises) were tiny compared with the $19B Facebook paid for WhatsApp.

However, in the end, it was not theoretical harms that led to the criticism and fines from the EU and the FTC.  It was lies.  Despite both companies being adamant about the security of WhatsApp consumer information, they went back on their promise.  Apparently, the financial reward of merging the two data sets was just too much to resist, and the expected backlash, loss of customers, and fines were not enough to sway the final decision.

Efficiencies

The efficiencies gained by Facebook were 1) the repurposing of private data from WhatsApp into its own social network ‘graph’, and 2) the elimination of a paid-subscription, high-privacy competitor from the social networking landscape.  Both of these efficiencies hurt consumers to the benefit of Facebook.

Conclusion

Mergers and acquisitions change the context of privacy for the user and can threaten individual privacy as a result.

I think the FTC got it wrong – I think WhatsApp was a Maverick company per the FTC Horizontal Merger Guidelines document.  From that document (emphasis mine): [my comments included like this]

2.1.5  Disruptive Role of a Merging Party The Agencies consider whether a merger may lessen competition by eliminating a “maverick” firm, i.e., a firm that plays a disruptive role in the market to the benefit of customers. For example, if one of the merging firms [Facebook] has a strong incumbency position and the other merging firm [WhatsApp] threatens to disrupt market conditions with a new technology or business model [high-privacy], their merger can involve the loss of actual or potential competition. Likewise, one of the merging firms may have the incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices. A firm that may discipline prices [sharing of personal information] based on its ability and incentive to expand production rapidly [because of consumer interest in high-privacy apps] using available capacity also can be a maverick, as can a firm that has often resisted otherwise prevailing industry norms to cooperate on price setting or other terms of competition.

Their maverick-ness was providing consumers with a paid, $1-per-year service with the promise of high privacy.  Facebook’s merger forced WhatsApp to adopt the no-privacy model, essentially a much higher price for using that platform.  Users and critics wrote that WhatsApp had betrayed them, and I agree.

NOTES

[1] Notice-and-consent relies upon the informed consent of the user, who often acknowledges the agreement with a simple click.  However, there are many well-documented reasons to believe that the consent is neither informed nor undertaken with complete free will: 1) agreements are notoriously long documents, 2) agreements are seldom written in clear, accessible language, 3) they can be changed at will by the company with notice, 4) the company often decides what “material” changes merit a notice, 5) the company essentially holds all the power in the relationship, as switching costs for the consumer are very high given the utility-like status of many social networking apps.

[2] https://www.cnbc.com/2018/04/05/facebook-building-8-explored-data-sharing-agreement-with-hospitals.html

[3] https://www.biorxiv.org/content/early/2018/06/19/350231

[4] https://www.fastcompany.com/3022224/why-23andme-terrifies-health-insurance-companies

[5] https://www.ftc.gov/tips-advice/competition-guidance/guide-antitrust-laws/enforcers

Both the FTC and the U.S. Department of Justice (DOJ) Antitrust Division enforce the federal antitrust laws. In some respects, their authorities overlap, but in practice the two agencies complement each other. Over the years, the agencies have developed expertise in particular industries or markets. For example, the FTC devotes most of its resources to certain segments of the economy, including those where consumer spending is high: health care, pharmaceuticals, professional services, food, energy, and certain high-tech industries like computer technology and Internet services.

[6] https://www.ftc.gov/about-ftc/what-we-do

[7] https://www.ftc.gov/sites/default/files/attachments/merger-review/100819hmg.pdf

[8] https://www.ftc.gov/sites/default/files/attachments/merger-review/100819hmg.pdf

[9] Jaron Lanier, 2018, Ten Arguments for Deleting Your Social Media Accounts Right Now.

[10] https://www.ipsos.com/ipsos-mori/en-uk/personalisation-vs-privacy

[11] https://www.wired.com/2017/06/ntitrust-watchdogs-eye-big-techs-monopoly-data/

[12] https://www.wired.com/2017/06/ntitrust-watchdogs-eye-big-techs-monopoly-data/

[13] https://www.epic.org/privacy/internet/ftc/whatsapp/

[14] I think the verb assimilated is actually more apropos, especially because of the opt-in-by-default strategy of gained consent that earned them fines around the world.

[15] Solove.  A Taxonomy of Privacy.  https://www.law.upenn.edu/journals/lawreview/articles/volume154/issue3/Solove154U.Pa.L.Rev.477(2006).pdf

[16] https://www.epic.org/privacy/internet/ftc/whatsapp/

[17] https://www.ipsos.com/ipsos-mori/en-uk/personalisation-vs-privacy

[18] De-Anonymizing Social Networks.  Arvind Narayanan and Vitaly Shmatikov. The University of Texas at Austin.

[19] http://www.sfu.ca/~palys/Zimmer-2010-EthicsOfResearchFromFacebook.pdf

[20] https://www.ftc.gov/news-events/press-releases/2014/04/ftc-notifies-facebook-whatsapp-privacy-obligations-light-proposed

[21] https://www.epic.org/privacy/internet/ftc/whatsapp/

[22] https://www.ftc.gov/news-events/blogs/business-blog/2015/03/mergers-privacy-promises

From Safety to Surveillance: When is it Okay to Spy on Your Kids?

by Elizabeth Shulok

Imagine hiding a webcam in your teenager’s bedroom and recording them without their knowledge. Most of us would recognize this as an invasion of privacy, and potentially child pornography if the camera records the child in a state of undress.

But install a webcam in your toddler’s bedroom, and it is an acceptable safety measure to make sure your child is safe when they are alone in their room.

At what point does recording your child transition from responsible parenting to an invasion of privacy?


Hello Barbie WiFi enabled doll

Nearly 3 years ago, parents filed suit against the makers of Hello Barbie, a WiFi enabled doll that uses voice recognition technology to allow a child to have a “conversation” with the doll. Although Mattel claims the doll complies with COPPA, or the Children’s Online Privacy Protection Act, the plaintiffs claim that Mattel did not do enough to protect the privacy of playmates of the child who owns the doll.

COPPA was created to protect the privacy of children under 13. One of its provisions is that any website or service geared toward children under 13 must get parental consent before collecting data on a child. And indeed, Hello Barbie requires a parent to set up an account and consent to allowing the toy to record their child’s speech.

However, as the lawsuit contends, friends of the doll’s owner may also be recorded, despite their own parents not having consented to the collection of this personal data.

The debate on Hello Barbie has centered around this main privacy flaw. But does the toy, when used as intended, violate wiretapping laws?

To comply with COPPA, the makers of Hello Barbie must make the personal data collected from their child available for the parent to review. In this case, that data are the voice recordings. In essence, this toy allows a parent to eavesdrop on their child’s private conversations with the doll when they are not in the room.

Federally, a conversation can be recorded with the consent of only one party. Essentially, it is legal to record a conversation as long as you are a party to the conversation, even if others are unaware of the recording. In California and some other states, however, recording a conversation is illegal unless all parties are aware of and consent to the recording.


Recording consent laws by state
(http://www.golocalprov.com/news/anyone-can-tape-you)
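To make the distinction concrete, here is a purely illustrative sketch (not legal advice) of applying the one-party vs. all-party rule described above; the handful of all-party states listed is an assumed subset for the example, not a complete or authoritative map.

```python
# Illustrative only, not legal advice. The set of all-party-consent states
# below is a partial, assumed subset for the example.
ALL_PARTY_CONSENT_STATES = {"California", "Florida", "Illinois",
                            "Maryland", "Pennsylvania", "Washington"}

def recording_is_lawful(state, parties, consenting):
    """Apply the consent rule for a given state to a recorded conversation."""
    if state in ALL_PARTY_CONSENT_STATES:
        return consenting == parties      # every participant must consent
    return consenting >= 1                # one-party / federal baseline

print(recording_is_lawful("California", parties=2, consenting=1))  # False
print(recording_is_lawful("Texas", parties=2, consenting=1))       # True
```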

Although recording another adult without their knowledge would clearly violate federal wiretapping laws, it is unclear if this applies to your own child. (Since the conversation is with a doll, this is essentially akin to recording someone talking to themselves, which would be illegal under federal law without their knowledge and consent.)

Looking to the law on recording children’s conversations, much of the legal precedent involves child custody or abuse cases, in which one parent surreptitiously records a phone conversation between their child and another adult (often the other parent). In most cases the courts have found that under one-party consent laws, a parent can vicariously consent on behalf of their child to the recording of the conversation. And in the case of two-party consent laws, courts have ruled that recordings which would otherwise have been illegal under state law are permissible as evidence in court if the parent recorded the conversation in the child’s best interests.

In other words, recording your child’s conversation when you are not a party to it is illegal – unless the recording collects evidence of abuse. If you record the conversation and there is no evidence of abuse, then the recording is illegal, although as long as you do not use it, it is unlikely to become an issue.

Looking at the Hello Barbie case, the parent consents to have recordings of their child collected by a third party. However, the doll can also be seen as a recording device by which the parent is listening in on their child’s private conversations. If the parent was a party to the conversation and included in the recordings, this would be legal under one-party consent laws, and likely legal in other cases as the parent is consenting on behalf of the child and is present during the conversation.

However, if the parent uses the recordings from Hello Barbie as a way to listen in on their child’s conversations when the child believes they are alone, this seems to amount to wiretapping. That could still be acceptable if the recording is done in the child’s best interests, but that would be a tough case to argue under normal circumstances.

As more toys become WiFi enabled and capture data on children, we need to consider what is really in the best interests of the child and whether some toys simply do not belong on the market.

REFERENCES

Neil, M. (2015, December 9). Moms sue Mattel, saying “Hello Barbie” doll violates privacy. Retrieved July 29, 2018, from http://www.abajournal.com/news/article/hello_barbie_violates_privacy_of_doll_owners_playmates_moms_say_in_lawsuit/

Dinger, Daniel R (2005). Should Parents Be Allowed to Record a Child’s Telephone Conversations When They Believe the Child Is in Danger?: An Examination of the Federal Wiretap Statute and the Doctrine of Vicarious Consent in the Context of a Criminal Prosecution. Seattle University Law Review, 28(4), 955-1027.

Adams, Allison B. (Fall 2013). War of the Wiretaps: Serving the Best Interests of the Children? Family Law Quarterly, 47(3), 485-504.

Children’s Online Privacy Protection Rule (“COPPA”). Retrieved July 29, 2018, from https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule

China’s Social Credit System: Using Data Science to Rebuild Trust in Chinese Society

by Jason Hunsberger

From 1966-1976 Mao Zedong and the Chinese Communist Party (CCP) waged an ideological war on its own citizens. Seeking to purge the country of “bourgeois” and “insufficiently revolutionary” elements, Mao closed the schools and set an army of high school and university students on the populace. The ensuing cultural strife turned neighbor against neighbor, young against old, and destroyed families. Hundreds of thousands died.

Forty years later, China is still trying to recover from the social damage. Facing widespread government and corporate corruption and a lack of respect for the rule of law, the party is seeking to transform Chinese society to be more “sincere”, “moral”, and “trustworthy.” The means by which the CCP seeks to do this is a nationwide social credit system. Formally launched in 2014 after two decades of research and development, the system’s stated goal is to:

“allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”
NOTE: Under heaven is a rough translation of one of the historical names of China.


Image 1: Goals of China’s social credit system as found in the Chinese news media (Source: MERICS)

By 2020, China hopes to have the system fully deployed across the entire country.

How will this nationwide social credit system help the nation rebuild trust in its public and private institutions and its people? By using data science to analyze all aspects of public life, and distilling the results into a social credit score that represents a citizen’s or a company’s contribution to society. If your actions are a detriment to society, your social credit score goes down. If your actions are beneficial to society, your score goes up. Depending upon your score, you will either be restricted from aspects of society or be granted access to certain benefits.

The social credit system is deeply integrated into local government databases and large technology companies. It collects data from automatic facial recognition cameras deployed at city intersections, video game systems, search engines, financial systems, social media, political and religious groups you engage in, and more. With all this data, the social credit system creates a universal social credit score which can be integrated into all aspects of Chinese life.
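As a purely hypothetical sketch of those mechanics – the event weights and score thresholds below are invented for illustration, not drawn from any published Chinese system – the core scoring logic might look like this:

```python
# Entirely hypothetical: weights and thresholds are invented for
# illustration and are not drawn from any published Chinese system.
EVENT_WEIGHTS = {
    "paid_bill_on_time":      +5,
    "charitable_donation":   +10,
    "unpaid_court_judgment": -50,
    "jaywalking_on_camera":   -5,
}

def update_score(score, events):
    """Apply each observed event's weight to the running score."""
    for event in events:
        score += EVENT_WEIGHTS.get(event, 0)
    return score

def privileges(score):
    """Gate access to services on the resulting score."""
    if score < 950:
        return "blacklisted: no flights, no high-speed rail"
    if score > 1050:
        return "rewarded: deposit-free rentals, fast-track services"
    return "neutral"

score = update_score(1000, ["paid_bill_on_time", "unpaid_court_judgment"])
print(score, "->", privileges(score))  # 955 -> neutral
```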

This is no mere pipedream: there are currently 36 pilot programs deployed in some of China’s largest cities.


Image 2: A map of the social credit pilot cities in China (Source: MERICS)

Here are some examples of the social credit system at work, collected from these pilot programs:

  • you wake up in the morning to find your face, name, and home address being placed on a billboard in your local neighborhood saying that you have a negative impact on society.
  • the ringtone on your phone is automatically changed so that all callers know that you have not paid a bill.
  • your girlfriend/boyfriend dumps you because your social credit score is integrated into your dating app.


Image 3: A cartoon on Credit China’s website on the impact of the social credit system on one’s dating life (Source: MERICS)

  • your company is restricted from public procurement systems and access to government land
  • you fly to a destination for business, but are unable to take your return flight because your social credit score decreased
  • you work for a business that gets punished for unscrupulous behavior and you are unable to move to another company because you are held responsible for the company’s behavior.
  • your company is blocked from having access to various government subsidies
  • you are able to rent apartments with no deposit, or rent bikes for 1.5 hours for free, while others have to pay extra.
  • you try to get away for that long awaited vacation only to find that you cannot fly or take the high-speed train to any of your desired destinations.
  • your company may not be allowed to participate on social media platforms
  • your academic performance, including whether you cheated on an exam or plagiarized a paper, can affect your social credit score

If that is not enough, anyone who runs afoul of the credit system will have their personal information added to a blacklist which is published in a searchable public database and on major news websites. This public shaming is considered a feature rather than a flaw of the system.


Image 4: A public billboard in Rongcheng showing citizens being shamed as a result of their social credit scores (SOURCE: Foreign Policy)

Obviously, the entire social credit system raises many issues. China is making a big bet that the citizenry will view this data collection, analysis, and scoring positively. To help, the Chinese government and national media have been actively promoting “big data-driven technological monitoring as providing objective, irrefutable measures of reality.” This approach seems to ignore the many issues present in information systems regarding the bias of categories and of the algorithms used to analyze the data these systems contain. Additionally, it fails to address the problems with erroneous data falsely rendering damaging reputational judgements against the people.

But putting aside whether or not these systems can reliably measure what they seek to measure: does creating a vast data collection system, deeply integrated into all aspects of people’s lives and used to calculate a single ‘trustworthiness’ score that is displayed publicly if it falls too low, sound like an action intended to build trust amongst people? On its face, it does not. It sounds more like a system built to control people. And systems built to control people, at their heart, do not trust the people they are trying to control. For if the people could be trusted to make decisions that were good for society, why would such a system be needed in the first place? So, fundamentally, can the CCP build trust with its citizens by taking an action that loudly tells them they are not trusted? Will a system built on distrust foster trust amongst the populace? Or will it signal to the entire populace that their fellow citizens, neighbors, friends, and family members might not be trustworthy? Instead of promoting trust within society, it is very possible that China’s social credit system will actually further erode it.

References

Ohlberg, Mareike, Shazeda Ahmed, Bertram Lang. “Central Planning, Local Experiments: The complex implementation of China’s Social Credit System.” Mercator Institute for China Studies, December 12, 2017: https://www.merics.org/sites/default/files/2017-12/171212_China_Monitor_43_Social_Credit_System_Implementation.pdf

Mistreanu, Simina. “Life Inside China’s Social Credit Laboratory.” Foreign Policy, April 3, 2018: https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory/

Greenfield, Adam. “China’s Dystopian Tech Could Be Contagious.” The Atlantic, February 14, 2018: https://www.theatlantic.com/technology/archive/2018/02/chinas-dangerous-dream-of-urban-control/553097/

Bad Blood: Trusting Numbers with a Grain of Salt

by Amy Lai

Digital health may be well on its way toward becoming the next “it” trend in technology. Over the past few years, the presence of consumer health technology companies has boomed. In 2010, digital health companies received roughly $1 billion in total investment funding, a less-than-hefty amount compared to other sectors (1). However, fast-forward just six years, and that investment had grown to more than eight times that amount (1). That’s right. In 2016, digital health companies received nearly $8.1 billion in investment funding (1), with significant investments in wearable and biosensing technology (2), a move that perhaps echoes the increasing promise of digital healthcare.


Health investment categories

Indeed, the time seems ripe for a long-overdue revolution of traditional healthcare. With an ever-growing pool of data about our lifestyles captured through our smartphones, social media accounts, and even online shopping preferences, coupled with rapid advances in computing power and recommendation systems, it seems like technology is at the cusp of transforming how we think, perceive, and quantify our health. And we’re just starting to see its effects…and consequences.

Fitness trackers such as Fitbit and health-tracking apps like Apple HealthKit quantify an impressive range of our physical health. From our weight to the number of steps we take, the flights of stairs we climb, the calories we burn, and the duration and quality of our sleep, there are increasingly more tools to track nearly every aspect of our lives (3). Anyone else also sleep for 7 hours, 18 minutes last night? As we curiously scroll through the colorful line graphs and bar charts that show our activity levels, have you ever wondered whether we can fully trust these metrics? How accurate are the numbers?

If a fitness app recorded that you burned 100 calories when you actually burned 90, how upset would you be? Probably not too upset, because mistakes happen. However, if you learned that a medical device determined that you had diabetes when you really didn’t, how distraught would you be now? Most likely more than a little. Notice the difference? Depending on context, consumers appear to have different expectations of health-related product efficacy and tend to place greater trust in certain types of products, such as medical devices. Although somewhat anticlimactic, results from medical devices should also warrant some skepticism, as they can (and do) produce measurements that go wrong…and in some cases, very wrong.

Founded in 2003, Theranos was touted as a revolutionary breakthrough in the blood-testing market. The company claimed to reinvent how blood testing worked by introducing a “proprietary technology” that purportedly could detect numerous medical conditions, from high cholesterol to cancer, using a finger pinprick that needed only 1/100 to 1/1,000 of the amount of blood required by standard blood-testing procedures (4). Theranos seemed unstoppable. Valued at $10 billion, the company raised more than $700 million in venture capital and partnered with national pharmacy chains including Walgreens and Safeway to open testing clinics for consumers and patients (4). However, the company quickly unraveled as its product turned out to be nothing more than a facade. After probing by the US Food and Drug Administration, Securities and Exchange Commission, and Centers for Medicare and Medicaid Services, the “proprietary technology” was found to be underdeveloped, reporting blood-test results with marked error (4). Some consumers worried about supposed new tumors while others celebrated their allegedly improved cardiac health by stopping medications (5). Theranos fooled us, and we (just might have) helped them do it.


Theranos

Theranos teaches us a subtle yet important lesson about privacy as contextual integrity. Because consumers don’t often question the efficacy of health-related products, it behooves corporate executives to scientifically and ethically validate their products. It’s important that such integrity play a key role in organizational culture and be embedded at all management levels, to keep business leaders in check and minimize consumer harm. Doing so helps prevent violations of consumer expectations and gives consumers a reason to continue placing their trust in products. At the same time, health-related products are not perfect or infallible. Because products inevitably have some margin of error, it also behooves consumers to understand that product metrics may not represent the whole truth and nothing but the truth. Those numbers aren’t likely to be wholly correct. It’s essential that we adopt a more realistic set of expectations about health-related products, and a healthier level of skepticism, the next time we’re told we burned only 10 calories or that only a few droplets of blood are needed to detect cancer.

These shifts in the mindset and expectations of businesses and consumers may be needed to help keep both sides accountable to each other.

References:
1. https://www.forbes.com/sites/forbestechcouncil/2017/05/05/why-digital-health-startups-have-yet-to-reach-unicorn-status/#3b5f23188cdb
2. https://rockhealth.com/reports/q1-2017-business-as-usual-for-digital-health/
3. https://www.nytimes.com/2017/12/26/technology/big-tech-health-care.html
4. https://www.vanityfair.com/news/2016/09/elizabeth-holmes-theranos-exclusive
5. https://www.wsj.com/articles/the-patients-hurt-by-theranos-1476973026

Social Credit: a Chinese experiment

by Yang Yang Qian

Imagine applying for a loan, but first the bank must check your Facebook profile to build a credit report. As odd as that sounds to consumers in the United States, for consumers in China this is already part of an experiment with social credit.

The Chinese government has plans to implement a Social Credit System by 2020: a big data approach to regulating the behavior of individuals, companies, and other institutions such as NGOs. Essentially, under the Social Credit System, a company or individual would be given a set of ratings that summarizes how well behaved they are in various categories, such as propensity for major credit offenses. The platform is intended to aggregate a huge amount of data about companies and individuals, including general information, compliance records, and even real-time data where possible. Eventually, the system will span both government data sources and commercial ones. If the platform can be implemented successfully, it should strengthen the Chinese government’s ability to enforce regulations and policies. For now, the system is not yet in place. Instead, the government has licensed private companies and some municipal governments to build their own social credit systems as pilot programs. One of the higher-profile projects is Alibaba’s Sesame Credit.

As individual consumers in the United States, many of us are used to having personal credit scores. The Social Credit System, however, looks to be much more comprehensive. One key difference is that its scope intends to cover all “market participants”: both individuals and companies are subject to it. For instance, some of the more ambitious objectives aim to track polluting companies through their real-time emissions records. Moreover, the stated focus of the system is to promote best practices in the marketplace. Proponents argue that such a system will help China overcome a multitude of societal ills: food safety scandals, public corruption, and tax evasion.

But on the other side of the coin, there are fears that such a system could be used as a mass disciplinary machine targeted at the citizenry. A good rating might allow users to borrow favorably on credit or find a good deal through Alibaba’s hotel partners. A bad rating might bar them from traveling. For instance, nine million low-score users were barred from buying domestic plane tickets. With these risks of material harm in mind, some have voiced fears that certain activities might be promoted or punished, a sort of subtle social coercion. Part of the problem is that Alibaba isn’t too clear about which specific actions will be punished. On the one hand, it has released some high-level descriptions of the categories it scores: credit history, online behavior, ability to fulfill contracts, personal profile completeness, and interpersonal relationships. On the other hand, the BBC reported that Sesame Credit makes no secret that it will punish specific online behaviors:

“Someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility,” Li Yingyun, Sesame’s technology director told Caixin, a Chinese magazine, in February.

Perhaps Sesame Credit just used this as an evocative example, or perhaps they meant it in all earnestness. In any case, the fact that a large private conglomerate, with encouragement from a government, is essentially piloting an opaque algorithm to enforce good behavior did not sit well with some human rights watch groups. And rather alarmingly, some of the current scoring systems supposedly also adjust an individual’s score based on the behavior of their social circle. This might encourage the use of social pressure to turn discontents into compliant citizens. Are we looking at the prototype for a future government social credit system that will leverage social pressure for mass citizen surveillance? Some sort of Scarlet Letter meets Orwellian dystopia?

Wait. There is probably too much alarmist speculation about the Social Credit System in Western media right now. As usual, there is a lot of nuance and context surrounding this experiment. After all, the large central system envisioned by Beijing is not yet implemented. The social credit platforms that do exist are separate pilots run either by local municipal governments or by private companies like Alibaba and Tencent. We should also keep in mind that the current Sesame Credit system, along with its peculiarities, is designed to reward loyal Alipay users rather than measure some abstract “citizen trustworthiness”. In Chinese media, citizens seem generally to see the need for a social credit system. Additionally, there is an active media discussion within China about specific concerns, such as the risk of privacy invasions by the companies that host the data, or what kinds of data should be used to calculate the scores. It remains to be seen whether the central government system will adopt any features of these pilot programs, and how much leeway it will allow those companies to continue this experiment.

Alternative measures of credit risk

by Simon Hodgkinson

People in developing economies can increasingly use their private information as a way to secure credit, but is this a good thing?

Easy access to credit is essential to the proper functioning of many high-income economies. Governments, corporations, and individuals all rely on the ability to borrow money. Lenders offer credit based upon verifiable income, expenses, and a person’s pattern of previous loan repayments.

Unfortunately, this system doesn’t fit the circumstances of many people in developing economies, who tend not to have bank accounts or a history of formal borrowing. This means that they are effectively excluded from getting credit, and may miss out on economic opportunities as a result.

In an attempt to address this, recent research has focused on identifying behaviors (other than loan repayment) that might provide an alternative way to predict someone’s creditworthiness.


“You call every day to check up on me. You are so responsible….”

A technological solution

People in developing economies may have had limited interactions with the formal banking system, but they generally have a long history with their cell phone operators. For example, in Ghana, only 40% of people have a bank account, but 83% of adults own a cell phone.

This provides a useful opportunity, because cell phone operators collect a rich data set that provides remarkable insight into many aspects of our lives. They know all the locations that we visit, when we go there and for how long, who we communicate with, how quickly those people respond to our messages, and so on.

Armed with this data, machine learning researchers have generated new insights that can outperform traditional credit scores. For example, it turns out that the size and strength of someone’s social network (as indicated by call records) is a good predictor of how likely they are to repay a loan. Another strong indicator is mobility – people who visit three or more locations every day have better repayment rates than those who stay at home or visit only one other location.
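To make that concrete, here is a hedged sketch of the kind of feature extraction this research describes; the record format, field names, and features are assumptions for illustration, not any particular lender’s model.

```python
# Assumed record format and feature definitions, for illustration only;
# real lenders' models are proprietary and far richer.
from collections import defaultdict
from datetime import date

# Call-detail records as an operator might log them:
# (subscriber, other_party, day, cell_tower)
cdr = [
    ("user1", "friendA", date(2018, 7, 1), "tower_home"),
    ("user1", "friendB", date(2018, 7, 1), "tower_market"),
    ("user1", "friendA", date(2018, 7, 1), "tower_work"),
    ("user1", "friendC", date(2018, 7, 2), "tower_home"),
]

def features(records):
    contacts = {other for _, other, _, _ in records}   # social network size
    towers_by_day = defaultdict(set)
    for _, _, day, tower in records:
        towers_by_day[day].add(tower)                  # mobility proxy
    avg_daily_locations = sum(len(t) for t in towers_by_day.values()) / len(towers_by_day)
    return {"network_size": len(contacts),
            "avg_daily_locations": avg_daily_locations}

print(features(cdr))  # {'network_size': 3, 'avg_daily_locations': 2.0}
# Features like these would then feed a model (e.g. logistic regression)
# trained on past repayment outcomes to score new applicants.
```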

These new models of behavior have given some lenders the confidence to offer credit based upon access to the borrower’s cell phone data. This is an avenue that didn’t exist before, and it can be transformational for those who benefit from it.

There is also a benefit to the wider population. The money that people borrow usually supports their local economy. In addition, if lenders can accurately identify people who are unlikely to repay their loans, they are able to cut their overall costs and can afford to charge lower rates to the remaining pool of borrowers.

Paying with privacy?

Although there are clear benefits to these advances, borrowers should also think about the costs and potential risks.

People who want to borrow in this way must submit to extensive and potentially invasive collections of data. By installing a tracking app on their phones, they allow lenders to see not only where they work, but what time they show up, when they leave, and where they go afterwards. Lenders can track where they shop, where their kids go to school, and which of their contacts they are closest to.


“You can trust us with your information…”

Of course, cell phone providers already collect this data and therefore have the same insights. The question is whether people still view it as appropriate in the context of lending rather than the provision of cell phone service.

It is possible that customers are inured to widespread data collection, or that they view it as reasonable when compared to the benefits they gain by being able to borrow money. They may be assuming that their data is secure, and won’t be sold or misused.

Another drawback is that while machine learning techniques are very good at making predictions, they can be complex and suffer from a lack of transparency. This makes it difficult to challenge the outcome. One particular algorithm used by lenders takes account of over 5,000 distinct data points, which means that borrowers are unlikely to be able to identify and correct errors, or to understand exactly how their data is being used to arrive at a decision.

Finally, relying on cell phone data may increase the overall pool of potential borrowers, but it does so unequally. Going back to the example of Ghana, there is a gender gap of approximately 16% in cell phone ownership, so this method of lending may embed or reinforce other inequalities.

In summary, advances in technology are helping people gain access to credit in new ways. While this is a positive development, it comes with potential privacy risks, and further work is required to ensure that benefits are extended fairly to all groups.

California’s Trailblazing Consumer Privacy Law

by Anamika Sinha

I’m sure that when most of us stumble across a trivia question, the first place we turn is Google, the ultimate search engine. But have you ever wondered what exactly Google does with the data it gathers from the billions of searches it handles per day (about 3.5 billion, to be precise)? Well, that’s a question that not even the ultimate search engine will answer perfectly for you. In fact, Alastair Mactaggart, one of the main masterminds behind the new California privacy bill, once reminisced about a time when he bumped into a Google engineer at a cocktail party, where the engineer casually mentioned that if consumers had a glimpse into what the company knew about its users, they would be shocked. This ultimately gave Mactaggart a clear incentive to advocate for privacy rights for the general public, which resulted in a new piece of legislation known as the “California Consumer Privacy Act of 2018”.

Highlights of the Law: This law, passed by the California government on June 28, 2018, will be enforced starting January 1, 2020. It requires businesses to disclose all categories of data they have collected on a given user over the last twelve months, including the names of third-party entities with whom they have shared the data. It also requires businesses to offer users a simple, intuitive way to opt out of having their information stored and shared. It allows businesses to monetize aggregate data as long as individuals cannot be identified from it. Lastly, it allows businesses to charge a different price to customers who have opted out ONLY if they can prove that the difference in charge is related to the value provided by the consumer’s data. Compared to the GDPR initiative ratified by the European Union earlier this year, there are a lot of similarities. One key difference is that while GDPR levies huge fines on companies for non-compliance, the California law falls short, instead giving sweeping enforcement powers to the attorney general.

Reaction from Businesses: How do major businesses feel about having stricter privacy guidelines? While Facebook supposedly supports this law (truth be told, mainly because the alternative measure headed for the November California ballot was more onerous; the ballot proposal was polling at an 80% approval rate, and ballot measures are much harder to change than legislation), most other businesses in Silicon Valley and the nation in general seem opposed to the legislation.

Tech giants like Google and Uber were frustrated that they were not consulted and that such important legislation was passed in record time without proper deliberation of the pros and cons. Even if their concerns are somewhat valid, the reality is that they will have to make massive changes to support the law and risk a high percentage of their customers opting out of data collection and sharing. This puts their entire business model at risk.

Another major argument from opponents was that the privacy issue should be addressed by the US Congress, not by individual state governments. This leads to a pressing question: how will this law in the world’s fifth-largest economy affect the rest of the country? Due to many factors, it will likely push a significant number of companies to apply the same rules to all their customers. The expectation that businesses will filter through IP addresses in order to apply the law only to their California users is quite unreasonable, which means that users worldwide will benefit.

What’s Next? I’m sure nobody imagined that a Google employee’s words at a cocktail party would have such a large domino effect on our tech world, just as we can’t predict the true extent of the people this law will impact. But it’s safe to say that businesses will use their money and influence to orchestrate changes to the law. It’s hard to imagine that tech giants will sit still and allow the law in its current form to take effect. Regardless of what happens next, when you combine the Europe-led GDPR initiative with this California initiative, one can rest assured that the world wide web is about to undergo some major changes from the standpoint of privacy.

Venmo is Conditioning Your Expectation of Privacy

by James Beck

Add to your ever-growing list of apps and services to pay attention to: Venmo.

Venmo is an interesting service. Its core function is to quickly and conveniently facilitate transactions between you and your contacts. Need to pay your friends back for a night at the bar that got a little out of control? Splitting a check at a restaurant, but don’t want to ask the server to juggle awkward fractions to get the amounts correct? Or maybe you have more illicit purposes and would rather just not deal with cash.


Who carries cash anymore? Just Venmo me

Regardless of your usage, Venmo is a wildly convenient means of moving money around. Users are fairly easy to find and setting up your account with your bank account information or a credit card is fairly straightforward as well.

So what’s the catch?

Well, there doesn’t seem to be one – for now.

For a long time there has been something of an urban myth that Venmo makes money by micro-investing the cash that sits in its service while you wait for it to be transferred to your bank account (users must specifically request that Venmo transfer their balance to their account, so significant sums of money can be stuck in Venmo limbo for long durations). In reality, for a long time Venmo wasn’t generating much income for itself beyond the usual and expected credit card transaction fee.

The catch is that Venmo is following a model paved for it by many services before it: attract a ludicrous volume of users, generate information, and figure out later how exactly to capitalize on those users and their information.

Venmo has now begun to partner with businesses to allow users to pay for real goods and services directly through the application, rather than just serving as a means to pay your friends back for that late-night burrito at that cash-only place you always forget is cash only. Venmo plans to charge these businesses a transaction fee in exchange for the convenience of its service – the thought being that users have become so accustomed to the convenience of Venmo between peers that they will begin to expect that same payment convenience from businesses. This also feels fairly reasonable.

However, Venmo has another facet to its service that is worth stopping to consider.

In a way that feels oddly tacked on, Venmo also serves as a social media site of sorts. Transactions by default show up in a “news feed” style interface along with all of your contacts’ transactions. The amounts are hidden, but the participants and the user-entered descriptions of the transactions are visible. What you’re left with is a running history of who is paying whom, and for what.


Venmo’s Social Media Feed

It’s a strange and mostly benign feature. Transactions can be set to private, and even if you don’t keep things private, you still have the autonomy to choose the description of your transaction and keep it fairly innocuous.

What we should be concerned with, though, is how the addition of this social media dimension to a service that is just supposed to be a tool for monetary transactions is conditioning users for the future of Venmo. By making the social feed the default behavior of the application, Venmo is slowly normalizing sharing our transactions publicly. This has not traditionally been seen as “normal”.

Our credit card purchases have historically been seen as very private. However, now that we’ve normalized sharing payments between individuals will there be any protest when we start sharing our transactions with businesses by default? Will there be protest when Venmo starts using our past transactions to serve ads to us and our contacts? Or will we shrug our shoulders because the new de-facto business model is to attract users to a free or wildly inexpensive tool of convenience and then eventually introduce targeted advertisements based on our behavior with that service?

I fear it’s the latter, and you should too – we’ve normalized sharing so many details of our lives and in doing so have gradually eroded our expectations of privacy. The way you move your money around is about to become the next pool of big data to analyze, and the only fanfare to mark the occasion will be an update to a privacy policy that we’ll all forget we agreed to.

Drones: Privacy up in the air

by Elly Rath

Drones, or unmanned aerial vehicles (UAVs), are flying devices capable of collecting a vast array of information on a daily basis if configured and equipped correctly. The basic function of most drones is aerial videography and photography, but images are not the only data drones can gather. Equipped with an appropriate sensor, a drone can capture light, speed, sound, chemical composition, and a myriad of other information. Drones are now ubiquitous and, unlike in earlier days, no longer limited to military or police surveillance. They can be purchased online or at a local superstore as easily as any toy.

There are no laws controlling the data collection or restricting its usage. Any individual or private company with a properly equipped drone can collect and process huge amounts of potentially private information in a short time.

The privacy concerns surrounding drones affect both the government and civilians. On November 29, 2017, the NY Times reported that DJI, the market leader in drones, was fighting a claim by one United States government office that its commercial drones and software may be sending sensitive information about American infrastructure back to China. In this post I focus mostly on civilian privacy.

Drones are affordable, and the learning curve to operate one is mild; even a novice can master it in a few days. While there are currently laws to protect citizens against people stalking or spying on them in their homes, there are no federal laws that would protect individuals from being spied on specifically by a drone. A drone can fly overhead unnoticed, peer directly into someone’s house, or record activities on private property from the sky. Drone privacy regulation first surfaced in 2012, when the Federal Aviation Administration (FAA) was tasked with integrating drones and UAVs into US airspace. The FAA, however, failed to consider privacy. The Electronic Privacy Information Center (EPIC), a privacy and civil liberties nonprofit, along with 100 organizations, experts, and members of the public, has filed multiple petitions against the FAA since 2014. One of the petitions is still pending.

It is normal for citizens to expect a certain amount of privacy on their own property. In recent years, there have been multiple incidents of citizens suspecting they were being “watched” by someone operating a drone above their property. Sometimes it is simply a land survey company, but it is still unsolicited. In one instance, a man named William Merideth in Hillview, Kentucky shot down a drone hovering over his sixteen-year-old daughter, who was sunbathing in the garden. He was arrested for wanton endangerment and criminal mischief, but a Kentucky judge dismissed all charges, stating the drone was an invasion of his privacy.

With the multitude of images drones can now capture, companies have developed software that does the data analysis at the click of a button. Companies that once had difficulty making sense of the data now have algorithms to help them, and so the risks to privacy only increase with further technological advancement.

The first law regarding personal airspace above one’s property dates to 1946, when the Supreme Court ruled in United States v. Causby that a person’s property extends to 83 feet up in the air. The federal government prohibits the unauthorized use of drones above national parks, military bases, airports, and federal buildings. Civilian drones should fly at or below 500 feet, and the maximum speed limit is 100 miles per hour. But that is about all we have as far as federal law. Drone laws are now mostly covered by state laws and vary from state to state. The FAA has issued a fact sheet for state and local lawmakers to help in creating non-federal drone laws. Many of the state laws pertaining to drones relate to interfering with emergency measures, filming someone without their permission, or accident sites and crime scenes. Penalties range from fines to jail time, but again, such laws can be very difficult to enforce.

The lack of clear and standardized drone privacy laws is glaring when set against the more than 1 million FAA-registered drones (a count that excludes lightweight recreational drones). In 2017, Senator Edward Markey introduced a drone privacy bill that aims to create privacy protections and data reduction requirements for the information a drone collects, disclosure provisions for when data collection is happening, and warrant requirements for law enforcement.

On one hand, the drone advocacy group Small UAV Coalition, which represents companies like Google’s parent Alphabet and Amazon, wants lax laws; on the other, citizens want well-defined boundaries. Hopefully, a mutual middle ground will be found in the near future, offering increased innovation for businesses while retaining citizens’ rights.