“That’s A Blatant Copy!”: Amazon’s Anticompetitive Behavior and Its Impacts on Privacy

By Mike Varner | October 27, 2022

Amazon’s dual role as both marketplace and competitor has been under the microscope of European and American regulators for the past few years, and just recently the company attempted to skirt fines, unsuccessfully so far, by promising to make changes. In 2019, European regulators opened an investigation over concerns that the data Amazon collects on its merchants was being used to diminish competition by advantaging Amazon’s own private-label products [1].

What’s Private-Label?

Private-label refers to the common practice among retailers of distributing their own products to compete with other sellers. A 2020 Wall Street Journal investigation, based on interviews with more than 20 former private-label employees, found that Amazon uses merchants’ data when developing and selling its own competing products. This evidence runs contrary not only to the company’s stated policies but also to what its spokespeople attested in Congressional hearings. Merchant data helped employees know which product features were important to copy, how to price each product, and what profit margins to anticipate. In one example, former employees showed how Amazon used a merchant’s sales and marketing data for a car-trunk organizer to ensure that its private-label version would deliver higher margins [2].

[Image 1] Mallory Brangan www.cnbc.com/2022/10/12/amazons-growing-private-label-business-is-challenge-for-small-brands.html

Privacy Harms

Amazon claimed that employees were prohibited from using these data to develop private-label products and launched an internal investigation into the matter. The company said restrictions were in place to keep private-label executives from accessing merchant data, but interviews revealed that using these data was common practice and openly discussed in meetings. Even when the restrictions were enforced, managers would often “go over the fence” by asking analysts to create reports that divulged the information, or even to create fake “aggregated” data that secretly contained only a single merchant. These business practices are unfair and deceptive to merchants, as they are demonstrably at odds with the company’s written policies and its verbal assurances to Congress. The FTC should consider this in its ongoing investigations, since unfair and deceptive business practices are within its purview [3]. Other agencies such as the SEC are looking into the matter, and the USDOJ is investigating the company for obstruction of justice in relation to the 2019 Congressional hearings [4].

[Image 2] Nate Sutton, an Amazon associate general counsel, told Congress in July: ‘We don’t use individual seller data directly to compete’. [2]

Search Rank Manipulation

Amazon has made similarly concerning statements about its search-rank algorithm over the years. Amid internal dissent in 2019, Amazon changed its product-search algorithm to highlight products that are more profitable for Amazon. Internal counsel initially rejected a proposal to add profit directly into the search-rank algorithm, given the ongoing European investigations and concerns that the change would not be in customers’ best interest (a guiding principle for Amazon). Despite the explicit exclusion of profit as a variable, former employees indicated that engineers simply added enough new variables to serve as a proxy for it. Engineers ran A/B tests to measure how well candidate variables proxied for profit and, unsurprisingly, found a combination that worked. They effectively back-solved for profit using a variety of other variables, achieving the best of both worlds: the ability to “truthfully” state that the algorithm does not use profit, while ranking by a composite metric that is functionally equivalent. As one of many checks before any algorithm change, engineers were also explicitly prevented from including variables that decreased profitability metrics. So while it may be strictly true that the search-rank algorithm does not include profit, Amazon has optimized for profitability through a series of incentive structures [5]. Regulators should treat this as a deceptive practice in their ongoing investigations: merely excluding the profit variable obfuscates the lengths Amazon has gone to in order to achieve its desired outcome.
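This kind of back-solving can be sketched in a few lines. Everything below is hypothetical: the variable names, weights, and data are invented for illustration and have nothing to do with Amazon’s actual systems. The point is only that a composite of profit-correlated variables can track profit almost perfectly without profit ever appearing as an input:

```python
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 1000

# Hypothetical per-product signals an engineer is "allowed" to use.
price = [random.uniform(5, 50) for _ in range(n)]
private_label = [float(random.getrandbits(1)) for _ in range(n)]
ad_revenue = [random.uniform(0, 3) for _ in range(n)]

# The forbidden target: profit happens to be determined by those signals.
profit = [0.2 * p + 4.0 * pl + 1.5 * ar
          for p, pl, ar in zip(price, private_label, ad_revenue)]

# Successive candidate ranking scores, each "A/B tested" with one more variable.
score_v1 = price
score_v2 = [0.2 * p + 4.0 * pl for p, pl in zip(price, private_label)]
score_v3 = [0.2 * p + 4.0 * pl + 1.5 * ar
            for p, pl, ar in zip(price, private_label, ad_revenue)]

for name, score in (("v1", score_v1), ("v2", score_v2), ("v3", score_v3)):
    print(name, round(pearson(score, profit), 3))
```

Each successive score adds one more allowed variable, and the printed correlations climb toward 1.0, at which point ranking by the composite is indistinguishable from ranking by profit even though “profit” never appears as an input.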

[Image 3] Jessica Kuronen [5]

What’s Next?

Amazon attempted in July to end the European antitrust investigations by offering to stop collecting nonpublic data on its merchants [6]. Amazon hopes this concession will prevent regulators from issuing fines, but the proposal has been met with strong criticism from a variety of groups [7]. Its outcome is pending, and similar litigation continues: just last week, an antitrust claim was filed against Amazon in the UK over its “buy box” [8]. Meanwhile, Amazon has been considering getting out of the private-label business altogether due to lower-than-expected sales, and leadership has been slowly downsizing it over the past few years [9].

References:

  1. ec.europa.eu/commission/presscorner/detail/pl/ip_19_4291
  2. www.wsj.com/articles/amazon-scooped-up-data-from-its-own-sellers-to-launch-competing-products-11587650015?mod=article_inline
  3. www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127?mod=article_inline
  4. www.retaildive.com/news/house-refers-amazon-to-doj-for-potential-criminal-conduct/620246/
  5. www.wsj.com/articles/amazon-changed-search-algorithm-in-ways-that-boost-its-own-products-11568645345
  6. www.nytimes.com/2022/07/14/business/amazon-europe-antitrust.html
  7. techcrunch.com/2022/09/12/amazon-eu-antitrust-probe-weak-offer/
  8. techcrunch.com/2022/10/20/amazon-uk-buy-box-claim-lawsuit/
  9. www.wsj.com/articles/amazon-has-been-slashing-private-label-selection-amid-weak-sales-11657849612

Closing the digital divide – how filtered data shapes society.

By Anonymous | October 27, 2022

Photo by NASA on Unsplash

You send a text, like a post, make an online purchase, or check your blood pressure on your smart device. The data you produce from these activities shapes ideas, opinions, policies, and more. Given this tremendous influence on society, data ingested by any means must be representative of all populations.

In reality, as of 2018, more than 20% of people in the United States are without Internet access (FCC, 2020), and as of 2021, only 63% globally are accessing the Internet (Statista, 2021).

Data is at the heart of decision-making. Data scientists rely on data to inform decisions in many industries that shape our daily lives, from healthcare and fraud detection to logistics and transportation. With such influence over so many aspects of daily life, it’s critical that the technology we use be inclusive and that its data represent all populations. Unfortunately, to a certain degree, data scientists are operating with a filtered view of the world, based on data that is not representative of all populations.

Data scientists require data that will bring new insights to the world. Data points are carefully selected to feed algorithms that produce meaningful outcomes for society. The fundamental problem is that the technology creating the input data is not distributed equally among all populations, so the data it produces is exclusionary. Without fair representation, these populations have no chance of influencing societal decisions. Reasons include a lack of affordable digital technology, physical disabilities that make technology inaccessible, and environments without Internet access.

While rural areas lack Internet access, Tribal lands, in particular, are severely impacted by the digital divide. In a study published by the FCC, as of 2018, “22.3% of Americans in rural areas and 27.7% of Americans in Tribal lands lack coverage from fixed terrestrial 25/3 Mbps broadband, as compared to only 1.5% of Americans in urban areas.”

Encouraging support has come recently from the USDA and FCC. Since 2018, the USDA’s ReConnect Program has invested over $1 billion to expand high-speed broadband infrastructure in unserved rural areas and Tribal lands. The FCC’s Affordable Connectivity Program provides discounted Internet service to lower-income families.

Internet users in 2015 as a percentage of a country’s population

Source: International Telecommunication Union. Wikipedia.

Before data-driven decisions can be made, the data must be inclusive of all populations. We need to increase deployment of infrastructure to enable Internet access in underserved communities and increase access to technology for those who are without it. Below are a few volunteer resources if you’d like to help close the digital divide:

Photo by Alexander Sinn on Unsplash

References

  1. 2020 Broadband Deployment Report. Federal Communications Commission. FCC
  2. ReConnect Loan and Grant Program. U.S. Department of Agriculture. ReConnect
  3. List of countries by number of Internet users. Wikipedia
  4. Affordable Connectivity Program. Federal Communications Commission. FCC
  5. Percentage of global population accessing the internet from 2005 – 2021, by market maturity. Statista, 2022. Statista

Cops on your Front Porch

By Irene Shaffer | October 24, 2022

Some technology companies, including Amazon and Google, will share footage from smart doorbells with law enforcement without a warrant and without consent of the owner.

In case of emergency

In a letter to Senator Edward Markey earlier this year, Amazon confirmed that it has provided Ring doorbell footage to law enforcement on several occasions without consent from the owner of the doorbell and without a warrant. Amazon claims that such disclosures are made only when there is “a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay” (Amazon, 2022). However, this admission from Amazon created a whirlwind of news stories about the perceived invasion of privacy. Although this emergency usage of footage was in accordance with the privacy policy of the Ring device, it clearly violated users’ reasonable expectation of privacy for a device that they install at their home.

In this article, we explore two questions. First, should it ever be justifiable to share footage from a private recording device without consent and without a warrant? Second, what should a privacy-conscious user look for in the terms of service of a smart doorbell device?

Should video ever be shared without consent?

Amazon claims that it only shares videos in this way in the case of emergencies; however, there is no legal requirement for a company to disclose user data in the absence of a warrant or other court document requiring that the data be provided to law enforcement. In fact, several manufacturers of smart doorbells, including Arlo, Wyze, and Anker, have all confirmed that they will not share data without either consent or a warrant, even in the case of emergency (Crist, 2022). However, Google has the same policy as Amazon, although Google claims to have not yet shared any video footage in such an emergency situation (Crist, 2022).

Ideally, a company’s stance on this issue should be front and center in the privacy policy for the device so that no user is caught by surprise when their data is shared with law enforcement. Although Ring’s privacy policy does a good job of following the recommendations in the California Online Privacy Protection Act (CalOPPA) guidance, including “use plain, straightforward language” and “avoid technical legal jargon”, Ring’s published policy for sharing video with law enforcement is somewhat contradictory (California Department of Justice, 2014). In the first section, it states:

Ring does not disclose user information in response to government demands (i.e., legally valid and binding requests for information from law enforcement agencies such as search warrants, subpoenas and court orders) unless we’re required to comply and it is properly served on us.

However, in a later section titled “Other Information”, the emergency policy is specified:

Ring reserves the right to respond immediately to urgent law enforcement requests for information in cases involving imminent danger of death or serious physical injury to any person.

On the surface, this seems to contradict the first statement and may confuse users who do not read all the way to the bottom of the policy.

What should privacy-conscious consumers look for?

For users who are truly privacy conscious, the best solution is a closed-circuit camera that does not upload to the cloud. However, this is much less convenient than an internet-connected device. To ensure privacy while still benefiting from the convenience of the cloud, the ideal solution is an end-to-end encrypted service that allows only the owner of the doorbell to decrypt and view recordings. That way, the service provider cannot access the video, even in the case of an emergency or a court-ordered request from law enforcement. End-to-end encryption is available from many major smart-doorbell providers, including Ring, which has been rolling out the feature to its devices over the past year (Ring, 2022). Although Ring warns about the features that are lost when encryption is enabled, a user who wants control over their data should gladly accept this tradeoff.

References

Amazon. (2022, July 1). Amazon Response to Senator Markey. Retrieved from United States Senator for Massachusetts Ed Markey: www.markey.senate.gov/imo/media/doc/amazon_response_to_senator_markey-july_13_2022.pdf

California Department of Justice. (2014). Making Your Privacy Practices Public.

Crist, R. (2022, July 26). Ring, Google and the Police: What to Know About Emergency Requests for Video Footage. Retrieved from CNET: www.cnet.com/home/security/ring-google-and-the-police-what-to-know-about-emergency-requests-for-video-footage/

Ring. (2022). Ring Law Enforcement Guidelines. Retrieved from Ring: support.ring.com/hc/en-us/articles/360001318523-Ring-Law-Enforcement-Guidelines

Ring. (2022). Understanding Video End-to-End Encryption (E2EE). Retrieved from Ring: support.ring.com/hc/en-us/articles/360054941511-Understanding-Video-End-to-End-Encryption-E2EE-

Senator Markey’s Probe into Amazon Ring Reveals New Privacy Problems. (2022, July 13). Retrieved from United States Senator for Massachusetts Ed Markey: www.markey.senate.gov/news/press-releases/senator-markeys-probe-into-amazon-ring-reveals-new-privacy-problems

Spying on friends’ financial transactions, it’s easy with Venmo

By Anonymous | October 27, 2022

Venmo is a financial and social app, intended to let friends share funds while connecting socially with notes and emojis. This puts Venmo in a unique position: it must enable sharing of personal information when the user intends it to be shared and protect users’ privacy when they expect it not to be shared.

Venmo makes headlines with privacy violations

Venmo is no stranger to headlines about unintended privacy violations. One famous case is the exposure of President Joe Biden’s account: a BuzzFeed journalist was able to find it within 10 minutes of searching. It revealed Biden’s private social network and family relationships and raised security issues at the national level (BuzzFeed, 2021).

By default, when a user signs up for Venmo, their settings in the app are set to ‘Public – Visible to everyone on the internet’. Countless people assume their information is shared only with the friends they exchange funds with, and are unknowingly sharing it publicly. Additionally, per Venmo’s privacy policy, the company tracks a user’s geolocation, which may be used for several vaguely described purposes such as ‘advertising, .. or other location-specific content’ (Venmo privacy policy, 2022). This tracking can be turned off, but many unsuspecting users remain unaware of these options and assume their privacy is protected. Nissenbaum’s contextual integrity framework holds that information flows should respect the norms of the context in which the information was shared (Nissenbaum, 2011). By this standard, Venmo has failed to maintain contextual integrity, and the consequences can harm unsuspecting users.

How to change Privacy settings in Venmo

Changing the default Venmo settings is not difficult, but many users are unaware that their privacy is unprotected. To lock down an account:

  1. Open Venmo’s Privacy settings and change the audience to ‘Private’. This applies to all future transactions, but not to historical ones.
  2. In the ‘More’ section, change Past Transactions to Private as well.
  3. To stop Venmo from tracking geolocation at all times, go to ‘Location’, then ‘App location permissions’, then ‘Permissions’, then ‘Location permission’, and set the desired privacy level.

Venmo and the FTC

Venmo has already been sued by and settled with the FTC over charges that it failed to disclose information to consumers about fund transfers and privacy settings, in violation of the Gramm-Leach-Bliley Act (FTC, 2018). Venmo misrepresented that users’ financial accounts were protected by “bank grade security systems”. It sent users confirmations that money had been credited to their accounts when in fact it had not, and it did not disclose that it could cancel or freeze a transfer after a confirmation had been sent. As a result of being found in violation of the law, Venmo is subject to a third-party compliance assessment every other year for the next 10 years. A key point in the FTC’s principles is the recommendation that companies increase transparency about their data practices, which the ruling in this case reinforced (FTC, 2012).

Looking Forward

Venmo has made enhancements to its privacy policy and practices because of the exposure of violations that caused harm. The objective of this blog post is to raise awareness and to encourage checking the privacy policies and settings of any application where personal information can be shared unexpectedly, to the user’s detriment. Many companies are improving their attention to privacy, but as the case of Venmo shows, often not until an embarrassing security exposure or court case requires them to do so.

References

  1. Venmo privacy policy (2022, September 14). venmo.com/legal/us-privacy-policy/
  2. Mac, Ryan, Notopoulos, Katie, Brooks, Ryan, McDonald, Logan. (2021, May 14). We Found Joe Biden’s Secret Venmo. Here’s Why That’s A Privacy Nightmare For Everyone. BuzzFeed News. www.buzzfeednews.com/article/ryanmac/we-found-joe-bidens-secret-venmo
  3. Federal Trade Commission (FTC). (2018, February 27) PayPal Settles FTC Charges that Venmo Failed to Disclose Information to Consumers About the Ability to Transfer Funds and Privacy Settings; Violated Gramm-Leach-Bliley Act www.ftc.gov/news-events/news/press-releases/2018/02/paypal-settles-ftc-charges-venmo-failed-disclose-information-consumers-about-ability-transfer-funds
  4. Nissenbaum, Helen (2011). A Contextual Approach to Privacy Online. Daedalus 140:4.
  5. Federal Trade Commission (FTC). (2012, March). Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers. Section VI, Subsections C (“Simplified Consumer Choice”) and D (“Transparency”) discuss notice and consent.

Credit-Score Algorithm Bias: Big Data vs. Human

By Anonymous | October 27, 2022

Can Big Data eliminate human bias? The U.S. credit market shows minority groups disproportionately receiving unfavorable terms or outright denials on loan, mortgage, and credit card applications. These groups also tend to be subject to higher interest rates than their peers. Such decisions rely on the data available to lenders as well as their discretion, thus inserting human bias into the mix.

The reality is a stark contrast in access to credit for minorities, especially African Americans: higher interest rates on business loans, lower bank-branch density, fewer banking locations in predominantly Black neighborhoods, and, ultimately, stunted growth of local businesses in those areas. Several solutions have been proposed to tackle this issue. The U.S. government signed the Community Reinvestment Act into law in 1977, and initiatives such as African American minority depository institutions were put in place to increase access to banking for the underserved.

The ever-growing role of Big Data is an opportunity to remove prevalent selection biases from lending decisions. Nonetheless, the limitations of Big Data are becoming apparent, as these minority groups are still largely marginalized. Much of the existing machine-learning modeling places heavy emphasis on certain traits of a population to determine creditworthiness. Demographic characteristics such as education, location, and income are closely intertwined with a population’s profile, so human circumstances translate into the data in a particular way, are validated against outdated premises, and end up assigning different groups different weights.

Big Data could help eliminate human biases, but only if credit-score algorithms ensuring fair and non-biased decisions are put in place. Demographic features used in calculating creditworthiness measures such as FICO scores may be acceptable if the algorithm is fair, unbiased, and has undergone strict regulatory review. The key point is that a single standard should not be applied to populations of such differing makeup.

A credit score is the output of a mathematical model comparing an individual’s credit to that of millions of other people. It indicates creditworthiness based on a review of past and current relationships with lenders, aiming to provide insight into how an individual manages debt. Currently, credit-score algorithms weigh payment history (35%), credit utilization ratio (30%), length of credit history (15%), new credit (10%), and credit mix or types of credit (10%). The resulting number falls into one of five bands: very bad credit (300–499), bad credit (500–600), fair credit (601–660), good credit (661–780), and excellent credit (781–850).
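The weighting scheme above can be read as a simple weighted average. In the sketch below, the 0–100 component scale, the example component scores, and the linear mapping onto the 300–850 range are my own simplifications for illustration, not the proprietary FICO formula:

```python
# Illustrative weighted-average model of the factors listed above.
# Component scale (0-100) and the linear mapping to 300-850 are
# simplifications for this sketch, not the proprietary FICO formula.
WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": 0.30,
    "history_length": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

BANDS = [
    (300, 499, "very bad"),
    (500, 600, "bad"),
    (601, 660, "fair"),
    (661, 780, "good"),
    (781, 850, "excellent"),
]

def credit_score(components):
    """Map weighted 0-100 component scores onto the 300-850 range."""
    weighted = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)  # 0..100
    return round(300 + (weighted / 100) * 550)

def band(score):
    """Return the credit band a score falls into."""
    for lo, hi, label in BANDS:
        if lo <= score <= hi:
            return label
    raise ValueError(f"score out of range: {score}")

# Hypothetical consumer: strong payment history, high utilization.
example = {
    "payment_history": 90,
    "credit_utilization": 70,
    "history_length": 60,
    "new_credit": 80,
    "credit_mix": 50,
}
print(credit_score(example), band(credit_score(example)))  # 710 good
```

With these invented inputs the model lands at 710, in the “good” band. Real scoring models are far more complex, but the weights determine how much a change in any one component can move the result.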

An error in credit reporting, for instance, could inflict long-lasting damage on a person’s creditworthiness. This is particularly harmful to vulnerable groups, since the repercussions can last for years.


WARNING: Your Filter Bubble Could Kill You

By Anonymous | October 27, 2022

Filter bubbles can distort the health information accessible to you and result in misinformed medical decision making.

Filter bubbles are the talk of the town. Nowadays the political and social landscape seems to be increasingly polarized and it’s easy to point fingers at the social media sites and algorithms that are tailoring the information we all consume. Some of the filters we find ourselves in are harmlessly generating content we want to see – such as a new hat to go with the gloves we just bought. On the other hand, the dark side of filter bubbles can be dangerous in situations where personal health information is customized for us in a way that leads to misinformed decisions.

‘Filter bubble’ is a term coined by Eli Pariser in his book The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. He describes our new “era of personalization,” in which social media platforms and search engines use advanced models to craft the content they predict a user would like to see, based on past browsing history and site engagement. This virtually conjured world is intended to perfectly match a user’s preferences and opinions so that they will enjoy what they see and keep coming back, all to increase user traffic and pad the platforms’ bottom line.

An illustration of filter bubble isolation

The negative implications of filter bubbles have been extensively researched, especially as they relate to political polarization. A more serious consequence of tailored content and search results is when the filter interferes with the quality of health information produced. In his article The filter bubble and its effect on online personal health information, Harald Holone describes how the technology presents serious new challenges for doctors and their patients. People are increasingly turning to ‘Doctor Google’ or other social forums, instead of a licensed primary-care professional, to ask questions and inform decisions that will affect their health. Holone describes how the relationship between doctors and patients is shifting because people draw their own conclusions before setting foot in a doctor’s office, which can diminish physicians’ medical authority. The real issue occurs when people use their personalized feeds or biased public forums for guidance while still expecting objective results. As touched on before, search results can be heavily skewed by prior browsing history; there is no guarantee that what turns up is dependable for medical direction, and the results may present only a distorted version of reality. An effective example Holone uses is a parent deciding whether to have their child vaccinated. A publicized case is the 2014 measles outbreak in California, whose root cause many blamed on a spread of misinformation that lowered vaccination rates. Health misinformation can also extend to other decisions relating to cancer treatment, diets, or epidemic outbreaks.

There is no simple answer to this problem, because several challenges prevent people from accessing more trustworthy medical information. One is that the opacity of filter algorithms can leave people oblivious to the fact that what they see is only a sliver of the real world, not an objective view. It is also not evident to people where they fall within the realm of filter bubbles or in which direction their information is biased, especially if they have been in a silo for a long time. Another dilemma Holone presents is our lack of control over the content we see: even if we do become conscious of our bubble, many of the prevalent social media sites offer no way to intentionally shift back to center.

So what can we do to become objectively informed, especially when it comes to medical information? Several solutions have been proposed to help us regain our power. A browser tool named Balancer, created by Munson et al., tracks the sites you visit and provides an overview of your reading habits and biases to increase awareness. Another interesting tool that could help combat misinformation is a Chrome add-on called Rbutr, which informs users when the page they are viewing has been disputed or contradicted elsewhere. A simpler place to start could be deleting your browser history or using DuckDuckGo when searching for information that will inform health decisions.

The conversations surrounding filter bubbles tend to be political in nature, but the scarier reality is that these bubbles have graver implications: if you’re not paying attention, they, not you, can be what’s sneakily driving your medical decisions. Luckily there are steps that you can, and should, take so that you’re privy to all the information you need to make an informed decision in your own best interest.

Sources

www.ncbi.nlm.nih.gov/pmc/articles/PMC4937233/#R7

link.springer.com/article/10.1007/s10676-015-9380-y

www.wired.com/story/facebook-vortex-political-polarization/

fs.blog/filter-bubbles/


The search for the Holy Grail, privacy, profit & global good:  Patagonia

By Luis Delgado | October 27, 2022

As we focus on Mulligan, Koopman, et al. and their “analytic for mapping privacy,” it is easy to mistake the lofty goals of privacy for ideals incompatible with business, profits, and especially the most ambitious capitalistic goals. Yet one company has led the path of doing good while breaking its own revenue records. Patagonia leads with a spirit of transparency and goodwill, not without fault, but with a clear desire to outweigh its negative effects on the earth. Despite inflationary pressure on consumer discretionary spending, its profits continue to rise. Here I explore the structure of Patagonia, how it could map to modern tech companies, and the careful balance of privacy with top and bottom lines (Mulligan, Koopman, et al., 2016).

Patagonia’s core values include:

  1. “Build the best product”
  2. “Cause no unnecessary harm”
  3. “Use business to protect nature”
  4. “Not bound by convention” (Patagonia.com/core-values, 2022)

Within these principles we see a clear venture into the execution of business and sales. How are these balanced with a desire to do good? Is that even possible?

Companies like Patagonia are not necessary; no one really needs expensive brand-name jackets and gear. They are created for the sake of making profits. The innovation comes in being clear about that goal while creating plans to balance its impact through innovation and measurable revenue directed toward doing “good”. The concrete actions Patagonia has taken include:

  1. Historically giving away 1% of sales to earth-preservation causes.
  2. Creating a trust with the sole intent of global protection.
  3. Transferring the founder’s 2% of voting stock to that trust.
  4. Donating the remaining 98% of shares to a nonprofit dedicated to environmental protection.
  5. Focusing on creating long lasting products.
  6. Creating channels for secondhand sales.
  7. Providing all products with lifelong repair support.

(Outside Magazine, Oct 2022)

These actions could easily hurt the company’s bottom line, but they are arguably exactly why its revenue has risen. The company’s honest execution, transparency, and focus away from profits have created a customer following and attracted waves of new business.

These principles mirror the underlying goals of the Federal Trade Commission’s vision for current and future legislation (FTC, Mar 2012). Pillars such as clarity of privacy practices, clear consent, and stronger transparency also guide Patagonia’s business model. It is no surprise that Patagonia’s privacy policy reflects these values: it includes clearly defined systems for information safeguarding, user correction, and customer choice (Patagonia.com/privacy-policy.html, 2022).

With this in mind, I attempt to create a parallel structure that utilizes “dimensions” (Mulligan, Koopman, Doty, Oct 2016), to describe company policy and execution in terms of privacy and sales:

  1. Object: What is privacy for? / Who does revenue benefit?
  2. Justification: Why should this be private?  /  Why care about the environment and customers?
  3. Contrast concept: What is not private?  /  What unavoidable harm does the company produce?
  4. Target: What is privacy about and of what?  /  Besides top and bottom line, who and what benefits from revenue?
  5. Action / Offender: What/who defines a privacy violation?  /  What does a failure to act ethically look like?  To customers?  Towards global preservation?

If Patagonia can essentially reinvent itself around the earth and its customers and be incredibly successful, could that create a framework for other companies?  Could it become a framework for legislation at the micro and macro level?  I believe it can, and the many companies already pivoting in this direction are making the case.  If these companies can shift the status quo of what it means to be a profitable corporation, could that influence the greater public to not only endorse, but expect this from modern business?

The study of clear-cut ethical privacy practices can often seem overwhelming to execute and enforce, but examples like Patagonia show us that it is not only very much doable, but can greatly benefit not just the environment a company affects, but the company itself.

References:

  1. Mulligan, D., Koopman, C., Doty, N. (October 2016), Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy.  The Royal Society Publishing.
  2. Federal Trade Commission (March 2012), Protecting Consumer Privacy in an Era of Rapid Change.
  3. Peterson, C. (October 2022), Patagonia Will Donate $100 Million a Year to Fight Climate Change. What Can That Money Accomplish?, Outside Magazine.  www.outsideonline.com/business-journal/brands/patagonia-new-corporate-structure-analysis/
  4. Patagonia’s privacy policy (October 2022), www.patagonia.com/privacy-policy.html
  5. Patagonia’s core values (2022), www.patagonia.com/core-values/

 

We are the Product

We are the Product
By Anonymous | October 24, 2022

E-commerce companies want to sell you something, but what they really want as a bonus is your digital persona. E-commerce websites have made a fortune over the past two years, with year-over-year growth above 20% since 2020, and that growth is only expected to continue. With so much potential revenue floating around, it’s no wonder that e-commerce websites spend millions analyzing customer behavior to find the secret recipe that leads a user to make a purchase.

After establishing a product flow and generating some revenue, the next step for e-commerce companies is tracking user behavior. Understanding why some people make purchases, and using that data to influence those who have not yet purchased, is the fastest way to grow revenue. A few companies are big in this space, such as Google Analytics, Pendo, Mixpanel, and Amplitude, but the one this blog will focus on is Heap. The core feature of Heap is its “Autocapture” technology, which Heap advertises as follows: “Heap provides the easiest and most comprehensive way to automatically capture the user interactions on your site, from the moment of installation forward. A single snippet grabs every click, swipe, tap, pageview, and fill — forever.” These data give e-commerce companies the ability to analyze their user base and understand how to better advertise and market themselves, increasing revenue by nudging non-converted visitors to become buyers. This in turn reveals a third source of revenue for these companies: the user behaviors themselves.

Here’s the basic flow of how Heap works. First, you sign up to get access to your company’s environment. After signing up, you’re given a random “environment_id”, an integer value, which you use with Heap’s javascript snippet to push data from your website to the Heap platform. Once you’ve installed the snippet (it takes two seconds), you’ll start to see your users flood into the Heap system.

From here you can define any particular event you want to analyze more carefully. Want to see how many users looked at a pop-up on your site before making a purchase? Easily done. How many users arrived from search engines like Google or Bing? Easily done. The geographic locations of your users? Also easily done. The tool is a one-stop shop for almost all of your analytical needs, letting you understand your users and then take actions to nudge them toward a particular behavior. For example, say you notice in the Heap data that users exposed to a certain pop-up are 90% more likely to make a purchase through it. Now you’ll show that pop-up to all of your users and see how much more revenue it generates. All of this is done with ease, and that’s what makes the tool both incredibly powerful and terrifying.
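To make concrete the kind of funnel analysis a tool like Heap automates, here is a toy Python sketch. The event names and users are invented for illustration; this is not Heap’s API, just the underlying arithmetic of comparing conversion rates between exposed and unexposed users:

```python
from collections import defaultdict

# Toy event log: (user_id, event) pairs. Names are hypothetical.
events = [
    ("u1", "viewed_popup"), ("u1", "purchase"),
    ("u2", "viewed_popup"),
    ("u3", "purchase"),
    ("u4", "viewed_popup"), ("u4", "purchase"),
    ("u5", "pageview"),
]

def conversion_rates(events, exposure_event):
    """Purchase rate for users who saw an event vs. users who did not."""
    seen = defaultdict(set)
    for user, event in events:
        seen[user].add(event)
    exposed = {u for u, evts in seen.items() if exposure_event in evts}
    unexposed = set(seen) - exposed
    def rate(group):
        return sum("purchase" in seen[u] for u in group) / max(len(group), 1)
    return rate(exposed), rate(unexposed)

exposed_rate, baseline_rate = conversion_rates(events, "viewed_popup")
print(exposed_rate, baseline_rate)  # 2/3 of exposed users buy vs. 1/2 of the rest
```

With real autocapture data this comparison runs over millions of sessions, which is exactly what makes the resulting nudges so effective.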

The privacy and ethical concerns around these companies (Google Analytics, Pendo, Mixpanel, Amplitude, and Heap) are numerous. First, when visiting company websites that use these tracking services, the mention of them is often buried in the cookie consent banner, and there is often so much information in these banners that it’s hard to know what each script is doing. Second, there’s no option to opt out of having these companies track your usage of the site unless you use an ad-blocker, which was found to prevent these companies’ snippets from running. Third, without reading these companies’ privacy policies it’s hard to know what the collected data could be used for; the policies are often vague and left open to interpretation. The ethical implications revolve around consent: whether the user, as a data subject, gave adequately informed consent to being analyzed in this way later on.

All of these details point to the new future for e-commerce companies, where the product they’re selling is actually secondary to what they desire. What they really want is our data interacting with their website. All of those clicks, scrolls, and form fill-ins are valuable to them for marketing the next product and idea for us to purchase. In this ever-expanding advertising world, we must stay wary that the product we are purchasing doesn’t come at the cost of our digital personas.

 

Data Privacy in the Metaverse: Real Threats in a Virtual World

Data Privacy in the Metaverse: Real Threats in a Virtual World
By Alexa Coughlin | October 24, 2022

The metaverse promises to revolutionize the way we interact with each other and the world – but at what cost?

[Image] media.cybernews.com/images/featured/2022/01/metaverse-featured-image.jpg

Imagine having the ability to connect, work, play, learn, and even shop all from the comfort of your own home. Now imagine doing it all in 3D with a giant pair of goggles strapped to your face. Welcome to the metaverse.

Dubbed ‘the successor to the mobile internet’, the metaverse promises to revolutionize the way we engage with the world around us through a network of 3D virtual worlds. While Meta (formerly Facebook) is leading the charge into this wild west of virtual reality, plenty of other companies are along for the ride. Everyone from tech giants like Microsoft and NVIDIA, to retailers like Nike and Ralph Lauren is eager to try their hand at navigating cyberspace.

[Image] cdn.searchenginejournal.com/wp-content/uploads/2022/02/companies-active-in-the-metaverse-621cdb2c5fb11-sej-480×384.jpeg

Boundless as the promise of this new technology may seem, there’s no such thing as a free lunch when it comes to an industry where people (and their data) have historically been the product. The metaverse is no exception.

Through the core metaverse technologies of VR and AR headsets, users will give companies like Meta access to new categories of personal information that have historically been extremely difficult, if not totally impossible, to track. Purveyors of this cyberworld will have extremely sensitive biometric data at their fingertips: facial expressions, eye movements, and even a person’s gait. Medical conditions, emotions, preferences, mental states, even subconscious thoughts – it’s all there, inferences waiting to be extracted and monetized. Meta recently lost $10 billion in ad revenue to changes in Apple’s App Tracking Transparency feature, so it’s not hard to fathom what plans the company might have in store for this new treasure trove of prized data.

[Image] venturebeat.com/wp-content/uploads/2022/01/5-Identity-and-authentication-in-the-metaverse.jpg?fit=2160%2C1078&strip=all

Given the extremely personal and potentially compromising nature of this data, some users have already started thinking about how they might combat this invasion of privacy. Some have chosen to rely on established data privacy concepts like differential privacy (i.e. adding noise to different VR tracking measures) to obfuscate their identities in the metaverse. Others still have turned to physical means of intervention, like privacy shields, to prevent eye movement tracking.
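As a rough illustration of the differential-privacy idea mentioned above (adding calibrated noise to VR tracking measures), here is a minimal Python sketch of the standard Laplace mechanism applied to a hypothetical headset measurement. The measurement, sensitivity, and epsilon values are arbitrary choices for demonstration, not any product’s actual parameters:

```python
import math
import random

def add_laplace_noise(value, sensitivity, epsilon):
    """Laplace mechanism: perturb a measurement with noise of scale
    sensitivity/epsilon, sampled via inverse-CDF from a uniform draw."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    return value + (-scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)))

random.seed(0)  # deterministic for the example
true_ipd_mm = 63.0  # interpupillary distance: a fairly stable biometric
noisy_readings = [add_laplace_noise(true_ipd_mm, sensitivity=1.0, epsilon=0.5)
                  for _ in range(5)]
```

Smaller epsilon means more noise and stronger obfuscation, at the cost of a worse in-headset experience: precisely the trade-off users who adopt these tricks are accepting.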

Creative as these approaches to privacy might be, users should not have to rely on the equivalent of statistical party tricks or glorified sunglasses to protect themselves from exploitation in the metaverse. That said, given the extreme variance in robustness of data regulations across the world, glorified sunglasses may not in fact be the worst option for some. For example, while most of this biometrically derived data may classify as ‘special category’ under the broad categorization of the EU’s GDPR regulation, it may not warrant special protections under the narrower definition of Illinois’ BIPA (Biometric Information Privacy Act) for instance. And that’s to say nothing of the 44 U.S. states with no active data protection laws at all.

With the metaverse still in its fledgling stages, awaiting mass market adoption, it is crucial that regulators take this opportunity to develop new laws that will protect users from these novel and specific threats to their personal data and safety. Until then, consumer education on data practices in the metaverse remains hugely important. It’s essential that users stay informed about what’s at stake for them in the real world when they enter the virtual one.

Do the financial markets care about data privacy?

Do the financial markets care about data privacy?
By Finnian Meagher | October 24, 2022

While the long-term outperformance of large tech companies may indicate that privacy policy mishaps aren’t priced into stock prices, recent stock performances and relevant pieces of research suggest otherwise: companies may be forced by the demands of the market and shareholders into adopting more rigorous privacy practices.

The data privacy practices and policies of companies, especially large tech and social media organizations, are often scrutinized by the public, academics, and government regulators. However, as these companies have seen years of significant outperformance over the rest of the market, as exemplified by the below chart of the FAANG ETF (Facebook, Apple, Amazon, Netflix, and Google), it raises the question: do the financial markets care, and if so, what is the impact of negative press about privacy on market sentiment and stock prices?


Source: portfolioslab.com/portfolio/faang

Paul Bischoff of Comparitech finds a relationship between data breaches and falls in company share prices: an average share price fall of 3.5%, and underperformance of the NASDAQ by 3.5%, in the months following a data breach. Bischoff observes that “in the long term, breached companies underperformed the market,” with share prices falling 8.6% on average one year after a data breach. He also notes that more severe data breaches (i.e., leaking highly sensitive information) see “more immediate drops in share price performance” (Bischoff).
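Bischoff’s comparison boils down to simple relative-return arithmetic. Here is a hypothetical Python sketch (the prices are invented, not Bischoff’s data) of how underperformance versus an index is computed around a breach disclosure:

```python
def abnormal_return(stock_prices, index_levels):
    """Percentage return of the stock minus that of the index over the window."""
    stock_ret = (stock_prices[-1] / stock_prices[0] - 1) * 100
    index_ret = (index_levels[-1] / index_levels[0] - 1) * 100
    return stock_ret - index_ret

# Hypothetical daily closes around a breach (index held flat for clarity).
breach_stock = [100.0, 97.2, 96.5]        # stock slides 3.5% after the news
nasdaq = [13000.0, 13000.0, 13000.0]
print(round(abnormal_return(breach_stock, nasdaq), 1))  # prints -3.5
```

Averaging this quantity over many breached firms, and over windows of months rather than days, gives figures like the 3.5% underperformance Bischoff reports.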

However, not every data breach is punished equally. In one deviation from Bischoff’s observations, Facebook outperformed the NASDAQ after its April 2019 data breach of over 533 million people’s data.

Similarly, Harvard Business Review found that “a good corporate privacy policy can shield firms from the financial harm posed by a data breach… while a flawed policy can exacerbate the problems caused by a breach” (Martin et al.). HBR notes that firms with high control and transparency around data are “buffered from stock price damage during data breaches,” but “only 10% of Fortune 500 firms fit this profile” (Martin et al.). Additionally, data breaches at neighboring companies within an industry can depress a company’s own stock price, though firms with strong control and transparency weather those storms better than others.

More recently, Apple announced that its privacy features could amount to billions of dollars in costs, and the impact rippled out to neighboring competitors such as Twitter, Meta, Pinterest, and Snapchat. Mark Zuckerberg of Meta noted that Apple’s newly introduced privacy features “could cost… $10 billion in lost sales [to Meta] this year” (Conger and Chen), which contributed to a 26% drop in the company’s stock. Given that “people can’t really be targeted the way they were before” (Conger and Chen), companies like Meta will have to comprehensively rebuild their business plans, which could cause financial distress and ultimately push share prices down. While many factors in the general macroeconomic environment have contributed to this year’s pullback in large tech share prices, the implications of evolving privacy policies and practices have contributed to a sharp decline across the sector.

While the long-term outperformance and financial growth of large tech companies may lead one to take a cynical, capitalist view of ‘profit over anything’, even at the expense of user privacy, research in the space and the recent stock drops amid Apple’s change in privacy features show that maybe these tech giants are not immune. Taking the other view, as ‘money makes the world go around,’ if markets signal the need for stricter privacy practices and policies, even without regulatory pressure, then companies could be influenced toward a more comprehensive adoption of the principles of data privacy.

Works Cited: