“That’s A Blatant Copy!”: Amazon’s Anticompetitive Behavior and Its Impacts on Privacy
By Mike Varner | October 27, 2022

Amazon’s dual role as both a marketplace and a competitor has been under the microscope of European and American regulators for the past few years, and just recently the company attempted to skirt fines, unsuccessfully so far, by promising to make changes. In 2019, European regulators opened an investigation over concerns that the data Amazon collects on its merchants was being used to diminish competition by advantaging Amazon’s own private-label products [1].

What’s Private-Label?

Private-label refers to the common practice among retailers of distributing their own products to compete with other sellers. A 2020 Wall Street Journal investigation, based on interviews with more than 20 former private-label employees, found that Amazon used merchants’ data when developing and selling its own competing products. This evidence runs contrary not only to the company’s stated policies but also to what its spokespeople attested to in Congressional hearings. Merchant data helped employees know which product features were important to copy, how to price each product, and what profit margins to expect. In one case involving a car trunk organizer, former employees described how Amazon used the merchant’s sales and marketing data to ensure that its private-label version could deliver higher margins [2].

[Image 1] Mallory Brangan https://www.cnbc.com/2022/10/12/amazons-growing-private-label-business-is-challenge-for-small-brands.html

Privacy Harms

Amazon claimed that employees were prohibited from using these data in offering private-label products and launched an internal investigation into the matter. The company claimed there were restrictions in place to keep private-label executives from accessing merchant data, but interviews revealed that use of these data was common practice and openly discussed in meetings. Even when the restrictions were enforced, managers would often “go over the fence” by asking analysts to create reports that divulged the information, or even to create fake “aggregated” data that secretly contained only a single merchant. These business practices are unfair and deceptive to merchants, as they directly contradict the company’s written policies and its verbal assurances to Congress. The FTC should consider this in its ongoing investigations, as unfair and deceptive business practices are within its purview [3]. Other agencies such as the SEC are looking into the matter, and the USDOJ is investigating the company for obstruction of justice in relation to its 2019 Congressional testimony [4].

[Image 2] Nate Sutton, an Amazon associate general counsel, told Congress in July: ‘We don’t use individual seller data directly to compete’. [2]

Search Rank Manipulation

Amazon has made similarly concerning statements about its search rank algorithm over the years. Amid internal dissent in 2019, Amazon changed its product search algorithm to highlight products that are more profitable for Amazon. Internal counsel initially rejected a proposal to add profit directly into the search rank algorithm, amid ongoing European investigations and concerns that the change would not be in customers’ best interest (a guiding principle for Amazon). Despite the explicit exclusion of profit as a variable from the algorithm, former employees indicated that engineers would simply add enough new variables to proxy for profit. To test this, engineers ran A/B tests to measure how well candidate variables tracked profit, and unsurprisingly they found a combination that worked. In effect, they back-solved for profit using a variety of other variables, getting the best of both worlds: they could “truthfully” represent that the algorithm does not use profit while relying on an essentially equivalent composite metric. As one of the many checks prior to changing the algorithm, engineers were also prevented from including variables that decreased profitability metrics. So while it may be strictly true that the search rank algorithm does not include profit, Amazon has optimized for profitability through a series of incentive structures [5]. Regulators should treat this as a deceptive practice in their ongoing investigations, since simply not including profit as a named variable obscures the lengths Amazon has gone to in order to achieve the same outcome.
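
To make the back-solving idea concrete, here is a minimal, hypothetical sketch in Python (the variable names and data are invented; this is not Amazon’s code). If a profitability metric is well correlated with variables engineers are allowed to use, a simple regression over those “allowed” variables recovers a composite signal that ranks products almost exactly as profit would, even though profit itself never enters the ranker.

```python
# Hypothetical illustration of "back-solving" a profit proxy from allowed variables.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

# Invented stand-in features that a ranking engineer might be allowed to use.
margin_band = rng.normal(size=n)          # e.g., category-level margin tier
private_label = rng.binomial(1, 0.2, n)   # whether the listing is first-party
shipping_cost = rng.normal(size=n)

# Profit per sale is never fed to the ranker directly...
profit = 2.0 * margin_band + 1.5 * private_label - 0.8 * shipping_cost \
         + rng.normal(scale=0.1, size=n)

# ...but regressing profit on the allowed variables yields weights for a
# "profit-free" composite that orders products almost exactly as profit would.
X = np.column_stack([margin_band, private_label, shipping_cost])
proxy = LinearRegression().fit(X, profit)
print("R^2 of composite vs. profit:", round(proxy.score(X, profit), 3))
```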

[Image 3] Jessica Kuronen [5]

What’s Next?

In July, Amazon attempted to end the European antitrust investigations by offering to stop collecting nonpublic data on merchants [6]. Amazon hoped this concession would prevent regulators from issuing fines, but the proposal has been met with strong criticism from a variety of groups [7]. The outcome of that proposal is pending, and similar litigation continues: just last week, similar antitrust claims were filed against Amazon in the UK over its “buy box” [8]. Meanwhile, Amazon has been considering getting out of the private-label business due to lower-than-expected sales, and leadership has been slowly downsizing the private-label operation over the past few years [9].

References:

  1. https://ec.europa.eu/commission/presscorner/detail/pl/ip_19_4291
  2. https://www.wsj.com/articles/amazon-scooped-up-data-from-its-own-sellers-to-launch-competing-products-11587650015?mod=article_inline
  3. https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127?mod=article_inline
  4. https://www.retaildive.com/news/house-refers-amazon-to-doj-for-potential-criminal-conduct/620246/
  5. https://www.wsj.com/articles/amazon-changed-search-algorithm-in-ways-that-boost-its-own-products-11568645345
  6. https://www.nytimes.com/2022/07/14/business/amazon-europe-antitrust.html
  7. https://techcrunch.com/2022/09/12/amazon-eu-antitrust-probe-weak-offer/
  8. https://techcrunch.com/2022/10/20/amazon-uk-buy-box-claim-lawsuit/
  9. https://www.wsj.com/articles/amazon-has-been-slashing-private-label-selection-amid-weak-sales-11657849612

Closing the digital divide – how filtered data shapes society.
By Anonymous | October 27, 2022

Photo by NASA on Unsplash

You send a text, like a post, make an online purchase, or check your blood pressure on your smart device. The data you produce from these activities shapes ideas, opinions, policies, and more. Given this tremendous influence on society, data ingested by any means must be representative of all populations.

In reality, as of 2018, more than 20% of people in the United States are without Internet access (FCC, 2020), and as of 2021, only 63% globally are accessing the Internet (Statista, 2021).

Data is at the heart of decision-making. Data scientists rely on data to inform decisions in many industries that affect our daily lives, from healthcare and fraud detection to logistics and transportation. With such influence over so many aspects of daily life, it is critical that the technology we use be inclusive and that its data be representative of all populations. Unfortunately, to a certain degree, data scientists are operating with a filtered view of the world, based on data that is not representative of all populations.

Data scientists require data that will bring new insights to the world. Data points are carefully selected to feed algorithms that produce meaningful outcomes for society. The fundamental problem is that the technology creating the input data is not distributed equally among all populations, so the resulting data is exclusionary. Without fair representation, these populations have little chance of influencing societal decisions. Reasons for this include a lack of affordable digital technology, physical disabilities that make technology inaccessible, and environments without Internet access.

While rural areas lack Internet access, Tribal lands, in particular, are severely impacted by the digital divide. In a study published by the FCC, as of 2018, “22.3% of Americans in rural areas and 27.7% of Americans in Tribal lands lack coverage from fixed terrestrial 25/3 Mbps broadband, as compared to only 1.5% of Americans in urban areas.”

Encouraging support has come recently from the USDA and FCC. Since 2018, the USDA’s ReConnect Program has invested over $1 billion to expand high-speed broadband infrastructure in unserved rural areas and Tribal lands. The Affordable Connectivity Program provides discounted Internet services to lower-income families.

Internet users in 2015 as a percentage of a country’s population

Source: International Telecommunication Union. Wikipedia.

Before data-driven decisions can be made, the data must be inclusive of all populations. We need to increase deployment of infrastructure to enable Internet access in underserved communities and increase access to technology for those who are without it. Below are a few volunteer resources if you’d like to help close the digital divide:

Photo by Alexander Sinn on Unsplash

References

  1. 2020 Broadband Deployment Report. Federal Communications Commission. FCC
  2. ReConnect Loan and Grant Program. U.S. Department of Agriculture. ReConnect
  3. List of countries by number of Internet users. Wikipedia
  4. Affordable Connectivity Program. Federal Communications Commission. FCC
  5. Percentage of global population accessing the internet from 2005 – 2021, by market maturity. Statista, 2022. Statista

Cops on your Front Porch
By Irene Shaffer | October 24, 2022

Some technology companies, including Amazon and Google, will share footage from smart doorbells with law enforcement without a warrant and without consent of the owner.

In case of emergency

In a letter to Senator Edward Markey earlier this year, Amazon confirmed that it has provided Ring doorbell footage to law enforcement on several occasions without consent from the owner of the doorbell and without a warrant. Amazon claims that such disclosures are made only when there is “a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay” (Amazon, 2022). However, this admission from Amazon created a whirlwind of news stories about the perceived invasion of privacy. Although this emergency usage of footage was in accordance with the privacy policy of the Ring device, it clearly violated users’ reasonable expectation of privacy for a device that they install at their home.

In this article, we explore two questions. First, should it ever be justifiable to share footage from a private recording device without consent and without a warrant? Second, what should a privacy-conscious user look for in the terms of service of a smart doorbell device?

Should video ever be shared without consent?

Amazon claims that it only shares videos in this way in the case of emergencies; however, there is no legal requirement for a company to disclose user data in the absence of a warrant or other court document requiring that the data be provided to law enforcement. In fact, several manufacturers of smart doorbells, including Arlo, Wyze, and Anker, have all confirmed that they will not share data without either consent or a warrant, even in the case of emergency (Crist, 2022). However, Google has the same policy as Amazon, although Google claims to have not yet shared any video footage in such an emergency situation (Crist, 2022).

Ideally, a company’s stance on this issue should be front and center in the privacy policy for the device so that no user is caught by surprise when their data is shared with law enforcement. Although Ring’s privacy policy does a good job of fulfilling the recommendations in the California Department of Justice’s guidance on the California Online Privacy Protection Act (CalOPPA), including “use plain, straightforward language” and “avoid technical legal jargon,” Ring’s published policy for sharing video with law enforcement is somewhat contradictory (California Department of Justice, 2014). In the first section, it states:

Ring does not disclose user information in response to government demands (i.e., legally valid and binding requests for information from law enforcement agencies such as search warrants, subpoenas and court orders) unless we’re required to comply and it is properly served on us.

However, in a later section titled “Other Information”, the emergency policy is specified:

Ring reserves the right to respond immediately to urgent law enforcement requests for information in cases involving imminent danger of death or serious physical injury to any person.

On the surface, this seems to contradict the first statement and may confuse users who do not read all the way to the bottom of the policy.

What should privacy-conscious consumers look for?

For users who are truly privacy conscious, the best solution is a closed-circuit camera that does not upload to the cloud. However, this is much less convenient than an internet-connected device. To preserve privacy while still benefitting from the convenience of the cloud, the ideal solution is an end-to-end encrypted service that allows only the owner of the doorbell to decrypt and view recordings. That way, the service provider is unable to access the video, even in the case of an emergency or a court-ordered request from law enforcement. End-to-end encryption is available from many major smart doorbell providers, including Ring, which has been rolling out this feature to its devices over the past year (Ring, 2022). Although Ring warns about the features that are lost when encryption is enabled, a user who wants control over their data should gladly accept this tradeoff.
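
As a rough illustration of why end-to-end encryption takes the provider out of the loop, here is a minimal Python sketch using a symmetric key via the cryptography library. It is a simplification with invented names, not Ring’s actual protocol (real device E2EE schemes typically involve key exchange between enrolled devices):

```python
# Sketch: the camera encrypts footage with a key only the owner holds,
# so the cloud service stores ciphertext it cannot read or hand over.
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()   # stays on the owner's phone, never uploaded

# On the doorbell: encrypt the clip before it leaves the device.
camera = Fernet(owner_key)
ciphertext = camera.encrypt(b"raw video bytes for the 18:02 doorbell clip")

# In the cloud: only ciphertext is stored; the provider cannot decrypt it,
# even in response to an emergency or court-ordered request.
cloud_storage = {"clip_2022_10_24": ciphertext}

# Back on the owner's phone: decrypt locally with the owner-held key.
viewer = Fernet(owner_key)
plaintext = viewer.decrypt(cloud_storage["clip_2022_10_24"])
print(plaintext[:24])
```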

References

Amazon. (2022, July 1). Amazon Response to Senator Markey. Retrieved from United States Senator for Massachusetts Ed Markey: https://www.markey.senate.gov/imo/media/doc/amazon_response_to_senator_markey-july_13_2022.pdf

California Department of Justice. (2014). Making Your Privacy Practices Public.

Crist, R. (2022, July 26). Ring, Google and the Police: What to Know About Emergency Requests for Video Footage. Retrieved from CNET: https://www.cnet.com/home/security/ring-google-and-the-police-what-to-know-about-emergency-requests-for-video-footage/

Ring. (2022). Ring Law Enforcement Guidelines. Retrieved from Ring: https://support.ring.com/hc/en-us/articles/360001318523-Ring-Law-Enforcement-Guidelines

Ring. (2022). Understanding Video End-to-End Encryption (E2EE). Retrieved from Ring: https://support.ring.com/hc/en-us/articles/360054941511-Understanding-Video-End-to-End-Encryption-E2EE-

Senator Markey’s Probe into Amazon Ring Reveals New Privacy Problems. (2022, July 13). Retrieved from United States Senator for Massachusetts Ed Markey: https://www.markey.senate.gov/news/press-releases/senator-markeys-probe-into-amazon-ring-reveals-new-privacy-problems

Spying on friend’s financial transactions, it’s easy with Venmo
By Anonymous | October 27, 2022

Venmo is a financial and social app, intended to let friends share funds while connecting socially with notes and emojis. This puts Venmo in a unique position: it must enable sharing of personal information when the user intends it to be shared and protect the privacy of its users when the user expects it not to be shared.

Venmo makes headlines with privacy violations

Venmo is no stranger to news headlines about unintended privacy violations. One famous case is the exposure of President Joe Biden’s account. A BuzzFeed journalist was able to find it within 10 minutes of searching. It revealed Biden’s private social network and family relationships, and raised security issues at the national level (BuzzFeed, 2021).

By default, when a user signs up for Venmo, their settings in the app are set to ‘Public – Visible to everyone on the internet’. Countless people assume their information is shared only with the friends they exchange funds with, and unknowingly share it publicly. Additionally, per Venmo’s privacy policy, the company tracks a user’s geolocation, which may be used for several vaguely described purposes such as ‘advertising, .. or other location-specific content’ (Venmo privacy policy, 2022). This tracking can be turned off, but many unsuspecting users remain unaware of these options and assume their privacy is protected. Nissenbaum’s concept of contextual integrity holds that flows of information should respect the informational norms of the context in which the information was shared (Nissenbaum, 2011). By this standard, Venmo has failed to preserve contextual integrity, and the consequences can harm unsuspecting users.

How to change Privacy settings in Venmo

Changing the default Venmo settings is not difficult, but many users are unaware that their privacy is unprotected. To change the settings, a user needs to open Venmo’s Privacy settings and change the setting to ‘Private’. This applies to all transactions going forward but does not change the privacy settings of historical transactions. To change those, a user must take an additional step: open the ‘More’ section and change Past Transactions to Private as well. Additionally, if a user prefers that Venmo not track their geolocation at all times, they need to go to ‘Location’, then ‘App location permissions’, then ‘Permissions’, then ‘Location permission’ and change those defaults to the desired privacy setting.

Venmo and the FTC

Venmo has already been sued by and settled with the FTC for failing to disclose information to consumers about transferring funds and privacy settings, in violation of the Gramm-Leach-Bliley Act (FTC, 2018). Venmo misrepresented that users’ financial accounts were protected by “bank grade security systems”. It sent users confirmations that money had been credited to their accounts when, in fact, it had not, and did not disclose that it could cancel or freeze fund transfers after a confirmation had been sent. As a result of being found in violation of the law, Venmo is subject to a third-party compliance assessment every other year for the next 10 years. One key point in the FTC’s principles is the recommendation that companies increase transparency about their data practices, which is what the FTC supported in its ruling in this case (FTC, 2012).

Looking Forward

Venmo has made enhancements to its privacy policy and practices as a result of the exposure of violations that caused harm. The objective of this blog post is to raise awareness and to encourage checking the privacy policies and settings of applications where personal information can be shared unexpectedly and to the detriment of the user. Many companies are improving their attention to privacy, but as we can see in the case of Venmo, often not until an embarrassing security exposure or court case requires them to do so.

References

  1. Venmo privacy policy (2022, September 14). https://venmo.com/legal/us-privacy-policy/
  2. Mac, Ryan, Notopoulos, Katie, Brooks, Ryan, McDonald, Logan. (2021, May 14). We Found Joe Biden’s Secret Venmo. Here’s Why That’s A Privacy Nightmare For Everyone. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/we-found-joe-bidens-secret-venmo
  3. Federal Trade Commission (FTC). (2018, February 27) PayPal Settles FTC Charges that Venmo Failed to Disclose Information to Consumers About the Ability to Transfer Funds and Privacy Settings; Violated Gramm-Leach-Bliley Act https://www.ftc.gov/news-events/news/press-releases/2018/02/paypal-settles-ftc-charges-venmo-failed-disclose-information-consumers-about-ability-transfer-funds
  4. Nissenbaum, Helen (2011). A Contextual Approach to Privacy Online. Daedalus 140(4).
  5. Federal Trade Commission (FTC). (2012, March). Protecting Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers. (Section VI, Subsections C, “Simplified Consumer Choice,” and D, “Transparency,” discuss notice and consent.)

Credit-Score Algorithm Bias: Big Data vs. Human
By Anonymous | October 27, 2022

Can Big Data eliminate human bias? The U.S. credit market shows minority groups disproportionately receiving unfavorable terms or outright denials on their loan, mortgage, and credit card applications. These groups also tend to be subject to higher interest rates than their peers. Such decisions rely on the data available to lenders as well as on lender discretion, thus inserting human bias into the mix.

The reality is a stark contrast in access to credit for minorities, especially African Americans: higher interest rates on business loans, lower bank branch density and fewer banking locations in predominantly Black neighborhoods, and, ultimately, stunted growth of local businesses in those areas. Several solutions have been proposed to tackle this issue. The U.S. government signed the Community Reinvestment Act into law in 1977, and initiatives such as African American minority depository institutions were put in place to increase access to banking for the underserved.

The ever-growing role of Big Data is an opportunity to remove prevalent selection biases from lending decisions. Nonetheless, the limitations of Big Data are becoming apparent, as these minority groups remain largely marginalized. Specifically, many existing machine learning models place heavy emphasis on certain traits of the population to determine creditworthiness. Demographic characteristics such as education, location, and income are closely intertwined with a population’s profile. These human features are encoded in the data in particular ways and then validated against outdated premises, effectively scoring different groups with different weights.

Big Data could eliminate human biases by assigning different weights to different population categories. Credit-score algorithms that ensure fair and unbiased decisions should be put in place. In fact, demographic features used to calculate creditworthiness measures such as FICO scores may be beneficial if the algorithm used is fair, unbiased, and has undergone strict regulatory review. The key point is that there should not be a single standard applied to populations of such differing makeup.

A credit score is a mathematical model comparing an individual’s credit to that of millions of other people. It is an indication of creditworthiness based on a review of past and current relationships with lenders, aiming to provide insight into how an individual manages their debts. Currently, credit score algorithms leverage information on payment history (35%), credit utilization ratio (30%), length of credit history (15%), new credit (10%), and credit mix or types of credit (10%). The resulting number is then assigned to one of the following categories: very bad credit (300–499), bad credit (500–600), fair credit (601–660), good credit (661–780), and excellent credit (781–850).
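
As a purely illustrative sketch (the real FICO model is proprietary and is not a simple weighted average), the weights and score bands quoted above can be read roughly as follows in Python, with invented component ratings:

```python
# Illustrative only: weights and score bands are taken from the paragraph above.
WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": 0.30,
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def illustrative_score(components: dict) -> int:
    """Map component ratings (each 0.0 to 1.0) onto the 300-850 score range."""
    weighted = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
    return round(300 + weighted * (850 - 300))

def category(score: int) -> str:
    """Bucket a score using the ranges listed above."""
    if score >= 781:
        return "excellent"
    if score >= 661:
        return "good"
    if score >= 601:
        return "fair"
    if score >= 500:
        return "bad"
    return "very bad"

borrower = {"payment_history": 0.9, "credit_utilization": 0.7,
            "length_of_history": 0.5, "new_credit": 0.8, "credit_mix": 0.6}
score = illustrative_score(borrower)
print(score, category(score))   # 707 good
```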

Errors in credit reporting, for instance, can have a long-lasting negative effect on a person’s creditworthiness. This is particularly damaging to vulnerable groups, since the repercussions can last for years.

WARNING: Your Filter Bubble Could Kill You
By Anonymous | October 27, 2022

Filter bubbles can distort the health information accessible to you and result in misinformed medical decision making.

Filter bubbles are the talk of the town. Nowadays the political and social landscape seems to be increasingly polarized and it’s easy to point fingers at the social media sites and algorithms that are tailoring the information we all consume. Some of the filters we find ourselves in are harmlessly generating content we want to see – such as a new hat to go with the gloves we just bought. On the other hand, the dark side of filter bubbles can be dangerous in situations where personal health information is customized for us in a way that leads to misinformed decisions.

The ‘Filter Bubble’ is a term coined by Eli Pariser in his book The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. He describes our new “era of personalization,” in which social media platforms and search engines use advanced models to craft the content they predict a user would like to see, based on past browsing history and site engagement. This virtually conjured world is intended to perfectly match a user’s preferences and opinions so that they’ll enjoy what they’re seeing and keep coming back. All of this aims to increase user traffic and pad the platforms’ bottom lines.

An illustration of filter bubble isolation

The negative implications of filter bubbles have been extensively researched, especially as they relate to political polarization. A more serious consequence of tailored content and search results is when the filter interferes with the quality of health information produced. In his article The filter bubble and its effect on online personal health information, Harald Holone describes how the technology can present serious new challenges for doctors and their patients. People are increasingly turning to ‘Doctor Google’ or other social forums to ask questions and inform decisions that will affect their health, instead of consulting a licensed primary care professional. Holone describes how the relationship between doctors and patients is shifting because people are drawing their own conclusions before stepping foot in a doctor’s office, which can diminish the doctor’s medical authority. The real issue occurs when people start using their personalized feeds or biased public forums for guidance while still expecting objective results. As touched on before, the reality is that search results can be heavily skewed by prior browsing history; there is no guarantee that what turns up is dependable for medical direction, and the results may present only a distorted version of reality. An effective example he uses is a scenario in which a person is deciding whether or not to have their child vaccinated. A publicized case of this is the 2014 measles outbreak in California; many blamed the outbreak on the spread of misinformation that led to lower vaccination rates. Health misinformation can also extend to other decisions relating to cancer treatment, diets, or epidemic outbreaks.

There is no simple answer to this issue, because several challenges prevent people from accessing more trustworthy medical information. One problem is that the opaqueness of the filter algorithms can make people oblivious to the fact that what they are seeing is only a sliver of the real world and is not objective. It is also not evident to people where they fall within the realm of filter bubbles or in what direction their information is biased, especially if they have been in a silo for a long time. Another dilemma that Holone presents is our lack of control over the content we see. Even if we do become conscious of our bubble, many of the prevalent social media sites don’t offer a way to intentionally shift back to center.

So what can we do to become objectively informed, especially when it comes to medical information? Several solutions have been proposed to help us regain our power. Munson et al. created a browser tool named Balancer that tracks the sites you visit and provides an overview of your reading habits and biases to increase awareness. Another interesting tool that could help combat misinformation is a Chrome add-on called Rbutr, which informs a user if what they are currently viewing has been previously disputed or contradicted elsewhere. A simpler place to start could be deleting your browser history or using DuckDuckGo when searching for information that will be used for health decisions.

The conversations surrounding filter bubbles tend to be mainly political in nature, but the scary reality is that these bubbles have graver implications. If you’re not paying attention, they can be what’s sneakily driving your medical decisions, not you. Luckily there are steps that you can, and should, take so that you’re privy to all the information you need to make an informed decision in your own best interest.

Sources

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4937233/#R7

https://link.springer.com/article/10.1007/s10676-015-9380-y

https://www.wired.com/story/facebook-vortex-political-polarization/

https://fs.blog/filter-bubbles/

The search for the Holy Grail, privacy, profit & global good:  Patagonia
By Luis Delgado | October 27, 2022

As we focus on Mulligan, Koopman, et al. and their analytic for mapping privacy, it is easy to dismiss the lofty goals of privacy as ideals incompatible with business, profits, and especially the most ambitious capitalistic aims. Yet one company has led the way in doing good while breaking its own revenue records. It leads with a spirit of transparency and goodwill, not without fault, but with a clear desire to outweigh its negative effects on the earth. Despite inflationary pressure and pressure on consumer discretionary spending, those profits continue to rise. I explore the structure of Patagonia and how it could map onto modern tech companies and their careful balance of privacy against the top and bottom lines (Mulligan, Koopman, et al., 2016).

Patagonia’s core values include:

  1. “Build the best product”
  2. “Cause no unnecessary harm”
  3. “Use business to protect nature”
  4. “Not bound by convention” (Patagonia.com/core-values, 2022)

Within these principles we see a clear venture into the execution of business and sales. How are these balanced with a desire to do good? Is that even possible?

Companies like Patagonia are not necessary. No one really needs expensive brand-name jackets and gear; they are created for the sake of making profits for a company. The innovation comes in being clear about that goal while creating plans to balance the impact through innovation and measurable revenue directed toward doing “good.” The concrete actions Patagonia has taken include:

  1. Historically giving away 1% of profits to earth preservation causes.
  2. Creating a trust with the sole intent of global protection.
  3. Giving away the 2% share owned by the founder’s company.
  4. Giving away the remaining 98% of common shares to the same trust.
  5. Focusing on creating long lasting products.
  6. Creating channels for secondhand sales.
  7. Providing all products with lifelong repair support.

(Outside Magazine, Oct 2022)

These actions could easily have hurt the company’s bottom line, but they are arguably exactly why its revenue has risen. The company’s honest execution, transparency, and focus away from profits have created a customer following and attracted waves of new business.

These principles mirror the underlying goals of the Federal Trade Commission’s views for current and future legislation (FTC, Mar 2012).  Pillars such as clarity of privacy practices, clear consent, and stronger transparency guide Patagonia’s business model.  It is no surprise that Patagonia’s privacy policy also reflects these values.  It includes clearly defined systems for information safeguarding, user correction, and customer choice (Patagonia.com/privacy-policy.html, 2022).

With this in mind, I attempt to create a parallel structure that uses the paper’s “dimensions” (Mulligan, Koopman, Doty, Oct 2016) to describe company policy and execution in terms of both privacy and sales:

  1. Object: What is privacy for? / Who does revenue benefit?
  2. Justification: Why should this be private?  /  Why care about the environment and customers?
  3. Contrast concept: What is not private?  /  What unavoidable harm does the company produce?
  4. Target: What is privacy about and of what?  /  Besides top and bottom line, who and what benefits from revenue?
  5. Action / Offender: What/who defines a privacy violation?  /  What does a failure to act ethically look like?  To customers?  Towards global preservation?

If Patagonia can essentially reinvent itself with a focus on the earth and on customers and be incredibly successful, could that create a framework for other companies?  Could it become a framework for legislation at the micro and macro level?  I believe it can, and the case for it has been proven by many companies who are pivoting in this direction.  If these companies could shift the status quo of what it means to be a profitable corporation, then could this influence the greater public to not only endorse, but expect this from modern business?

I believe that the study of clear-cut ethical privacy practices can often seem overwhelming to execute and enforce, but examples like Patagonia show us that not only is it very much doable, it can greatly benefit not just the environment affected by a company, but the company itself.

References:

  1. Mulligan D., Koopman, C., Doty, N. (October 2016), Privacy is an essentially contested concept:  a multi-dimensional analytic for mapping privacy.  The Royal Society Publishing.
  2. Federal Trade Commission (March 2012), Protecting Consumer Privacy in an Era of Rapid Change.
  3. Peterson, C., (Oct 2022), Patagonia Will Donate $100 Million a year to Fight Climate Change. What Can that Money Accomplish?,  Outside Magazine.  https://www.outsideonline.com/business-journal/brands/patagonia-new-corporate-structure-analysis/
  4. Patagonia’s privacy policy (Oct 2022), https://www.patagonia.com/privacy-policy.html
  5. Patagonia’s core values (2022), https://www.patagonia.com/core-values/

We are the Product
By Anonymous | October 24, 2022

E-commerce companies want to sell you something, but they really want your digital persona as a bonus. E-commerce websites made a fortune in the past two years, with year-over-year growth above 20% since 2020, and it’s only expected to increase from here. With all of this potential revenue floating around, it’s no wonder that e-commerce websites spend millions analyzing customer behavior to figure out the secret recipe that causes a user to make a purchase.

After establishing a product flow and generating some revenue, the next step for e-commerce companies is tracking user behavior. Understanding why some people make purchases, and using that data to influence those who have not yet purchased, is the fastest way to grow revenue. A few products dominate this space, such as Google Analytics, Pendo, Mixpanel, and Amplitude, but the one this blog will focus on is Heap. The core feature of Heap is its “Autocapture” technology, which Heap advertises as follows: “Heap provides the easiest and most comprehensive way to automatically capture the user interactions on your site, from the moment of installation forward. A single snippet grabs every click, swipe, tap, pageview, and fill — forever.” These data give e-commerce companies the ability to analyze their user base and understand how to better advertise and market themselves, increasing revenue by moving non-converted visitors to become buyers. This in turn reveals a third source of revenue for these companies: the user behavior data itself.

Here’s the basic flow for understanding how Heap works. First, you sign up to get access to your company’s environment. After you sign up, you’re given a random “environment_id”, an integer value. You use this with their JavaScript snippet to push data from your website to the Heap platform. Once you’ve installed the Heap JavaScript snippet (it takes two seconds), you’ll start to see your users flood into the Heap system.

From here you can define any particular event you want to analyze more carefully. Want to see how many users looked at a pop-up on your site before making a purchase? Easily done. Want to see how many users came from search engines like Google or Bing? Easily done. Want to see the geographic locations of your users? Also easily done. Through this tool you get a one-stop shop for almost all of your analytical needs, letting you understand your users and nudge them toward a particular behavior. For example, say you notice in the Heap dataset that users exposed to a certain pop-up are 90% more likely to make a purchase from it. Now you’ll show that pop-up to all of your users and see how much more revenue it generates. All of this is done with ease, and that’s what makes this tool both incredibly powerful and terrifying.
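
To make the pop-up example concrete, here is a small Python sketch of the kind of analysis an analyst might run over autocaptured events. The event table and field names are invented for illustration and are not Heap’s actual export format:

```python
# Toy conversion-lift analysis over autocaptured click/view events.
import pandas as pd

# One row per (user, event) pair, as an autocapture tool might record.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 4, 4, 5],
    "event":   ["view_popup", "purchase", "pageview", "purchase",
                "pageview", "view_popup", "purchase", "pageview"],
})

# Which users saw the pop-up, and which eventually purchased?
saw_popup = events.loc[events["event"] == "view_popup", "user_id"].unique()
purchased = events.loc[events["event"] == "purchase", "user_id"].unique()

per_user = pd.DataFrame({"user_id": events["user_id"].unique()})
per_user["saw_popup"] = per_user["user_id"].isin(saw_popup)
per_user["purchased"] = per_user["user_id"].isin(purchased)

# Conversion rate by exposure group: the "lift" the marketing team acts on.
print(per_user.groupby("saw_popup")["purchased"].mean())
```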

The privacy and ethical concerns around these tools (Google Analytics, Pendo, Mixpanel, Amplitude, and Heap) are numerous. First, when visiting company websites that use these trackers, the mention of them is often buried in the cookie consent banner, and there is often so much information in these banners that it’s hard to know what each script is doing. Second, there’s no option to opt out of having these companies track your usage of the website, unless you’re using an ad blocker, which was found to prevent these companies’ snippets from running. Third, without reading the privacy policies of these companies, it’s hard to know what the collected data could be used for; the policies are often vague and left open to interpretation. The ethical implications revolve around consent, and whether the user, as a data subject, gave adequately informed consent to being analyzed in this way.

All of these details point to the new future for e-commerce companies, where the product they’re selling is actually secondary to what they desire. What they really want is our data as we interact with their websites. All of those clicks, scrolls, and form fill-ins are valuable to them for marketing the next product and idea for us to purchase. In this increasingly ad-driven world, we must stay wary that the product we are purchasing doesn’t come at the cost of our digital personas.

Fear? AI is coming for your career: is there a different point of view?
Anonymous | October 14, 2022

The robots are coming, and in just a few short years, everyone’s mundane job tasks will be taken by advancements in AI. Human-like intelligence required to do more complex tasks can now be given to AI.

As seen by many humans today, the rise of machines…

However, the history of AI is a history of automation, and that history makes such declarations of doom look a bit like farce. Many accounts emphasize the downsides without showing the whole picture, such as “A Short History of Jobs and Automation” by the World Economic Forum (WEF, Sep 3, 2020). Over past millennia, humans have invented numerous solutions to problems, many of them static in nature, which are easier to automate. We have never stopped working to automate dynamic systems as well, and AI is just the latest success in that category. I submit several exhibits from the past to support this view, and they suggest that AI is here to replace less creative jobs with ones that are more creative and intellectual, and more rooted in human culture. This point of view is well expressed in the McKinsey & Company blog post by Susan Lund and James Manyika (McKinsey, Nov 29, 2017).

History of automata

The ancient world saw more advancements than many people today realize. The Hellenistic Greek world, for instance, created prototypes to demonstrate basic scientific principles, including many mechanisms, hydraulics, pneumatics, and the programmable cart. The New Scientist article “The programmable robot of ancient Greece” by Noel Sharkey describes this in deeper detail (NS, July 4, 2007). The programmable cart was the first effort to mechanize something more dynamic in function, a mechanical beginning of sorts for AI. Further advancements in programmability took place around 150 BCE with the Antikythera mechanism, which calculated the positions of astronomical objects, something humans had not thought could be done with mechanical devices. The article in Smithsonian Magazine titled “Decoding the Antikythera Mechanism, the First Computer” by Jo Marchant gives greater insight into this device (SM, Feb 2015). While these automata remained demonstrations and curiosities for many centuries, they eventually led to the replacement of human jobs by innovations born of divergent, creative human endeavors. Ultimately these devices replaced jobs because it took much more labor to produce the same value by hand. At the same time, these inventions opened new opportunities for work, such as university professors, inventors, researchers, and the like: jobs that created devices to improve humanity’s daily life, with less convergent work and more room for the human mind to wander the immense open unknown and explore.

Ethical implications

Seeing that automation displaces jobs from one skill set to another, there is an ethical implication for companies and cultures to consider. This notion is well articulated in the MIT Sloan School of Management’s article “Ethics and automation: What to do when workers are displaced” by Tracy Mayor (MIT, July 8, 2019). While companies can adopt new automation technologies quickly, the labor force is not as quick to change and adapt. Because the adopting companies and technology providers capture capital and revenue at far larger magnitudes, an outsized share of the responsibility arguably rests on them. So should companies pay the bill for change, or should the workforce? Perhaps this calls for a deeper discussion of fairness and of how society shares these responsibilities.

Conclusion

Today, we can see that the ML/AI revolution is repeating history, and humans are again worried about the incoming change. This time the risks for business will be cultural, legal, and ethical, and the difference in our society may be more dramatic than ever before. Many economists suggest that a government jobs guarantee would help settle community uncertainty. Bill Gates has suggested that a robot tax could help offset job losses while providing the government revenue to support such guarantees. Perhaps the era of Star Trek is upon us: a utopian society where the machines do the mundane work of producing our needs and wants while we focus on human culture and meaning. But is the mundane meaningless? There are more questions to unpack, but it is easy to see that AI might just benefit us in the jobs of the future.

The Credit Conundrum
Anonymous | October 14, 2022

A person’s credit score is an important piece of personal data that lenders use to evaluate a borrower’s ability to pay back a loan (i.e., creditworthiness). The unfortunate reality is that most consumers don’t have a grasp of the nuances of the credit score model, the most prominent of which was developed by Fair Isaac Corp. (FICO). Credit scores can determine an individual’s ability to get a loan (e.g., auto loan, mortgage, business loan, or student loan), the interest rate associated with that loan, and the amount of deposit required for larger purchases. FICO categorizes estimated creditworthiness into the following ranges: Excellent: 800–850, Very Good: 740–799, Good: 670–739, Fair: 580–669, Poor: 300–579.

Features of the credit score algorithm [2]

Rarely does the average consumer understand the factors that affect their FICO credit score, and it’s quite possible that many consumers don’t know their FICO score or how to check it. The reality is that a credit score below 640 is usually considered “subprime” and puts the borrower in danger of falling into a debt trap. “Data shows that more than 1 in 5 Black consumers and 1 in 9 Hispanic consumers have FICO scores below 620; meanwhile, 1 out of every 19 white people are in the sub-620 category” [1]. Subprime borrowers frequently become the target of predatory lending, which only exacerbates the situation. A Forbes article by Natalie Campisi argues that the current credit scoring model has been shaped by a long history of discriminatory practices. Algorithmic bias in the credit industry was acknowledged in 1974, when the “Equal Credit Opportunity Act disallowed credit-score systems from using information like sex, race, marital status, national origin and religion” [1]. However, the newer credit score evaluation criteria do not take into account generations of socioeconomic inequity. Federal legislation has been passed in addition to the Equal Credit Opportunity Act to make the credit industry more transparent and equitable.

Despite these efforts at the federal level, the issue of algorithmic bias remains when credit agencies aggregate data points into individual credit scores. Generational wealth has passed disproportionately to white people, so the concept of creditworthiness should be reimagined with feature engineering for equity and inclusion. For example, “FinReg Labs, a nonprofit data testing center, analyzed cash-flow underwriting and the results showed that head-to-head it was more predictive than traditional FICO scoring. Their analysis also showed that using both the FICO score and cash-flow underwriting together offered the most accurate predictive model” [1].
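
As a hedged illustration of that head-to-head comparison (on synthetic data with invented feature names, not FinReg Labs’ actual analysis), the sketch below shows how one could measure whether a cash-flow signal, a traditional score, or their combination is most predictive of repayment:

```python
# Compare predictive power (AUC) of score-only, cash-flow-only, and combined models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
score = rng.normal(size=n)        # stand-in for a traditional credit score
cash_flow = rng.normal(size=n)    # stand-in for a cash-flow underwriting signal

# Assume (for this synthetic data) repayment depends on both signals.
p_repay = 1 / (1 + np.exp(-(0.8 * score + 1.2 * cash_flow)))
repaid = rng.binomial(1, p_repay)

def holdout_auc(features: np.ndarray) -> float:
    """Fit a logistic model and report AUC on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, repaid, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("score only:     ", round(holdout_auc(score.reshape(-1, 1)), 3))
print("cash flow only: ", round(holdout_auc(cash_flow.reshape(-1, 1)), 3))
print("combined:       ", round(holdout_auc(np.column_stack([score, cash_flow])), 3))
```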

Enhancing the fairness of the credit industry could prove pivotal to the advancement of disenfranchised communities. Credit scoring models ignore rental payment history even though they take mortgage payments into account, which prevents many otherwise creditworthy individuals from improving their credit score, given the massive gap in homeownership between white households (74.5% by the end of 2020) and non-white communities (44% by the end of 2020) [1]. The FICO credit scoring model has gone through many iterations, and a variant is used in about 90% of lending cases [2]. However, lenders may use different versions of the algorithm to determine loan amounts, interest rates, payback periods, and deposits, so there is a need for uniform credit standards across different lending opportunities to prevent lending bias. A recent Pew Research paper found that in New York City, over half of debt claims judgments and lawsuits affected individuals in predominantly Black or Hispanic communities, and 95% of the lawsuits affected people in low- to moderate-income neighborhoods [1]. “Using data that reflects bias perpetuates the bias, critics say. A recent report by CitiGroup states that the racial gap between white and Black borrowers has cost the economy some $16 trillion over the past two decades. The report offers some striking statistics:

● Eliminating disparities between Black and white consumers could have added $2.7 trillion in income or +0.2% to GDP per year.

● Expanding housing credit availability to Black borrowers would have expanded Black homeownership by an additional 770,000 homeowners, increasing home sales by $218 billion.

● Giving Black entrepreneurs access to fair and equitable access to business loans could have added $13 trillion in business revenue and potentially created 6.1 million jobs per year.” [1]

Data taken from this 2010 survey by the Federal Reserve. A more recent survey is available from the Urban Institute, although Asian-Americans aren’t included in their data. [6]

I’ve worked as a financial consultant for both Merrill Lynch Wealth Management and UBS Private Wealth Management, so I have first-hand insight into the credit conundrum. The credit industry could be enhanced through the development of structured lending products, and the bankers who develop those products should form criteria that account for both years of economic inequality and a reinterpretation of “creditworthiness.” Institutional banks should also hold financial literacy and credit workshops in disenfranchised communities and publish relevant content to help remedy the credit disparity. Clients who employ financial consulting services are educated on how to leverage the banking system to reach their financial goals, but the vast majority of the U.S. population doesn’t qualify for personalized financial services. These same financial services organizations nevertheless interface with the masses. Banks should cater to the masses to empower, rather than exploit, the proletariat through discriminatory or predatory lending practices.

References

[1] From Inherent Racial Bias to Incorrect Data—The Problems With Current Credit Scoring Models – Forbes Advisor

[2] Credit Score: Definition, Factors, and Improving It

[3] What are the Texas Fair Lending Acts?

[4] Credit History Definition

[5] Subprime Borrower Definition

[6] Average Credit Score by Race: Statistics and Trends