AI technology in my doctor’s office: Have I given consent?

By Anonymous | October 28, 2022

Should patients have the right to know what AI technologies are being used to provide their health care and what their potential risks are? Should they be able to opt out of these technologies without being denied health care services? These questions will become increasingly critical over the next decade as policymakers begin regulating this technology.

Have you ever wondered how doctors and nurses spend less and less time with you in the examination room these days, yet can still make a diagnosis faster than ever before? You can thank the proliferation of AI technology in the healthcare industry. To clarify, we are not referring to the Alexa or Google Home device your doctor may use to set reminders. We mean the wide array of AI tools and technologies that power every aspect of the healthcare system, whether for diagnosis and treatment, routine administrative tasks, or insurance claims processing.


Image Source: Adobe Stock Image

As explained by Davenport et al. in their Future Healthcare Journal publication, AI technologies have the potential to bring tremendous benefits to the healthcare community and patients over the next decade. They can not only automate several tasks that previously required the intervention of trained medical staff but also significantly reduce costs. The technology clearly has enormous potential and can be extremely beneficial to patients. But what are its risks? Will it be equally fair to all patients, or will it benefit some groups of people over others? In other words, is the AI technology unbiased and fair?

Even though full-scale adoption of AI in healthcare is expected to be 5-10 years away, there is already ample documented evidence of a lack of fairness in healthcare AI. According to this Positively Aware article, a 2019 study showed that an algorithm used by hospitals and insurers to manage 200 million people in the US was less likely to refer black patients for extra care than white patients. Similarly, a Harvard School of Public Health article on algorithmic bias describes how the Framingham Heart Study performed much better for Caucasian patients than for Black patients when calculating cardiovascular risk scores.

Unsurprisingly, there is very little regulation or oversight of these AI tools in healthcare. A recently published ACLU article on how algorithmic decision-making in healthcare can deepen racial bias notes that, unlike medical devices regulated by the FDA, “algorithmic decision-making tools used in clinical, administrative, and public health settings — such as those that predict risk of mortality, likelihood of readmission, and in-home care needs — are not required to be reviewed or regulated by the FDA or any regulatory body.”

Image Source: Adobe Stock Image

While we trust that AI researchers and the healthcare community will continue to reduce algorithmic bias and strive to make healthcare AI fair for all, the question we ask today is: should patients have the right to consent to, contest, or opt out of AI technologies in their medical care? Given how little regulation and oversight exists for these technologies, we strongly believe the right way forward is to create policies that treat AI’s use in healthcare the same way as a surgical procedure or drug treatment. Educating patients about the risks of these technologies and getting their informed consent is critical. Healthcare providers should clearly explain to patients what AI technologies are being used as part of their care and what their potential risks and drawbacks are, and should seek informed consent from patients to opt in. Specific data on algorithmic accuracy across demographic groups should also be made available to patients so that they can assess the risk applicable to them. If patients are not comfortable with these technologies, they should have the right to opt out and still be entitled to an alternative course of treatment when possible.
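To make the idea of disaggregated accuracy reporting concrete, here is a minimal sketch, with entirely hypothetical data and column names, of the kind of per-group performance report a provider could disclose to patients alongside a consent form.

```python
# Sketch: a disaggregated model-performance report (hypothetical data/columns).
import pandas as pd

# Hypothetical held-out evaluation results: one row per patient.
results = pd.DataFrame({
    "group":  ["White", "White", "Black", "Black", "Asian", "Asian"] * 50,
    "y_true": [1, 0, 1, 0, 1, 0] * 50,   # actual need for extra care
    "y_pred": [1, 0, 0, 0, 1, 1] * 50,   # the algorithm's referral decision
})

# Per-group sample size, accuracy, and referral rate: the numbers a patient
# would need to assess the risk applicable to their own demographic group.
report = (
    results.assign(correct=(results.y_true == results.y_pred))
    .groupby("group")
    .agg(n=("correct", "size"),
         accuracy=("correct", "mean"),
         referral_rate=("y_pred", "mean"))
)
print(report)
```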


Credit Scoring Apartheid in Post-Apartheid South Africa

By Anonymous | October 28, 2022

More than 20 years after the end of apartheid in South Africa, access to credit is still skewed by race toward the white community. Apartheid was a system of racial segregation and discrimination under which, prior to 1994, the ruling white minority enjoyed greater access to resources at the expense of the Black majority. Nonwhite groups included Black, Indian, and coloured South Africans, with Black Africans forming the majority. Black South Africans were excluded from certain educational institutions, neighborhoods, and types of education, and as a result Africans earned less than a third of what white South Africans earned.

Financial institutions in South Africa now use Big Data to develop credit models, or algorithms, to make credit decisions. It is argued that the use of Big Data and the related credit models eliminates human bias from these decisions. This matters in South Africa because such decisions are more likely to be made by a white person, so any system that purports to eliminate human bias will be seen in a good light by Africans. Yet the South African credit market shows that Africans are either denied mortgages or receive unfavorable terms such as higher interest rates or lower mortgage amounts. The pattern is similar for businesses owned by Black South Africans, which have access to fewer and smaller loans.

Access to credit enables investment in human capital and businesses and has the potential to reduce inequality in society and drive economic growth. To support growth for all, South Africa’s Bill of Rights enshrines the constitutional goals of human dignity, equality for all races, and freedom. Several laws have been passed to give effect to these goals and to enforce equality and prevent discrimination, such as the Employment Equity Act and the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA). These laws do not, however, apply to the data used in credit scoring, leaving a gap in the regulation of Big Data credit decisioning.

These credit scoring models are built using either conventional statistical analysis or machine learning. The models are essentially based on historical credit-related data to determine the creditworthiness of individuals, covering aspects such as the amount of credit that can be extended to a person, the period the loan can be granted for, the interest rate payable, and the type and amount of collateral required. The models place significant value on traits in the available Big Data universe using weights: factors like education, level and source of income, employment type (formal or informal), and assets owned. But all of this rests on a data set made up predominantly of white people, given the history of the country. By implication, the modelled traits resemble the way of life of white people and how they structurally benefited from apartheid. Credit models used for all races will therefore largely be a byproduct of apartheid.
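To make the mechanics concrete, here is a minimal sketch, with hypothetical traits and simulated data, of a scorecard learned as a logistic regression. The point is that the weights are learned from whatever history is available; if that history over-represents one group, the weights encode that group’s way of life.

```python
# Sketch: a credit scorecard as logistic regression (all data hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical traits: years of education, formal employment (0/1),
# assets owned (tens of thousands of rand).
X = np.column_stack([
    rng.normal(12, 3, n),
    rng.integers(0, 2, n).astype(float),
    rng.normal(5, 2, n),
])

# Historical repayment outcomes. Because the pre-1994 credit market excluded
# most Black South Africans, a real training set would be dominated by white
# borrowers, so the learned weights would reflect their circumstances.
y = (X[:, 0] + 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 2, n)) > 24

model = LogisticRegression().fit(X, y)
print("learned trait weights:", model.coef_[0])
print("score for a new applicant:",
      model.predict_proba([[10.0, 0.0, 2.0]])[0, 1])
```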

The table below illustrates the change in labour force participation by race since 1993, a year before apartheid was formally abolished. There is a significant increase in labour force participation for Africans (Blacks) from 1993 to 2008.

The next table shows average income by race, and it is clear that Africans continued to receive less than a third of the income received by white people from 1993 to 2008, even though their incomes increased.

The racial implications of credit apartheid are observed in several areas of everyday life. White South Africans constitute less than 10% of the population but own more than 72% of the available land and 37% of residential property. 69% of white people have internet access, compared to just 48% of black people. 53% of entrepreneurs are white, while only 38% are African. Black South Africans own only 28% of available listed shares.

Part of the solution to this bias is to build credit scoring models on data from digital platforms and applications that reflect the behavior of all consumers. This can be done by combining data from multiple sources, such as airtime usage, mobile money usage, geolocation, bill payment history, and social media usage. This could help eliminate Big Data bias by race, as it would produce new models that are more inclusive. The models could also apply different weights to different population groups and income-level categories, improving the fairness of credit scoring. It is also an opportunity for regulators to look closely at these models to reduce the racial bias that perpetuates apartheid and, by implication, racial inequality.

Such changes would ensure that credit scoring models are not based solely on data from credit bureaus, which mainly reflects white South Africans, who have always had access to credit.

Positionality

I’m a black South African raised in a low-income home. At the beginning of my career, I was awarded only a 50% mortgage because I had no credit history, and probably because of my racial classification.


References

  • https://www.oecd.org/employment/emp/45282868.pdf
  • https://www.marklives.com/2017/09/media-future-internet-access-in-south-africa-has-many-divides/#:~:text=%E2%80%9COver%20two%2Dthirds%20of%20white,%2C8%25%20of%20coloured%20users.
  • https://www.gov.za/sites/default/files/gcis_document/201903/national-action-plan.pdf
  • https://mg.co.za/article/2011-12-09-who-owns-what-by-race/
  • https://www.news24.com/citypress/personal-finance/does-your-race-affect-your-interest-rate-20190315

In a Digital Playground, Who Has Access To Our Children?
By Anonymous | October 27, 2022

On one particularly lazy Saturday afternoon, I heard my then-8-year-old daughter call, “Hey dad! Someone’s talking to me on my iPad. Can I talk back?” Taking a look, I was curious to see that another unknown user’s avatar was initiating a freeform text chat in a popular educational app. I hadn’t realized it even had a chat feature. My dad spidey-sense kicked in, and I asked my daughter what she thought. She replied, “I don’t know this person, they’re a stranger, I’ll block them.”


Source: https://www.adventureacademy.com

Most parents these days have a good general idea of how to moderate “pull content” (where users choose the content they retrieve) for their kids as they use net-connected tablets at home. But for me this situation represented a different kind of concern: how apps may allow the public at large to communicate live and directly, at any time, with our kids at home without our knowledge, whether through text-based chat or, more concerning, video or audio.

Solove’s Privacy Taxonomy

In 2006, privacy law professor Daniel Solove published a taxonomy to help illustrate harms from privacy violations, modeled across a complete lifecycle flowing from information collection, to processing and dissemination, to invasion. My concern lies with “intrusion” within the invasion stage, which he describes as “invasive acts that disturb one’s tranquility or solitude.”


Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=667622

This intrusion makes children, already vulnerable due to lack of maturity and autonomy, even more vulnerable to a myriad of both online and physical harms, such as scams, phishing attacks, child predation, kidnapping, cyberbullying, and identity theft.

Fourth Amendment Rights

The Fourth Amendment constitutionally protects “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” The framers of the Constitution, of course, didn’t have the Internet in mind in the 18th century, but a landmark 1967 case, Katz v. United States, extended the interpretation of the Fourth Amendment to locations where a person has a “reasonable expectation of privacy.” I would argue that the right to defend one’s household, and the family within, from such intrusion should apply to the digital realm as well.

Cleveland State University’s Online Test Practices

A recent example of the Fourth Amendment being applied in the digital realm involves a ruling by a federal judge against Cleveland State University. While administering a remote online test to student Aaron Ogletree, the university required Aaron to provide test proctors with live webcam footage scanning a wide view of his bedroom. The judge ruled that the university violated Aaron’s constitutional right against unreasonable search and seizure due to Aaron’s lack of choice in the matter. Similarly, younger kids subjected to invasive app access by default may not know how to prevent that access.

Children’s Online Privacy Protection Act (COPPA)

The COPPA Rule became effective in 2000, further protecting the online privacy of children under the age of 13 by requiring verifiable parental consent (VPC) before any collection of personally identifiable information (PII) can take place. Enforced primarily by the FTC, this legislation is important in providing parents with a basic standard of protection and a workflow to proactively opt in and control the online experience. Despite the best efforts of app developers to follow COPPA and build safe communities, anyone could still be on the other end of unsolicited communication if bad actors falsify sign-up information or if accounts get hacked.

Internet Crimes Against Children Task Force

In 1998, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) created the Internet Crimes Against Children Task Force Program (ICAC), in part due to “heightened online activity by predators seeking unsupervised contact with potential underage victims”. Their website shares, “Since 1998, ICAC Task Forces have reviewed more than 1,452,040 reports of online child exploitation, resulting in the arrest of more than 123,790 suspects.”

Conclusion

Fast forward to today: my younger child is now 6 and has asked for the same educational app. I’ve done enough due diligence to be comfortable installing it for him, but only because the app gives us the option to fully disable the “push chat” feature until he’s old enough to safely use it. If he’s like his older sister, he’ll outgrow the educational content well before then.


Source: https://pbs.twimg.com/media/FSV4TiPWYAEdWXx?format=jpg

References

[1] https://www.techopedia.com/definition/21548/push-media
[2] https://constitution.congress.gov/constitution/amendment-4/
[3] https://www.pewresearch.org/internet/2020/07/28/parenting-approaches-and-concerns-related-to-digital-devices/
[4] https://ssrn.com/abstract=667622
[5] https://www.americanbar.org/groups/judicial/publications/judges_journal/2016/spring/telephone_technology_versus_the_fourth_amendment/
[6] https://www.law.cornell.edu/constitution-conan/amendment-4/fourth-amendment-historical-background
[7] https://www.theverge.com/2022/8/23/23318067/cleveland-state-university-online-proctoring-decision-room-scan
[8] https://fpf.org/verifiable-parental-consent-the-state-of-play/
[9] https://ojjdp.ojp.gov/programs/internet-crimes-against-children-task-force-program

Is Immigration and Customs Enforcement above the law?

By Anonymous | October 28, 2022

ICE’s surveillance of the American population has flown under the radar and been kept secret.

Immigration and Customs Enforcement, or ICE, is a government agency that might know more about you than you think. When the topic of government surveillance comes up, ICE might not be the first agency that comes to mind for the general population, but it is the first that comes to mind in the undocumented immigrant community. ICE may have access to more information on the population than any other government agency.

ICE was founded in 2003, and since then it has gathered more and more information on the population living in the United States in order to drive deportations. This gathering of information has amounted to mass surveillance. ICE has accessed data sets containing information on the vast majority of the US population. Some of these records come from DMVs in states that allow undocumented immigrants to hold a driver’s license. ICE has also accessed records from agencies that provide essential services like electricity, gas, and water. It has been able to access and purchase these data sets seemingly without any government or public oversight. Another key source is the information gathered from unaccompanied minors trying to reach the US; the information provided by these children is then used to drive deportations of their families.

Immigration and Customs Enforcement officer

This is not only a legal matter but also an ethical one. There are privacy laws in place that should protect not only citizens but everyone living in the US, yet ICE seems to operate above those laws while gathering information on the entire population. There does not appear to be any enforcement of the laws that would hold ICE, or the agencies providing it data, accountable for the data they have accessed. How can these laws be enforced when ICE is able to conduct all of its surveillance in secret? The data stored by states, and especially by government agencies, should be strictly regulated. These are agencies the population trusts to keep their data safe, and that data is being shared without consent or notification.

ICE has access to driver’s license data on roughly 75% of the US population. This means its surveillance is not limited to undocumented immigrants but extends to American citizens. ICE is also able to locate 75% of adults in the US based on utility records alone. It has been using facial recognition technology since as early as 2008, when it accessed the Rhode Island DMV database to run facial recognition searches.

ICE takes advantage of the basic necessities that a person living in the US needs in order to access data on the population, with deportation as the leading purpose. The state governments that allow undocumented immigrants to obtain a driver’s license should also have a responsibility to protect the information of the individuals who have entrusted it to them. Access to basic necessities like utilities should not come at the cost of having your data shared with government agencies without your knowledge or consent.

ICE, and the data it has access to, should be more strictly regulated to comply with state and federal privacy laws. While this is already a concern in the undocumented community, it should be a greater concern for the entire population, because ICE has been able to access information on everyone, not just undocumented immigrants.

Citation:

Nina Wang, Allison McDonald, Daniel Bateyko & Emily Tucker, American Dragnet: Data-Driven Deportation in the 21st Century, Center on Privacy & Technology at Georgetown Law (2022).

My Browser is a Psychic

By Savita Chari | October 28, 2022

The other day I accompanied a friend to the local drug store. While she was shopping, I was browsing. I was tempted to buy my favorite chocolate, but the calorie count made me put it back. The next day, as I opened the Facebook app, an ad for that very brand of chocolate popped up with a coupon. Was it just a coincidence, or can Facebook read my mind?

Then one day I started seeing email deals and ads promoting cruise ship adventures. I was dumbfounded, because the previous night at the dinner table my family had been discussing our next vacation. Is my home bugged? How did my browser know that I wanted a cruise vacation? We had not yet started searching on the internet. This left me very uncomfortable.

Like all smart people, I decided to check whether other people had had similar experiences. I whipped out my phone and asked my trusted ‘Hey Google’ to search. Then it struck me: if Google can listen to my question, can it also listen to my other conversations? Could all my conversations be tracked by my Android devices without my knowledge? It turns out I am not the only one experiencing this. Chat forums such as Reddit and Quora have many people describing their own experiences of seeing ads based on private in-person or online conversations. Google vehemently denies that it eavesdrops and states that, under its content policy for app developers, apps cannot listen unless specifically allowed to do so and in active use.

As reported by the BBC, two cybersecurity experts, Ken Munro and David Lodge of Pen Test Partners, performed an experiment [1] to see whether it was physically possible for an Android phone to listen to conversations when not in use. Using the APIs provided by the Android platform, they wrote a small prototype app with the mic enabled. Lo and behold, it started recording entire conversations within the vicinity of the phone, even though the experimental app was not in use. There was no spike in data usage because wi-fi was enabled, and the battery did not drain enough to raise any alarms.

A Bloomberg report [5] cited people with inside knowledge that Facebook and Google hired many contractors to transcribe audio clips of their users. A USA Today article [6] reported an investigation by the Dutch broadcaster VRT showing how Google smart home devices eavesdrop on and record private conversations happening in our homes. Google contractors listen to these audio recordings and use them to enhance the company’s offerings (aka serve targeted ads).
Yes, our homes are bugged!

Serving ads online can only go so far to entice us. Companies know that to increase revenue, ads should be served while we are shopping. This is made possible by the advent of the Bluetooth beacon. These small Bluetooth devices are strategically placed around a store or an enclosed mall and communicate with our smartphones. Because Bluetooth reveals our precise location, they can serve us ads or promotional deals in real time. No wonder stores these days encourage us to download their apps. The beacons communicate with the app and report information such as what we are browsing and where we are lingering. Armed with this knowledge, the apps send us promotional notifications for those very items, encouraging us to buy while we are in the store and more likely to spend extra. A sketch of the underlying mechanics appears below.
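To illustrate how precise this can be, here is a small sketch in pure Python, using a made-up beacon payload, that parses the widely documented iBeacon advertisement format and estimates distance from signal strength with the standard log-distance path-loss model.

```python
# Sketch: parsing an iBeacon advertisement and estimating distance.
# The payload below is invented; the frame layout is Apple's public format.
import struct

def parse_ibeacon(mfg_data: bytes):
    """Parse iBeacon manufacturer data: UUID, major, minor, TX power at 1 m."""
    if mfg_data[:4] != b"\x4c\x00\x02\x15":   # Apple company ID + iBeacon header
        raise ValueError("not an iBeacon frame")
    uuid = mfg_data[4:20].hex()
    major, minor = struct.unpack(">HH", mfg_data[20:24])  # e.g. store, aisle
    tx_power = struct.unpack("b", mfg_data[24:25])[0]     # calibrated RSSI at 1 m
    return uuid, major, minor, tx_power

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    """Log-distance path-loss model; n is ~2 in free space, higher indoors."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# Hypothetical frame: store 42, aisle 7, calibrated to -59 dBm at 1 m.
frame = bytes.fromhex("4c000215") + bytes(16) + struct.pack(">HHb", 42, 7, -59)
uuid, major, minor, tx = parse_ibeacon(frame)
print(f"store {major}, aisle {minor}: "
      f"~{estimate_distance(rssi=-75, tx_power=tx):.1f} m from the shopper")
```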
Even if we don’t have the store app installed, there is a stealth industry of third-party location-marketing companies that collect data based on where we pause in an aisle to examine a product, which signals our possible interest. These companies bundle their beacon-tracking code into toolkits used by other app developers, who are paid in cash or in kind (free reports) to include them; the goal is to reach as many mobile phones as possible.
These third-party companies are mostly very small; they go bankrupt at the drop of a lawsuit, and another crops up in their place. The collected data is sold to big consumers like Facebook and Google, which already have a plethora of data on us from various sources. In their privacy statements, they mention these “third-party” partners who help them serve targeted ads and promotions.

Picture credit: NYTimes [11]
Now I know how I got the ad for that chocolate.
So much for clairvoyance!

Games are Dangerous! – Robinhood and Youth Gambling

By John Mah | October 28, 2022

Disclaimer: This post mentions an instance of suicide.

Learning to invest and understanding the stock market are fundamental to wealth building. Unfortunately, only 58% of Americans reported owning stock in 2022 [1]. This is not surprising, considering how terrifying the stock market can seem from the outside: indecipherable indexes, mind-boggling charts, and confusing financial jargon. With gamification, however, all these barriers to entry are thrown out the window thanks to the addictive psychological mechanism of variable rewards [2]. Teenagers and young adults are especially susceptible; they are entering the financial world without proper risk control and developing an addiction, akin to gambling, to excessive trading for quick profits. Robinhood, a commission-free trading platform, is leading the charge in this abuse and making a conscious decision to disregard the well-being of the youth for its own gain.


What is Gamification?

Gamification refers to the development of application features that make the user experience more visually appealing and intuitive. For Robinhood, the purpose is to make stock trading more fun for the average consumer: it is like playing a video game, where it’s appealing to see fun visual graphics when performing an action [3]. These techniques are dangerous because they go beyond reinforcing a “fun” action to take; they create addictive habits in users.

So why are technology companies like Robinhood using these techniques? It’s simple: to retain users on their platform, which leads to increased revenue and opportunities to procure data. These techniques ultimately tie into the “hooked model” [4], a pattern that lets companies hook users into their applications as a habit in daily life rather than as a tool. The hooked model isn’t just apparent in Robinhood; it is deeply embedded in many of today’s most popular applications, like Instagram and Uber. The toy simulation below illustrates the reward mechanic at its core.
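Here is a small sketch, with invented numbers, of why variable rewards hook harder than fixed ones: the same average payout, delivered unpredictably, is the schedule slot machines use, and it is what intermittent wins on a trading app reproduce.

```python
# Sketch: fixed vs. variable reward schedules with the same expected value.
import random

random.seed(42)

def fixed_rewards(action_count: int) -> list[float]:
    """Every action pays a predictable 1 point."""
    return [1.0] * action_count

def variable_rewards(action_count: int, hit_rate: float = 0.1) -> list[float]:
    """Most actions pay nothing; rare actions pay 10x. Same average payout."""
    return [10.0 if random.random() < hit_rate else 0.0
            for _ in range(action_count)]

trades = 1_000
print("fixed    total:", sum(fixed_rewards(trades)))
print("variable total:", round(sum(variable_rewards(trades)), 1))
# The totals are about equal, but the unpredictable spikes of the variable
# schedule are what drive compulsive checking and re-engagement -- the
# "variable rewards" mechanism described in [2].
```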

 [6]

So Why Target the Youth?

Gamification techniques are particularly effective on the youth because they are more familiar with game-like interfaces. It’s no surprise, then, that roughly 40% of Robinhood’s user base is between the ages of 19 and 34; in fact, the median age of its users is 31 [3]. It’s important to understand that Robinhood is a broker that makes money from trades executed on its platform. In essence, the more trades on the platform, the more money it makes. Robinhood’s use of gamification is now very clear. Teenagers and young adults have little to no experience with investing. When you combine this lack of knowledge with the extremely high risk tolerance that comes from youthful brashness and years of playing video games, you get the perfect targets for brokers. Robinhood is not in the business of education; by exploiting the youth for profit, it violates the principle of justice articulated in the Belmont Report.

 [7]

Inadequate Disclaimers and Disclosures

The financial sector is, without a doubt, extremely regulated, and much of that regulation concerns the information and data that providers must share transparently with customers. Unfortunately, in an application like Robinhood with heavily gamified features, these disclaimers and disclosures are extremely easy to overlook. The game-like designs leave little room for disclaimers or disclosures about the risks users take when performing trades. Users essentially lose their ability to make rational decisions, and one can argue their ability to “consent” sensibly is taken away. There are clear dangers to this. One example is the case of Alexander Kearns, a 20-year-old from Naperville, Illinois, who committed suicide after mistakenly believing he had lost $730,000 [5]. His lack of knowledge of how options worked made him believe he had lost the money and, in the end, he took his own life. It is not far-fetched to fear that such cases will become less rare in this age of technology.
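To see how such a misreading can happen, here is a hypothetical worked example. The numbers are invented, but the mechanic, a short options leg being assigned before the offsetting long leg settles, mirrors the reporting on this kind of interim display [5].

```python
# Sketch: how a put spread can briefly display a huge "loss" (invented numbers).
contracts = 20          # each options contract covers 100 shares
short_strike = 365.0    # puts sold (obligation to buy shares at this price)
long_strike = 355.0     # puts bought (right to sell shares at this price)

# The short puts are assigned: the account must buy the shares, and the app
# displays that cash debit before the offsetting long leg settles.
cash_displayed = -contracts * 100 * short_strike
print(f"displayed balance after assignment: ${cash_displayed:,.0f}")

# The long puts offset almost all of it; ignoring premium received, the worst
# case on the spread is the strike gap, not the displayed figure.
max_actual_loss = -contracts * 100 * (short_strike - long_strike)
print(f"actual maximum loss on the spread:  ${max_actual_loss:,.0f}")
```

Without a prominent explanation of this interim accounting, a novice sees a six-figure negative balance and no indication that it is temporary.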

Conclusion

Addiction is a real problem. As companies continue to develop new applications, it’s only logical that they will want to rope in as many users as they can, and gamified features fit that goal perfectly. The youth are the biggest victims, as they grew up with technology. So the next time you open your favorite application, ask whether you’ve been hooked into a gamified model.


Sources

[1] Dhawan, S. (2022, August 11). Financialexpress. The Financial Express Stories. Retrieved October 28, 2022, from https://www.financialexpress.com/investing-abroad/featured-stories/58-of-americans-reported-owning-stock-in-april-2022-with-an-ownership-rate-of-89-for-adults-earning-100000-or-more/2626249/

[2] Eyal, N. (2022, August 25). Variable rewards: Want to hook users? Drive them crazy. Nir and Far. Retrieved October 28, 2022, from https://www.nirandfar.com/want-to-hook-your-users-drive-them-crazy/

[3] Gallo, N. (2022, June 29). Robinhood and the gamification of investing. FinMasters. Retrieved October 28, 2022, from https://finmasters.com/gamification-of-investing/

[4] Eyal, N. (2022, September 11). The hooked model: How to manufacture desire in 4 steps. Nir and Far. Retrieved October 28, 2022, from https://www.nirandfar.com/how-to-manufacture-desire/

[5] CNBC. (2021, February 9). Robinhood sued by family of 20-year-old trader who killed himself after believing he racked up huge losses. CNBC. Retrieved October 28, 2022, from https://www.cnbc.com/2021/02/08/robinhood-sued-by-family-of-alex-kearns-20-year-old-trader-who-killed-himself-.html

[6] https://edtechnology.co.uk/dashboard2/wp-content/uploads/2021/09/3726585-scaled.jpg

[7] https://forexillustrated.com/wp-content/uploads/2022/03/Biggest-trading-losses-from-Wallstreetbets-800×533.png

“That’s A Blatant Copy!”: Amazon’s Anticompetitive Behavior and Its Impacts on Privacy
By Mike Varner | October 27, 2022

Amazon’s dual role as both a marketplace and a competitor has been under the microscope of European and American regulators for the past few years, and just recently the company attempted, unsuccessfully so far, to skirt fines by promising to make changes. In 2019, European regulators opened an investigation over concerns that the data Amazon collects on its merchants was being used to diminish competition by advantaging Amazon’s own private-label products [1].

What’s Private-Label?

Private-label refers to a common practice among retailers of distributing their own products to compete with other sellers. A 2020 Wall Street Journal investigation, based on interviews with over 20 former private-label employees, found that Amazon uses merchants’ data when developing and selling its own competing products. This evidence runs contrary not only to the company’s stated policies but also to what its spokespeople attested in Congressional hearings. Merchant data helped employees know which product features were important to copy, how to price each product, and what profit margins to anticipate. Former employees described how, for a car trunk organizer, Amazon used the merchant’s sales and marketing data to ensure that its private-label version could deliver higher margins [2].

[Image 1] Mallory Brangan https://www.cnbc.com/2022/10/12/amazons-growing-private-label-business-is-challenge-for-small-brands.html

Privacy Harms

Amazon claimed that employees were prohibited from using these data when offering private-label products and launched an internal investigation into the matter. The company claimed there were restrictions in place to keep private-label executives from accessing merchant data, but interviews revealed that using these data was common practice, openly discussed in meetings. Even when the restrictions were enforced, managers would often “go over the fence” by asking analysts to create reports that divulged the information, or even to create fake “aggregated” data that secretly covered only a single merchant. These business practices are clearly unfair and deceptive to merchants, as they are demonstrably at odds with the company’s written policies and its verbal statements to Congress. The FTC should consider this in its ongoing investigations, as unfair and deceptive business practices are within its purview [3]. Other agencies such as the SEC are looking into the matter, and the US DOJ is investigating the company for obstruction of justice in relation to its 2019 Congressional hearings [4].

[Image 2] Nate Sutton, an Amazon associate general counsel, told Congress in July: ‘We don’t use individual seller data directly to compete’. [2]

Search Rank Manipulation

Amazon has made similarly concerning statements about its search rank algorithm over the years. Amid internal dissent in 2019, Amazon changed its product search algorithm to highlight products that were more profitable for Amazon. Internal counsel initially rejected a proposal to add profit directly to the search rank algorithm, given the ongoing European investigations and concerns that the change would not be in customers’ best interest (a guiding principle for Amazon). Despite the explicit exclusion of profit as a variable, former employees indicated that engineers simply added enough new variables to proxy for profit. To test this, engineers ran A/B tests to calibrate the proxy, and unsurprisingly they found something that worked. They back-solved for profit using a variety of variables, achieving the best of both worlds: “truthfully” representing that they don’t use profit while using a completely equivalent composite metric. As one of many checks before changing the algorithm, engineers were explicitly prevented from including variables that decreased profitability metrics. In sum, while it may be strictly true that the search rank algorithm does not include profit, Amazon has optimized for profitability through a series of incentive structures [5]. Regulators should treat this as a deceptive practice in their ongoing investigations, since merely excluding the profit variable obscures the lengths Amazon has gone to in order to achieve the desired outcome.
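A small sketch of the proxy trick with simulated data: even when profit itself is excluded from the ranking model, a weighted combination of profit-correlated variables can reproduce it almost exactly. The variables and numbers here are hypothetical.

```python
# Sketch: a "profit-free" ranking score built from profit-correlated proxies.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

profit = rng.normal(10, 3, n)   # the forbidden variable, never in the model

# Allowed variables that each correlate with profit (hypothetical examples:
# private-label flag, fulfillment margin, ad revenue, vendor terms).
proxies = np.column_stack([profit + rng.normal(0, 1, n) for _ in range(4)])

# Back-solve for weights so the composite score tracks profit -- analogous to
# tuning variables via A/B tests until the profitability metric moves.
weights, *_ = np.linalg.lstsq(proxies, profit, rcond=None)
score = proxies @ weights

print("correlation of 'profit-free' score with profit:",
      round(float(np.corrcoef(score, profit)[0, 1]), 3))   # ~0.99
```

The ranking can then truthfully be described as "not using profit" while ordering products almost exactly by it.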

[Image 3] Jessica Kuronen [5]

What’s Next?

In July, Amazon attempted to end the European antitrust investigations by offering to stop collecting nonpublic data on merchants [6]. Amazon hopes this concession will prevent regulators from issuing fines, but the proposal has been met with strong criticism from a variety of groups [7]. The outcome is pending, and similar litigation continues: just last week, UK regulators filed antitrust claims against Amazon over its “buy box” [8]. Meanwhile, Amazon has been considering getting out of the private-label business due to lower-than-expected sales, and leadership has been slowly downsizing it over the past few years [9].

References:

  1. https://ec.europa.eu/commission/presscorner/detail/pl/ip_19_4291
  2. https://www.wsj.com/articles/amazon-scooped-up-data-from-its-own-sellers-to-launch-competing-products-11587650015?mod=article_inline
  3. https://www.wsj.com/articles/amazon-competition-shopify-wayfair-allbirds-antitrust-11608235127?mod=article_inline
  4. https://www.retaildive.com/news/house-refers-amazon-to-doj-for-potential-criminal-conduct/620246/
  5. https://www.wsj.com/articles/amazon-changed-search-algorithm-in-ways-that-boost-its-own-products-11568645345
  6. https://www.nytimes.com/2022/07/14/business/amazon-europe-antitrust.html
  7. https://techcrunch.com/2022/09/12/amazon-eu-antitrust-probe-weak-offer/
  8. https://techcrunch.com/2022/10/20/amazon-uk-buy-box-claim-lawsuit/
  9. https://www.wsj.com/articles/amazon-has-been-slashing-private-label-selection-amid-weak-sales-11657849612

Closing the digital divide – how filtered data shapes society.
By Anonymous | October 27, 2022

Photo by NASA on Unsplash

You send a text, like a post, make an online purchase, or check your blood pressure on your smart device. The data you produce from these activities shapes ideas, opinions, policies, and more. Given this tremendous influence on society, the data ingested, by whatever means, must be representative of all populations.

In reality, as of 2018, more than 20% of people in the United States are without Internet access (FCC, 2020), and as of 2021, only 63% globally are accessing the Internet (Statista, 2021).

Data is at the heart of decision-making. Data scientists rely on data to inform decisions in many industries that touch our daily lives, from healthcare and fraud detection to logistics and transportation. With such influence over so many aspects of life, it is critical that the technology we use be inclusive and that its data represent all populations. Unfortunately, to a certain degree, data scientists are operating with a filtered view of the world, built on data that is not representative of everyone.

Data scientists require data that will bring new insights to the world. Data points are carefully selected to feed algorithms that produce meaningful outcomes for society. The fundamental problem is that the technology generating the input data is not distributed equally among all populations, so the data it produces is exclusionary. Without fair representation, entire populations have no chance of influencing societal decisions. Reasons include a lack of affordable digital technology, physical disabilities that make technology inaccessible, and environments without Internet access. A small sketch below shows the statistical consequence.
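This sketch uses invented numbers to show how an under-sampled group skews a naive estimate, and how reweighting by known population shares (post-stratification) partially corrects it, though only if the excluded group appears in the data at all.

```python
# Sketch: non-representative samples skew estimates (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population: 80% urban (metric ~60), 20% rural/Tribal (~40).
pop_share = {"urban": 0.8, "rural": 0.2}
true_mean = 0.8 * 60 + 0.2 * 40                     # 56.0

# Online sample: the rural group is badly underrepresented (10 of 1,000),
# echoing the broadband coverage gaps cited above.
sample = {"urban": rng.normal(60, 5, 990), "rural": rng.normal(40, 5, 10)}

naive = np.concatenate(list(sample.values())).mean()
post_stratified = sum(pop_share[g] * sample[g].mean() for g in sample)

print(f"true mean: {true_mean:.1f}")
print(f"naive estimate from the sample: {naive:.1f}")       # pulled toward 60
print(f"reweighted (post-stratified):   {post_stratified:.1f}")
```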

While rural areas lack Internet access, Tribal lands, in particular, are severely impacted by the digital divide. In a study published by the FCC, as of 2018, “22.3% of Americans in rural areas and 27.7% of Americans in Tribal lands lack coverage from fixed terrestrial 25/3 Mbps broadband, as compared to only 1.5% of Americans in urban areas.”

Encouraging support has come recently from the USDA and the FCC. The USDA’s ReConnect Program has, since 2018, invested over $1 billion to expand high-speed broadband infrastructure in unserved rural areas and Tribal lands, and the FCC’s Affordable Connectivity Program provides discounted Internet service to lower-income families.

Internet users in 2015 as a percentage of a country’s population

Source: International Telecommunication Union. Wikipedia.

Before data-driven decisions can be made, the data must be inclusive of all populations. We need to increase deployment of infrastructure that enables Internet access in underserved communities and increase access to technology for those without it. If you’d like to help close the digital divide, consider volunteering with organizations working to expand digital access.

Photo by Alexander Sinn on Unsplash

References

  1. 2020 Broadband Deployment Report. Federal Communications Commission. FCC
  2. ReConnect Loan and Grant Program. U.S. Department of Agriculture. ReConnect
  3. List of countries by number of Internet users. Wikipedia
  4. Affordable Connectivity Program. Federal Communications Commission. FCC
  5. Percentage of global population accessing the internet from 2005 – 2021, by market maturity. Statista, 2022. Statista

Cops on your Front Porch
By Irene Shaffer | October 24, 2022

Some technology companies, including Amazon and Google, will share footage from smart doorbells with law enforcement without a warrant and without consent of the owner.

In case of emergency

In a letter to Senator Edward Markey earlier this year, Amazon confirmed that it has provided Ring doorbell footage to law enforcement on several occasions without consent from the doorbell’s owner and without a warrant. Amazon claims that such disclosures are made only when there is “a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay” (Amazon, 2022). Nevertheless, this admission created a whirlwind of news stories about the perceived invasion of privacy. Although this emergency use of footage was in accordance with the Ring device’s privacy policy, it clearly violated users’ reasonable expectation of privacy for a device installed at their own home.

In this article, we explore two questions. First, should it ever be justifiable to share footage from a private recording device without consent and without a warrant? Second, what should a privacy-conscious user look for in the terms of service of a smart doorbell device?

Should video ever be shared without consent?

Amazon claims that it shares videos this way only in emergencies; however, there is no legal requirement for a company to disclose user data in the absence of a warrant or other court document compelling it. In fact, several manufacturers of smart doorbells, including Arlo, Wyze, and Anker, have confirmed that they will not share data without either consent or a warrant, even in an emergency (Crist, 2022). Google, however, has the same policy as Amazon, although Google says it has not yet shared any video footage in such an emergency situation (Crist, 2022).

Ideally, a company’s stance on this issue should be front and center in the device’s privacy policy so that no user is caught by surprise when their data is shared with law enforcement. Although Ring’s privacy policy does a good job of following the recommendations of the California Online Privacy Protection Act (CalOPPA) guidance document, including “use plain, straightforward language” and “avoid technical legal jargon,” Ring’s published policy for sharing video with law enforcement is somewhat contradictory (California Department of Justice, 2014). In the first section, it states:

Ring does not disclose user information in response to government demands (i.e., legally valid and binding requests for information from law enforcement agencies such as search warrants, subpoenas and court orders) unless we’re required to comply and it is properly served on us.

However, in a later section titled “Other Information”, the emergency policy is specified:

Ring reserves the right to respond immediately to urgent law enforcement requests for information in cases involving imminent danger of death or serious physical injury to any person.

On the surface, this seems to contradict the first statement and may confuse users who do not read all the way to the bottom of the policy.

What should privacy-conscious consumers look for?

For users who are truly privacy-conscious, the best solution is a closed-circuit camera that does not upload to the cloud. That, however, is much less convenient than an internet-connected device. To preserve privacy while still benefiting from the convenience of the cloud, the ideal solution is an end-to-end encrypted service that allows only the owner of the doorbell to decrypt and view recordings. That way, the service provider is unable to access the video, even in an emergency or under a court-ordered request from law enforcement. End-to-end encryption is available from many major smart doorbell providers, including Ring, which has been rolling the feature out to its devices over the past year (Ring, 2022). Although Ring warns about features that are lost when encryption is enabled, a user who wants control over their data should gladly accept this tradeoff.
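Here is a minimal sketch of the end-to-end property using PyNaCl’s SealedBox. This is an illustration of the general technique, not a description of Ring’s actual design: the camera encrypts to the owner’s public key, so the cloud stores only ciphertext it cannot open.

```python
# Sketch of end-to-end encryption with PyNaCl (illustrative, not Ring's design).
from nacl.public import PrivateKey, SealedBox

# Key pair generated on the owner's phone; the private key never leaves it.
owner_key = PrivateKey.generate()

# The doorbell holds only the owner's PUBLIC key and encrypts each clip to it.
doorbell_box = SealedBox(owner_key.public_key)
ciphertext = doorbell_box.encrypt(b"<video clip bytes>")

# The cloud service stores `ciphertext` but holds no key material, so it
# cannot decrypt the clip even to satisfy an emergency request.
owner_box = SealedBox(owner_key)
print(owner_box.decrypt(ciphertext))  # only the owner recovers the clip
```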

References

Amazon. (2022, July 1). Amazon Response to Senator Markey. Retrieved from United States Senator for Massachusetts Ed Markey: https://www.markey.senate.gov/imo/media/doc/amazon_response_to_senator_markey-july_13_2022.pdf

California Department of Justice. (2014). Making Your Privacy Practices Public.

Crist, R. (2022, July 26). Ring, Google and the Police: What to Know About Emergency Requests for Video Footage. Retrieved from CNET: https://www.cnet.com/home/security/ring-google-and-the-police-what-to-know-about-emergency-requests-for-video-footage/

Ring. (2022). Ring Law Enforcement Guidelines. Retrieved from Ring: https://support.ring.com/hc/en-us/articles/360001318523-Ring-Law-Enforcement-Guidelines

Ring. (2022). Understanding Video End-to-End Encryption (E2EE). Retrieved from Ring: https://support.ring.com/hc/en-us/articles/360054941511-Understanding-Video-End-to-End-Encryption-E2EE-

Senator Markey’s Probe into Amazon Ring Reveals New Privacy Problems. (2022, July 13). Retrieved from United States Senator for Massachusetts Ed Markey: https://www.markey.senate.gov/news/press-releases/senator-markeys-probe-into-amazon-ring-reveals-new-privacy-problems

Spying on friends’ financial transactions, it’s easy with Venmo
By Anonymous | October 27, 2022

Venmo is a financial and social app intended to let friends share funds while connecting socially with notes and emojis. This puts Venmo in a unique position: it must enable sharing of personal information when the user intends it to be shared and protect users’ privacy when they expect it not to be.

Venmo makes headlines with privacy violations

Venmo is no stranger to news headlines about unintended privacy violations. One famous case is the exposure of President Joe Biden’s account: a Buzzfeed journalist was able to find it within 10 minutes of searching. It revealed Biden’s private social network and family relationships and raised security issues at the national level (Buzzfeed, 2021).

By default, when a user signs up for Venmo, their settings in the app are set to ‘Public – Visible to everyone on the internet’. Countless people assume their information is shared only with the friends they exchange funds with and are unknowingly sharing it publicly. Additionally, per Venmo’s privacy policy, the app tracks a user’s geolocation, which may be used for several vaguely described purposes such as ‘advertising, .. or other location-specific content’ (Venmo privacy policy, 2022). This tracking can be turned off, but many unsuspecting users remain unaware of these options and assume their privacy is protected. Nissenbaum’s contextual integrity framework holds that flows of information should respect the informational norms of the context in which they occur (Nissenbaum, 2011). By this standard, Venmo has failed to preserve contextual integrity, and the consequences can harm unsuspecting users.

How to change Privacy settings in Venmo

Changing the default Venmo settings is not difficult, but many users are unaware that their privacy is unprotected. To lock down an account:

  1. Open Venmo’s Privacy settings and change the setting to ‘Private’. This applies to all future transactions but not to historical ones.
  2. In the ‘More’ section, change Past Transactions history to ‘Private’ as well.
  3. If you prefer that Venmo not track your geolocation at all times, go to ‘Location’, then ‘App location permissions’, then ‘Permissions’, then ‘Location permission’, and change the default to your desired setting.

Venmo and the FTC

Venmo has already been sued by, and settled with, the FTC for failing to disclose information to consumers about transferring funds and privacy settings, in violation of the Gramm-Leach-Bliley Act (FTC, 2018). Venmo misrepresented that users’ financial accounts were protected by “bank grade security systems.” It sent users confirmations that money had been credited to their accounts when in fact it had not, and it did not disclose that it could cancel or freeze fund transfers after a confirmation had been sent. As a result of being found in violation of the law, Venmo is subject to a third-party compliance assessment every other year for the next 10 years. A key point in the FTC’s principles is the recommendation that companies increase the transparency of their data practices, which is exactly what the FTC supported in its ruling in this case (FTC, 2012).

Looking Forward

Venmo has enhanced its privacy policy and practices as a result of the exposure of violations that caused harm. The objective of this blog post is to raise awareness and to encourage readers to check the privacy policies and settings of any application where personal information can be shared unexpectedly and to the user’s detriment. Many companies are paying more attention to privacy, but as the case of Venmo shows, often not until an embarrassing security exposure or court case requires them to.

References

  1. Venmo privacy policy (2022, September 14). https://venmo.com/legal/us-privacy-policy/
  2. Mac, Ryan, Notopoulos, Katie, Brooks, Ryan, McDonald, Logan. (2021, May 14). We Found Joe Biden’s Secret Venmo. Here’s Why That’s A Privacy Nightmare For Everyone. BuzzFeedNews. https://www.buzzfeednews.com/article/ryanmac/we-found-joe-bidens-secret-venmo
  3. Federal Trade Commission (FTC). (2018, February 27) PayPal Settles FTC Charges that Venmo Failed to Disclose Information to Consumers About the Ability to Transfer Funds and Privacy Settings; Violated Gramm-Leach-Bliley Act https://www.ftc.gov/news-events/news/press-releases/2018/02/paypal-settles-ftc-charges-venmo-failed-disclose-information-consumers-about-ability-transfer-funds
  4. Nissenbaum, Helen (2011). A Contextual Approach to Privacy Online. Daedalus 140:4.
  5. Federal Trade Commission (FTC). (2012, March). Protecting Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers. (Section VI, Subsections C, “Simplified Consumer Choice,” and D, “Transparency,” address notice and consent.)