Public Privacy: Is Digital Privacy Truly Attainable?
Shanie Hsieh | June 24, 2022

With the rise of the digital age, it becomes increasingly difficult to avoid leaving digital footprints. To keep up with new technologies, users share more and more of their data with companies and products. The question is: how can we properly protect data and privacy while continuing to innovate?

Privacy Policies and Frameworks

Guidelines already exist to protect privacy. The Federal Trade Commission Act (FTCA) [1] is one that is commonly referenced; its important principles include what counts as a deceptive practice, consumer choice, and transparency in privacy policies. The General Data Protection Regulation (GDPR) [2] and the California Consumer Privacy Act (CCPA) [3] serve a similar purpose in giving users power over their data. Multiple other frameworks go into more detail on protecting user information, but these are meant as general guidelines that sites can easily bypass.

Who Reads Privacy Policies

Every application, product, and website these days has some sort of Terms of Service and Privacy Policy written up to gain users' trust and grow the platform. These documents often run to thousands of words; Twitter's privacy policy, for example, is about 4,500 words. It is common knowledge, and even joked about, that most people click "I accept" without taking a second glance at what they are agreeing to. An article from The Washington Post [4] details this issue and offers a new approach: persuade companies to reformat privacy policies so that users can better understand what they are consenting to. If consumers aren't reading these policies that are put in place for their own protection, what purpose do these privacy policies truly hold?

Image 1: Pew Research shows how often users read privacy policies. (From Pew Research Center)

Trust In The Digital-verse

An article published in Forbes [5] writes, "In the digital world, trust issues are often data issues." The article goes on to urge companies to operate ethically so that they never breach users' trust, so that in the long term users can trust what they are agreeing to across the web. In an ideal world, this would be our course of action to respect all people and their privacy. Realistically, however, we have seen repeated breaches of privacy and manipulation through the use of personal data. We cannot rely solely on respect and trust to enact effective privacy policies and protection.

The Balance Between Privacy And Innovation

Data is the backbone of technological advancement, feeding the algorithms behind it. Our biggest companies today, Google, Apple, Amazon, and others, have created some of the most influential products our world has ever seen, but at the cost of analyzing their users' data, some of whom had no idea they had consented. An interview on CBN News [6] puts it this way: "We don't realize how that data can be used to manipulate us, to control us, to shape our perception of truth and reality, which I think is some of the biggest questions in terms of technology and what it's doing to us: is altering our perception of reality." The leading question now is, "Is privacy… real?"

Image 2: Cartoon depicting a person who had unknowingly consented to sharing personal information. (Created by Steve Sack for Cagle Cartoons)

What Can We Do?

As users, all we can do is read these policies carefully. If we assume we can trust the companies that publish these privacy policies, then it is our job to read what is written and not obliviously consent to something we may not truly agree with. As for companies, they should make their policies simpler to understand. Twitter, for example, has experimented with turning its privacy policy into a game to help users understand the document. Overall, as we work toward a better future, users and companies should share a mutual respect in order to strike this balance between privacy and technology.

References

[1] Federal Trade Commission Act Section 5: Unfair or Deceptive Acts or Practices 

[2] GDPR

[3] California Consumer Privacy Act (CCPA)

[4] I tried to read all my app privacy policies. It was 1 million words.

[5] Trust In The Digital Age Requires Data Ethics And A Data Literate Workforce 

[6] Is Privacy the Tradeoff for Convenience in the Age of Digital Worship? | CBN News 

Images

[1] Pew Research 

[2] Cartoon 

How many company data breaches will affect you in your lifetime?
Tessa de Vries | June 23, 2022

While the title may scare you, this question is important to consider: many of you are likely users or clients of a company that has faced a cyber attack, had user data breached, and is now facing a class action lawsuit. Some of you may even be users or clients of several such companies.

It’s difficult for the average user to understand exactly how a company may be misusing data, not only because of a lack of domain knowledge and legal nuance, but also because of companies’ overwhelming tendency to use vague terms in their policies. However, when a lawsuit breaks the news and affects millions of users, it becomes a major concern that immediately grabs your attention. And these days, with user data at the forefront of any and every company, it seems as though we blindly put our trust in systems that may have weak security.

You may wonder: which companies and lawsuits am I talking about?

To name a few of the biggest, most widespread:

Capital One: In August 2020, Capital One was fined $80 million by the U.S. Treasury Department for careless network security practices that, in July 2019, enabled a hacker to access the personal information of 106 million of the bank’s credit card holders. Among the largest of its kind on record at the time, the breach compromised about 140,000 Social Security numbers and 80,000 bank account numbers (Press).

T-Mobile: In August 2021, T-Mobile announced that its systems had been subject to a criminal cyberattack that compromised the data of millions of current, former, and prospective customers. Fortunately, the breach did not expose any customer financial information, but, as in so many breaches, some Social Security numbers, names, addresses, dates of birth, and driver’s license/ID information were compromised (Sievert).

TikTok: In February 2021, TikTok agreed to pay $92 million to settle over a dozen lawsuits alleging that the platform harvested personal user data, including information gathered using facial recognition technology, without consent. TikTok allegedly tracked and sold the data of 89 million users to advertisers and third parties in China in violation of state and federal law (Allyn).

Image: TikTok’s in-app notification about the class-action lawsuit.

Are there more?

The short answer is yes. Numerous companies, big and small, have faced legal consequences and lawsuits over data malpractice, and you can find and read about most of them online. However, a big concern remains: how many companies are engaging in data malpractice and getting away with it, flying under the radar? How many companies have weak security systems at risk of being hacked and facing a data breach? Of course, and frustratingly enough, this question is impossible to answer.

Well what can I do about it?

First, if you are or were a user at one of the companies that has faced a lawsuit or fine for data malpractice, you can file a claim and join the class action lawsuit.

Second, you can read up on company data policies in detail. Many of us hit the agree button without taking any time to read the actual data policy. Additionally, while reading the policy, you can evaluate it against state and federal privacy regulations.

Third, although this is very challenging, you can try to be selective about which companies you subscribe to. The good thing about this day and age is that there are countless companies providing the same service; as of 2022, for example, there are 747 wireless telecommunications carrier businesses in the U.S.

Closing remarks

While we on the user end cannot control how a company handles our data or how strong its security systems are, it’s important to be aware and to educate yourself on the potential risks. And when you are put in a situation where your data has been compromised, you can seek justice or some form of compensation to hold these companies accountable.

References

Jewett, Abraham. “Data Breaches: Current Open Lawsuits and Settlements.” Top Class Actions, 26 Apr. 2022, https://topclassactions.com/lawsuit-settlements/lawsuit-news/data-breaches-current-open-lawsuits-and-settlements/.

Allyn, Bobby. “Tiktok to Pay $92 Million to Settle Class-Action Suit over ‘Theft’ of Personal Data.” NPR, NPR, 25 Feb. 2021, https://www.npr.org/2021/02/25/971460327/tiktok-to-pay-92-million-to-settle-class-action-suit-over-theft-of-personal-data.

Press, The Associated. “Capital One Is Fined $80 Million for Huge Data Breach.” Fortune, Fortune, 7 Aug. 2020, https://fortune.com/2020/08/07/capital-one-fined-80-million-data-breach/.

Sievert, Mike. “The Cyberattack against t‑Mobile and Our Customers: What Happened, and What We Are Doing about It. ‑ t‑Mobile Newsroom.” TMobile, 27 Aug. 2021, https://www.t-mobile.com/news/network/cyberattack-against-tmobile-and-our-customers.

 

Fourth Amendment is For Sale
Jeremy Yeung | June 23, 2022

In the age of information, government agencies are subverting data protection laws by buying your sensitive and “protected” data.

How comfortable would you be showing your friends and family your entire browser history? If you wouldn’t want your friends or family to know something, would you be comfortable with the government knowing it for no particular reason? In fact, the government already has access to this data, and it doesn’t need any reason to obtain it. Moreover, Edward Snowden revealed that intelligence agencies collect text messages, contact lists, photos, and video chats from social networks directly, without your consent or a court order [1]. The phrase “Big Brother is watching you,” from the book Nineteen Eighty-Four, is as relevant as ever today. This leads to the question: is any of our data even protected from our own government?

When navigating the different legal texts, it is difficult to tell what data is accessible to government agencies without a warrant. Data protection starts with the Fourth Amendment of the U.S. Constitution, which protects people from unreasonable searches and seizures. Though the Electronic Communications Privacy Act (ECPA) of 1986 outlined broad protections for wire, oral, and electronic communications while they were being made or stored, it has been altered by the Communications Assistance for Law Enforcement Act, the USA PATRIOT Act and its reauthorization acts, and the FISA Amendments Act [2]. For example, the ECPA prohibited the interception of any wire, oral, or electronic communication, but the USA PATRIOT Act notably allows the FBI to wiretap citizens without a warrant or probable cause. Clarity on these vague data protection laws often comes from Supreme Court cases, especially Riley v. California (2014) and Carpenter v. United States (2018), which recognized a reasonable expectation of privacy in the contents and geolocation records of cell phones.

As it turns out, there is a much easier way for law enforcement to obtain sensitive information without probable cause: data brokers. Data brokers are intermediaries that buy data from parties such as weather forecasting apps, which collect precise geolocation data. Law enforcement and government intelligence agencies can in turn buy this sensitive data from the brokers, because the current laws apply only when the government forces disclosure. Government agencies including the IRS, DEA, FBI, and ICE have paid millions to private companies for mobile location information, often refusing to reveal how the information was used [3].

Potential for Harm
Besides being an egregious violation of the Fourth Amendment, these privacy intrusions have important implications for civil rights. Location data from social media apps can be used to track protesters. This technique has been used in China to track people who do not align with the government’s politics, including Uyghur Muslims and protesters in Hong Kong. In the latter case, those critical of the government have been tracked, and police have shown up at their doors at night [4]. It may be hard to imagine something like this happening in the United States, but the potential for geolocation and cell phone data to be weaponized still exists. Law enforcement in possession of widespread personally identifiable GPS data could easily use intimidation tactics against protesters taking part in anti-police-brutality movements such as Black Lives Matter. Even anonymized data can be aggregated with related datasets to quickly re-identify subjects, as a group of MIT scientists has shown.
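To make the aggregation risk concrete, here is a minimal sketch in Python of a linkage attack. Everything in it, the datasets, column names, and values, is invented for illustration; it is not the MIT study’s method, just the general join-on-quasi-identifiers idea:

```python
import pandas as pd

# Hypothetical "anonymized" location dataset: names removed,
# but quasi-identifiers (ZIP code, birth date) remain.
anonymized = pd.DataFrame({
    "user_id":    ["u1", "u2", "u3"],
    "home_zip":   ["94110", "73301", "10001"],
    "birth_date": ["1990-04-12", "1985-11-02", "1978-06-30"],
    "last_ping":  ["protest_site_A", "clinic_B", "office_C"],
})

# Hypothetical public dataset (e.g., a voter roll) sharing those quasi-identifiers.
public_records = pd.DataFrame({
    "name":       ["Alice Doe", "Bob Roe", "Carol Poe"],
    "home_zip":   ["94110", "73301", "10001"],
    "birth_date": ["1990-04-12", "1985-11-02", "1978-06-30"],
})

# A single join re-attaches names to every "anonymous" location record.
reidentified = anonymized.merge(public_records, on=["home_zip", "birth_date"])
print(reidentified[["user_id", "name", "last_ping"]])
```

Privacy research has repeatedly found that a few quasi-identifiers are enough to single out most individuals, which is why removing names alone is not anonymization.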

To be honest, as active participants in society, we don’t have many ways to avoid these privacy violations. A few practical measures include using a Faraday bag [5] and opting out of targeted advertising. Alternatively, since we know companies act in their own best interest, change can start with advocating for data protection or using only products that protect our privacy. But with the rapid advancement of technology, the goals of previous legislation have become easier to subvert. Nonetheless, the law is the next best way to restrain these agencies. In April 2021, a bipartisan bill called “The Fourth Amendment Is Not For Sale Act” was introduced to close this privacy loophole. If passed, it would protect geolocation, communication, and other sensitive data from being bought by intelligence agencies. Though the current version of the bill stops the sale of sensitive data, it does not prevent companies from handing over data, per se, in exchange for lighter regulation [6]. Furthermore, there has been no movement on the bill since its introduction. We must demand a stop to this erosion of our rights and privacy in the ever-evolving age of information.

References:
[1] https://theworld.org/stories/2013-07-09/17-disturbing-things-snowden-has-taught-us-so-far
[2] https://bja.ojp.gov/program/it/privacy-civil-liberties/authorities/statutes/1285
[3] https://www.vox.com/recode/22038383/dhs-cbp-investigation-cellphone-data-brokers-venntel
[4] https://apnews.com/article/asia-pacific-international-news-hong-kong-extradition-china-028636932a874675a3a5749b7a533969
[5] https://www.howtogeek.com/791386/what-is-a-faraday-bag-and-should-you-use-one/
[6] https://cdt.org/insights/new-cdt-report-documents-how-law-enforcement-intel-agencies-are-evading-the-law-and-buying-your-data-from-brokers/

Images:
[1] https://www.wallarm.com/what/securing-pii-in-web-applications
[2] https://khpg.org/en/1519768753

Digital Product Placement: The new target
Simran Sachdev | June 23, 2022

Big Brother is watching – you invited him in. The prevalence of big tech and the constant collection of personal data during every online interaction, on every virtual platform, has led to increasingly targeted ads. This targeting has now invaded even our favorite TV shows and movies. But when do targeted advertising and data collection cross the line into privacy invasion, and what measures are being put in place to protect you, the consumer?

Monitor with a target on it being hit by an arrow.[5]

If you have ever seen an online ad that relates to your specific traits, interests, or browsing history, then you are a victim of targeted advertising. These advertisements appear while you scroll through social media, browse a website, or stream your favorite show, and they are curated based on your personal data. Artificial intelligence and machine learning technologies are becoming more prevalent in the digital advertising space and help deliver personalized content to targeted users. Digital ad spending is expected to exceed $500 billion in 2022 [1]. Why is so much money being spent on this form of advertising? By understanding users’ likes and dislikes, their demographics, and their past searches, targeted ads make consumers 91% more likely to purchase an item [1].

Using consumer data to create targeted ads is a violation of privacy. There is a lack of transparency between the consumer and the advertiser, as the advertiser does not obtain informed consent to use the consumer’s personal information in such a targeted way. When their online usage, purchases, and browsing habits are tracked, consumers feel intruded upon. Depending on the platform, these targeted ads also put consumers in a filter bubble, showing them only specific content and isolating them from alternative views and material [2]. This invasion of privacy continues to spread across online platforms, manipulating consumers in new ways.

Digital Product Placement

The latest addition to the realm of targeted advertising is digital product placement. With the rise of ad-blockers and the increased popularity of streaming services, it is becoming increasingly difficult for advertisers to reach a large audience. Using technology developed by the UK company Mirriad, major entertainment companies are employing AI to insert digital product displays directly into movies and TV shows. The AI analyzes scenes that have already been filmed to find places where an ad can be subtly inserted [3]: as posters on walls, buildings, and buses, or even as 3D objects such as a soda can or a bag of chips. Mirriad and other technology companies are now working on technology that would tailor the placed products to the viewer, producing multiple versions of each ad scene so that different viewers see different advertised products.

 

An episode of “How I Met Your Mother” from 2007 was rerun in 2011 with a digitally inserted cover of “Zookeeper” to promote the movie’s release.[6]

This becomes another form of targeted advertising: the ad a viewer sees is based on a combination of their browsing and viewing history and their online demographic data, such as age, gender, ethnicity, and location [4]. Using this data, the AI system knows who is watching, removing the guesswork from advertising. For example, an active, athletic person watching a show might see a Gatorade in a scene, while a more sedentary viewer would be shown a can of Pepsi at the same moment. This becomes an additional revenue stream for streaming giants, but at what cost to the user? While the AI works to insert ads subtly, there is no guarantee it can always do so without ruining the viewing experience. Tailoring these ads to individual viewers also deceives them: they are not told how their data is being collected and used, nor how they will be marketed to. This changes the streaming experience for everyone, making viewers feel more vulnerable to a narrowing viewpoint. For fear of receiving different targeted ads than their friends, viewers may even change how they conduct themselves online to avoid being shown different content. A toy sketch of how such per-viewer selection could work appears below.
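As a purely illustrative sketch of the kind of per-viewer selection logic described above (no real Mirriad API is referenced; every name and rule here is invented):

```python
# Hypothetical per-viewer product-placement selection.
# Profile fields, product variants, and rules are all invented for illustration.

def select_placement(viewer_profile: dict) -> str:
    """Pick which digitally inserted product variant a viewer sees."""
    interests = set(viewer_profile.get("interests", []))
    active = "fitness" in interests or viewer_profile.get("activity_level") == "high"
    # The same scene was rendered in multiple versions; choose one per viewer.
    return "sports_drink_scene" if active else "soda_scene"

# Two viewers watching the same episode receive different inserted products.
print(select_placement({"interests": ["fitness", "hiking"]}))                 # sports_drink_scene
print(select_placement({"interests": ["cooking"], "activity_level": "low"}))  # soda_scene
```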

AI technologies will soon be integrated into every avenue of advertising, allowing personalized content to be created for everyone. Yet this also raises data privacy concerns. With the growing use of personalized content, it is necessary not only to update marketing rules to cover digital product placements, but also to update privacy frameworks to govern the increased use of user data for ad targeting.

 

References 

[1] Froehlich, Nik. “Council Post: The Truth in User Privacy and Targeted Ads.” Forbes, Forbes Magazine, 25 Feb. 2022, https://www.forbes.com/sites/forbestechcouncil/2022/02/24/the-truth-in-user-privacy-and-targeted-ads/?sh=42e548b4355e.

[2] Staff (2017). How Filter Bubbles Distort Reality: Everything You Need to Know. Farnam Street Blog. https://fs.blog/2017/07/filter-bubbles

[3] Lu, Donna. “AI Is Digitally Pasting Products into Your Favorite Films and TV.” Mirriad, 1 Mar. 2022, https://www.mirriad.com/insights/ai-is-digitally-pasting-products-into-your-favorite-films-and-tv/.

[4] “Advertisers Can Digitally Add Product Placements in TV and Movies – Tailored to Your Digital Footprint | CBC Radio.” CBCnews, CBC/Radio Canada, 31 Jan. 2020, https://www.cbc.ca/radio/thecurrent/the-current-for-jan-31-2020-1.5447280/advertisers-can-digitally-add-product-placements-in-tv-and-movies-tailored-to-your-digital-footprint-1.5447284.

[5] https://www.globalreach.com/global-reach-media/blog/2021/08/11/targeted-advertising-101

[6] https://www.reddit.com/r/mildlyinteresting/comments/1dyfeb/digitally_inserted_product_placement_in_a_rerun/

 

Hey Alexa, where’s my data going?
Anonymous | June 23, 2022

In exchange for comfort and convenience, households who opt for smart home devices like Amazon Alexa hand off a surprising amount of personal data and security – but how much, exactly?

First, what are smart home devices? They range from gaming systems to refrigerators; the common thread among them is that they must connect to the Internet to function fully. Many of these devices are touted as improving and streamlining your day-to-day life, with your smartphone often serving as the remote control.[1] Well-known examples include Amazon Alexa/Echo, Google Home and Nest, Ring doorbells, and Samsung Smart TVs.


Fig. 1: An infographic showing the most popular smart home speakers.

Smart home devices have gained popularity in recent years while also facing pushback from concerned groups. Interestingly, a survey by Consumers International and the Internet Society shows that 63% of people surveyed distrust smart/connected devices because of how they collect data on people and their behaviors, yet about 72% of those surveyed own at least one smart device.[2] In another survey, conducted by CUJO AI, a whopping 98% of 4,000 participants expressed privacy concerns about smart home devices, yet a good half of them still use the devices without taking the necessary precautions to secure themselves.[3] Below, I use Nissenbaum’s contextual integrity to investigate prominent privacy risks of one of the top smart home devices, Amazon Alexa.


Fig. 2: An infographic of how many smart home devices Alexa can control.

Personal information is collected by smart home devices for as long as they are in operation, which can mean 24/7 insight into someone’s life. With Alexa, a study by Lentzsch et al. found that skills, the commands and applications that can be installed to extend Alexa’s functionality, have several privacy issues that could let third parties obtain personal information.[4] For example, fraud was a risk on the skills store because an unrelated party could masquerade as a reputable organization; when a user downloaded and used its skill, their personal data would go to this third party instead of the expected organization. Another oversight: while Amazon required publishers of skills to post a public privacy policy detailing how they collect and use personal data, 23.3% of about a thousand skills examined had incredibly opaque policies or none at all, yet still got access to personal information through Alexa.[4] Using Nissenbaum’s framework of contextual integrity, we can categorize the data subject and sender as the user of the smart home device, the primary recipient as Amazon Alexa, the information type as personal information such as shopping and living habits and voice recordings, and the transmission principle as delivery through the smart home device and the Internet.[5] The context of this personal data’s transmission, intended for Alexa’s use, is compromised when the data leaks to third parties unknowingly.
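The five parameters of contextual integrity can be made concrete in code. Below is a minimal sketch, with class and field names of my own choosing (not from Nissenbaum [5]), encoding the Alexa flow above and the leaky-skill variant:

```python
from dataclasses import dataclass, replace

@dataclass
class InformationFlow:
    """An information flow described by Nissenbaum's five parameters."""
    data_subject: str
    sender: str
    recipient: str
    information_type: str
    transmission_principle: str

# The flow the user expects when talking to Alexa.
expected = InformationFlow(
    data_subject="smart home user",
    sender="smart home user",
    recipient="Amazon Alexa",
    information_type="voice, shopping and living habits",
    transmission_principle="via the device and the Internet, per the stated policy",
)

# The actual flow when a skill quietly forwards data to an unvetted third party.
actual = replace(expected, recipient="unknown third-party skill publisher")

# Contextual integrity is violated when any parameter departs from the norm.
print("Contextual integrity violated:", expected != actual)  # True
```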

This is not to say that smart home devices are all bad; they indubitably provide lifestyle benefits, especially to those who may be disabled or otherwise disadvantaged. The tangible benefits do not outweigh the current privacy costs, however, so more work, legal, technological, and/or ethical, is needed to protect people and their information. We can start by spreading awareness of exactly how much agency people have over their personal data and privacy, and by giving people the right to control them. Additionally, smart home device companies should take ownership of making privacy policies and disclosures more digestible and transparent for the average consumer, and should allow consumers to opt out of data harvesting.[6]

Hey Alexa, opt me out of data collection!

Positionality and Reflexivity Statement

I am an Asian American, middle class, cisgender woman. I have a smartphone, a smart watch, and a gaming system, but my household does not have any other smart home devices like TVs and kitchen appliances. Prior to this post, I was already wary of smart home devices and my stance remains the same. However, I have utilized several applications and features like fitness tracking on both my phone and watch that likely collected my data points and could be compromised. Moving forward, I will be more judicious of my smart device use and protect myself where possible, even in this increasingly data-driven world with little privacy where it seems like one’s every step is being watched. I encourage you to evaluate your relationship with your smart home devices and take your privacy into your own hands.

References
[1]Kaspersky. (n.d.). How safe are smart homes? Retrieved 2022, from https://usa.kaspersky.com/resource-center/threats/how-safe-is-your-smart-home
[2]Consumers’ International and Internet Society. (2019, May). The Trust Opportunity: Exploring Consumers’ Attitudes to the Internet of Things. Retrieved 2022, from https://www.internetsociety.org/wp-content/uploads/2019/05/CI_IS_Joint_Report-EN.pdf
[3]CUJO AI. (2021, October). Cybersecurity Perceptions Survey. Retrieved 2022, from https://cujo.com/wp-content/uploads/2021/10/Cybersecurity-Perceptions-Survey-2021.pdf
[4]Lentzsch, C. et al. (2021, February). Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem. Retrieved 2022, from https://anupamdas.org/paper/NDSS2021.pdf
[5]Nissenbaum, H. F. (2011). A Contextual Approach to Privacy Online. Daedalus 140 (4), Fall 2011: 32-48, Available at SSRN: https://ssrn.com/abstract=2567042
[6]Tariq, A. (2021, January 21). The Challenges and Security Risks of Smart Home Devices. Retrieved 2022, from https://www.entrepreneur.com/article/362497

Images
[Fig. 1] https://www.statista.com/chart/16068/most-popular-smart-speakers-in-the-us/
[Fig. 2] https://www.statista.com/chart/22338/smart-home-devices-compatible-with-alexa/

Websites use privacy popups to track you on every website you visit.
Francisco Valdez | June 23, 2022

Privacy notice popups are annoying, and people are ignoring them. But they do more than just ask for consent to store cookies.

When GDPR came into effect on May 25th, 2018, privacy notice popups suddenly appeared on websites, and the constant battle to keep our information private began to gain public awareness. Although GDPR didn’t specify any mechanism for providing consent, the European ad industry implemented privacy notice popups even before the law came into effect. These popups have several goals, the most important being to inform users about how they are being tracked and to provide a mechanism to opt out. Then the California Consumer Privacy Act (CCPA) became effective on January 1st, 2020, providing similar protections to California residents; the CCPA additionally requires an easy and accessible way to opt out. [1]

Most websites use third-party Consent Management Providers (CMPs), which supply plugins that implement the privacy notice popups. When a user responds, the CMP stores a cookie recording their choice.

Example of a Privacy Notice popup. Image by WordPress Cookie Notice plugin https://wordpress.org/plugins/cookie-notice/

Nowadays, people consent to privacy notices without reading them, just to reach the content they want. When you consent to being tracked, you agree to more than storing cookies for that particular website on your device. You also agree to allow access to existing data on your device, such as third-party cookies, advertising identifiers, device identifiers, and similar technologies. Also included in the consent is a provision to personalize advertising and content for you on other websites and apps. In addition, your actions on other sites and apps are used to make inferences about your interests, influencing the advertising and content you see in the future.

Many websites have illegally made it harder to opt out, forcing you to read lengthy privacy notices or hop through several pages before the opt-out button appears. Other websites don’t provide an opt-out at all. And when you are lucky enough to find the opt-out button, some websites simply ignore your decision [2].

uBlock Origin in action. Image by uBlock Origin https://ublockorigin.com/

There are several countermeasures against websites that make it hard to opt out. First, you can install an ad-blocker. Ad-blockers have gained popularity in the past few years, but this has become an arms race: publishers are always looking for ways to evade ad-blockers, and ad-blockers are always trying to detect ads and tracking signals. Some publishers have made arrangements with ad-blocker vendors, providing easy opt-out mechanisms and unobtrusive ads in exchange for being allowed through. Ad-blockers generally stop websites from storing cookies in your browser and block tracking signals [3]. Since websites constantly try to evade ad-blockers, using one may interfere with a website’s correct functioning.

Do Not Track initiative logo. Image by https://www.eff.org/issues/do-not-track

An additional layer of protection is to enable the Do-Not-Track (DNT) signal in your browser. The problem with current opt-out mechanisms is that they store your consent decision in a cookie on your device, and you must opt out on every website you use. Instead of opting out constantly, DNT is a setting stored in your browser that signals your privacy preferences to every website you visit [4]. DNT is consistent with both GDPR and CCPA; both frameworks contemplate a mechanism whereby users configure their devices to store their privacy preferences. For this mechanism to succeed, mass adoption by all stakeholders is required.
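Mechanically, the signal is just an HTTP request header the browser attaches to every request, so no per-site cookie is needed. Here is a minimal sketch of a server honoring it; Flask is used purely for illustration. The `DNT: 1` header is the classic Do-Not-Track signal, and `Sec-GPC: 1` is the newer Global Privacy Control signal discussed in [4]:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # The browser sends these headers with every request when the user opts in.
    dnt_on = request.headers.get("DNT") == "1"
    gpc_on = request.headers.get("Sec-GPC") == "1"

    if dnt_on or gpc_on:
        # Honor the preference: serve the page with tracking disabled.
        return "Welcome! Your browser's privacy signal was honored; no tracking."
    # Otherwise the site might fall back to its consent-popup flow.
    return "Welcome! (A consent popup would normally appear here.)"

if __name__ == "__main__":
    app.run()
```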

In summary, privacy notice popups are becoming ever more annoying, and the most viable path forward is mass adoption of Do-Not-Track. Consumers also share the responsibility of reporting websites that do not comply with GDPR, CCPA, or other regional privacy laws.

References

[1] Kamala D. Harris, Attorney General, California Department of Justice (2014, May). Making Privacy Practices Public. Retrieved from https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf

[2] C. Matte, N. Bielova, and C. Santos (2019, May). Do Cookie Banners Respect my Choice? ANR JCJC PrivaWeb ANR-18-CE39-0008. Retrieved from http://www-sop.inria.fr/members/Nataliia.Bielova/papers/Matt-etal-20-SP.pdf and https://www-sop.inria.fr/members/Nataliia.Bielova/cookiebanners/

[3] uBlock Origin. About uBlock Origin. Retrieved (2022, June) from https://ublockorigin.com/

[4] Dan Goodin (2020, October 8). Now you can enforce your privacy rights with a single browser tick. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2020/10/coming-to-a-browser-near-you-a-new-way-to-keep-sites-from-selling-your-data/

Surveillance & Control – The Future of National Governance
Austin Sanders | June 23, 2022

Global powers are pursuing contrasting data privacy laws and regulations. Will government surveillance and control be the new norm worldwide?

The Chinese Communist Party (CCP) demonstrates an apparent lack of trust in, or care for, its citizens. With artificial intelligence systems and data collection methods incorporated into every part of society, the CCP monitors every text message, web search, and purchase made legally within its borders.[1] While the internet should facilitate the spread of ideas and knowledge throughout the world, the CCP installed the “Great Firewall” to suppress and control its population.[2] Government officials argue that it is in the nation’s best interest to remain united in its journey back to the top of the global order. In the process, the CCP has created a social credit system to track people’s behavior and encourage habits and actions that align with the CCP’s established norms.[3] The developing and underdeveloped world stands vulnerable as the CCP’s philosophy of governance and control spreads across the globe, undermining democracy.

China’s “Great Firewall” is a threat to democratic principles.[2]

Business Insider describes the Chinese social credit system as a way to rank the population. Chinese AI and technology systems track citizens’ behavior and score their actions against subjective rules: bad driving or posting the wrong news article online will hurt an individual’s score, and low social credit scores bring punishments ranging from throttled internet speeds to being barred from public transportation.[3] A social checks-and-balances system of this kind is a dangerous, slippery slope. Who makes the rules for what is socially acceptable? Humanity should fight against this kind of data hoarding and manipulation by national governments.

Chinese companies are at the forefront of the technological revolution. As China supplies surveillance technology, cell towers, and cloud-based infrastructure worldwide, there is a growing concern that countries will follow a censorship model similar to China’s.[4] China’s substantial growth over the last thirty years can be attributed to a period of relative peace at home and abroad, and developing countries with authoritarian leaders may view the “Great Firewall” and social credit system as a means to maintain control while facilitating growth and development.[4] While this reads well on paper, it comes at a high cost to fundamental human rights. Uyghur Muslims in Xinjiang, pro-democracy leaders in Hong Kong, and Buddhists in Tibet have borne the brunt of atrocious human rights abuses in China’s surveillance state.[5] Government control over digital communications allows the CCP to track and arrest people whose lifestyles do not align with its vision for the Chinese people. With such a diverse population, this method of governance is not only inappropriate for China; it would be a problematic system to implement in most countries worldwide.

Given China’s lack of domestic data privacy laws and its tight control over the Chinese tech giants, there is reason to believe that these companies open backdoors for access to data from foreign countries. This has led Western governments to steer away from Chinese technology companies.[6] However, countries trying to develop their digital infrastructure often have no other option: relying on Chinese technology is essential to domestic growth but leaves their governments and citizens vulnerable to Chinese surveillance.[4]

Ecuador’s surveillance system installed by Chinese tech companies.[4]

As global powers continue to diverge in their data privacy laws, it is essential to get developing and underdeveloped countries on board with governance norms similar to those of the United States and the European Union. Authoritarian governments will likely side with the CCP and seek out Chinese technologies to monitor their citizens and maintain a tight hold on power. This is detrimental to the democratic global order and will stymie human ideas and development. Going head-to-head with the CCP is challenging and will inevitably cause uncomfortable interactions at the international table. However, the United States must partner with the EU to spread ethical technology and data governance principles worldwide, promoting a world that encourages freedom of speech and expression.[4] No one wants to live in a world where they might go to jail for sending a text message professing their religious beliefs.

References
1. Yang J. WeChat Becomes a Powerful Surveillance Tool Everywhere in China. Wall Street Journal. https://www.wsj.com/articles/wechat-becomes-a-powerful-surveillance-tool-everywhere-in-china-11608633003. Published December 22, 2020. Accessed June 24, 2022.
2. Wang Y. In China, the ‘Great Firewall’ Is Changing a Generation. POLITICO. Published September 1, 2020. Accessed June 24, 2022. https://www.politico.com/news/magazine/2020/09/01/china-great-firewall-generation-405385
3. Canales K. China’s “social credit” system ranks citizens and punishes them with throttled internet speeds and flight bans if the Communist Party deems them untrustworthy. Business Insider. Published December 25, 2021. Accessed June 24, 2022. https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4
4. International Republican Institute. Chinese Malign Influence and the Corrosion of Democracy. Published online 2019.
5. Human Rights Watch. China: Events of 2021. In: World Report 2022. Accessed June 24, 2022. https://www.hrw.org/world-report/2022/country-chapters/china-and-tibet
6. Greene R, Triolo P. Will China Control the Global Internet Via its Digital Silk Road? Carnegie Endowment for International Peace. Published May 8, 2020. Accessed June 24, 2022. https://carnegieendowment.org/2020/05/08/will-china-control-global-internet-via-its-digital-silk-road-pub-81857

My body, my data, my choice: How data science enhancements threaten privacy to reproductive healthcare
Anonymous | June 23, 2022

Tweet: How surveillance technology and facial recognition software have entered the conversation about reproductive health care, and why the right to liberty and privacy is at stake

In May of 2022, a draft Supreme Court opinion revisiting the Court’s ruling in Roe v. Wade [1] was leaked to the public, indicating that US citizens (particularly women) could lose the right to privacy in reproductive health decisions such as abortion.

Protests in Washington D.C. in front of the U.S. Supreme Court following the leak of the potential Roe v. Wade overturning

While this decision has not been confirmed, there has been an uproar of public conversation about how families may be criminally prosecuted for seeking abortions where they are legal, and how facial recognition and other data science advancements may be used as evidence to prosecute or penalize them. This development demands that the right to reproductive privacy be protected at the federal level to preserve America’s fundamental right to liberty.

Since Roe v. Wade was decided in 1973, abortion and reproductive health care have been deeply politically charged topics. They have often strained the separation of church and state, with states whose social cultures are more rooted in religion making legislative decisions that protect life, and states with larger liberal populations making legislative decisions that protect choice. In the nearly 50 years since the Supreme Court’s ruling, reproductive rights have been a constant topic in presidential elections, often wielded divisively to win votes (or to sabotage opponents). So, with the release of the draft opinion signaling Roe’s potential overruling, the conversation about threats to individual privacy has started anew, but this time the public has realized the potentially menacing part that data science and technological advances play in it.

While it is not uncommon for pro-life protestors to station themselves outside reproductive health clinics such as Planned Parenthood, these “peaceful” protestors have begun using facial and license plate recognition software to track and monitor those who seek care at these locations [2]. While all U.S. citizens currently have a protected right to seek an abortion, the overturning of Roe v. Wade could mean that the information collected by these protestors becomes criminal evidence. Even someone who obtains an abortion in a state where that right is protected could be recognized through these surveillance techniques and penalized upon returning to a state with a different legal jurisdiction.

Protestors with a list of license plates (redacted) of people who visit a Planned Parenthood site in Charlotte, North Carolina

Unfortunately, the privacy concern doesn’t stop at facial or license plate recognition. Even indirectly associated services such as menstrual cycle trackers, private messaging, and search history logs could contain enough information to serve as evidence that a user may have aborted a pregnancy. Without a clear understanding of how people could be criminalized with this information, any or all collected data could be used for social or criminal retribution, depriving Americans of their 14th Amendment right not to be deprived of “life, liberty, or property without due process of law” [3].

All things considered, if Roe v. Wade were overturned, various state legislatures could still protect the right to abortion; however, the right to individual and corporeal privacy would be lost. Roe v. Wade may explicitly protect reproductive rights, but its absence would open a chasm in the protection of individuals’ corporeal, mental, and emotional privacy. Regardless of the Roe v. Wade decision, there needs to be federal legislation protecting the right to privacy. Without a federally imposed safeguard for individual privacy, especially given the growing capabilities of artificial intelligence, Americans could lose their basic liberty and could be socially or criminally penalized simply for making decisions about their mental, medical, or familial well-being.

For those who want information on safeguarding privacy while seeking reproductive health care, visit the Digital Defense Fund [4].

[1]. Roe v. Wade – Center for Reproductive Rights

[2]. Anti-abortion activists are collecting the data they’ll need for prosecutions post-Roe

[3]. Due Process and Equal Protection | CONSTITUTION USA with Peter Sagal | PBS

 

Cookieless world, Privacy & Disruption
Amey Mahajan | June 23, 2022

Personalized ads, digital behavior tracking, and the use of users’ data for targeted marketing have been major trends in the last few years, and these practices have been a major contributor to the growth of the marketing and retail industries. One report (ICSD 2022) suggests that the purchasing behavior of nearly 44% of consumers is driven by ads. These ads and digital marketing techniques rely heavily on third-party cookie sharing. By 2023, though, Google plans to remove third-party cookie support from Chrome (nd 2022) in the name of user privacy, as Firefox and Apple have already done. A sketch of how such a cookie works follows the image below.

Four cookies, each smaller than the one before.
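For readers unfamiliar with the mechanics: a third-party cookie is one set by a domain other than the site you are visiting, typically by an ad or tracking script embedded on many sites. The sketch below (all domain names and values are invented for illustration) builds the response header a hypothetical ad server would send; because the browser returns the same identifier on every site embedding that server, visits can be stitched into one cross-site profile:

```python
import uuid

# Hypothetical ad server (ads.example) embedded on both news-site.example
# and shop-site.example. When the browser loads the embedded ad, the ad
# server sets a cookie on ITS OWN domain -- a third-party cookie from the
# visited site's point of view.
tracker_id = uuid.uuid4()
set_cookie = (
    f"Set-Cookie: tracker_id={tracker_id}; "
    "Domain=ads.example; Path=/; Max-Age=31536000; "
    "SameSite=None; Secure"  # SameSite=None is required for cross-site use
)
print(set_cookie)

# On every later page that embeds ads.example, the browser sends back the
# same tracker_id, letting the ad server link visits across unrelated sites.
# Chrome's planned change blocks exactly this cross-site round trip.
```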

A decision by the platform with the largest user base, whose users are also businesses’ potential customers, has repercussions and consequences across domains that need to be studied carefully. The discussion and analysis should proceed not just through the lens of user privacy but also through its far-reaching impacts on overall marketing strategies and the economic impact on smaller businesses that don’t hold a huge trove of first-party cookies (which will remain).

User privacy is a top concern for all companies in this ever-growing digital market. Until recently, topics like data collection practices, how collected data is packaged and shared across platforms, and the potential harms of such widely used techniques were rarely discussed. With increased awareness of the many dimensions of privacy and of the legitimate practices that need to be enforced, along with stringent regulations like GDPR and CCPA, companies like Google have realized the importance of putting customer interests first. Moreover, studies have shown that “81% of people say that the potential risks they face because of data collection outweigh the benefits” (Temkin 2021). It is imperative to take steps in the right direction so that people feel comfortable as their digital footprints expand.

Third-party cookie retargeting process diagram

Ads generated from these third-party cookies drive a great deal of revenue for small and big businesses alike, because consumer behavior analysis and overall business models depend heavily on them. Given that nearly 81% of companies depend on this technique to drive their business (RetailWire 2022), it is undeniable that there will be huge disruption once the world goes “cookie-less.” This long-lasting impact needs to be studied and analyzed, and alternative techniques must be introduced to soften the blow. Apart from retail, the sector likely to be hardest hit by this decision is the marketing industry (Caccavale 2021). Revenues are directly proportional to the techniques used to capture customer behavior and to the metrics used to analyze and strategize targeted marketing campaigns. Given that the foundational technique itself is changing radically, this sector will certainly be hit and should gear up for it. Studies and articles have already begun describing techniques for businesses that heavily leverage third-party cookies in their strategies.

In today’s day and age, technological advancements play a pivotal role in reaching and analyzing customers and in driving business decisions based on that analysis. With the landscape changing so quickly, and with privacy at the forefront of these changes, exhaustive research is needed into why the proposed changes matter, whom they will affect, and at what scale.

References:

Caccavale, Michael. “Council Post: Bye Bye, Third-Party Cookies.” Forbes, Forbes Magazine, 13 Apr. 2021, https://www.forbes.com/sites/forbesagencycouncil/2021/04/13/bye-bye-third-party-cookies/?sh=66d66d3a3788.

International Centre for Settlement of Investment Disputes. (n.d.). Retrieved June 23, 2022, from https://icsid.worldbank.org/

3 steps for marketers to prepare for a cookieless world. Gartner. (n.d.). Retrieved June 23, 2022, from https://www.gartner.com/en/marketing/insights/articles/three-steps-for-marketers-to-prepare-for-a-cookieless-world

Google ending third-party cookies in Chrome. (n.d.). Retrieved June 23, 2022, from https://www.cookiebot.com/en/google-third-party-cookies/

Will retailers be ready when the third-party cookies crumble? RetailWire. (n.d.). Retrieved June 23, 2022, from https://retailwire.com/discussion/will-retailers-be-ready-when-the-third-party-cookies-crumble/#:~:text=Eighty%2Done%20percent%20of%20companies,State%20of%20Customer%20Engagement%20Report.%E2%80%9D

Temkin, D. (2021, March 3). Charting a course towards a more privacy-first web. Google. Retrieved June 23, 2022, from https://blog.google/products/ads-commerce/a-more-privacy-first-web/

 

When an auto accident happens, is it due to the driver’s ability to drive or is it due to their credit score?
By Jai Raju | June 24, 2022

One car collides with the front of another.

We all know that in the USA there are regulations barring industries from discriminating by race, gender, age, and other protected classes. So it should come as no surprise that auto insurance companies are not allowed to use race as a parameter in setting your premium. On the surface, it would appear they don’t. But if you take a deeper look and connect a few dots, you’ll see that is exactly what they do: discriminate by race.

Context: Auto Insurance Industry as a case study

Insurance is a means of protection from financial loss. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.

— Wikipedia

An auto insurance premium is a function of:

  1. Moving violations – the number of tickets
  2. Loss history – prior accidents
  3. Location – some locations are densely populated, and the more vehicles on the road, the higher the chance of accidents
  4. Age – like everything in life, learning to drive well takes time, so younger drivers are more likely to get into accidents than mature drivers
  5. Gender – women are statistically safer drivers than men
  6. Marital status – married drivers are safer than single drivers
  7. Credit score – people with lower credit scores are more likely to file a claim than people with higher credit scores

There are several other variables that determine how much you pay. At the core, it is a mathematical equation that data scientists build by relating these variables to the potential loss you may cause; a toy sketch of such an equation appears below.

While all of that seems reasonable on the surface, note that there is no scientific proof behind any of the reasons given for using these variables. The whole industry uses observational data to summarize, statistically, the impact of these factors on the potential loss from a driver. While these variables are correlated with loss, there is no proof that they actually cause it. When there is no causation, it is unreasonable to use variables that are discriminatory.
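Here is a deliberately simplified, hypothetical sketch of the kind of rating equation described above. The base rate and all factor weights are invented for illustration and are not taken from any real insurer; the point is that credit score enters the price multiplicatively even though it measures finances, not driving:

```python
# Hypothetical, simplified auto-premium rating equation.
# All base rates and factor weights are invented for illustration only.

BASE_RATE = 1000.0  # annual premium in dollars before adjustments

def premium(tickets: int, prior_accidents: int, age: int, credit_score: int) -> float:
    """Multiply a base rate by a relativity for each rating variable."""
    violation_factor = 1.0 + 0.15 * tickets          # driving-related
    accident_factor  = 1.0 + 0.25 * prior_accidents  # driving-related
    age_factor       = 1.4 if age < 25 else 1.0      # proxy for experience
    credit_factor    = 1.5 if credit_score < 600 else 1.0  # not driving-related
    return BASE_RATE * violation_factor * accident_factor * age_factor * credit_factor

# Two drivers with identical driving records pay very different premiums:
print(premium(tickets=0, prior_accidents=0, age=40, credit_score=780))  # 1000.0
print(premium(tickets=0, prior_accidents=0, age=40, credit_score=580))  # 1500.0
```

Because credit scores correlate with race and income, a factor like this can reproduce racial disparities in price even though race never appears in the equation.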

Investigation:

Auto insurance protects you against loss arising from the operation of your vehicle; as such, it protects you from mistakes you could make while driving. Yet insurers use credit score as their most important variable in computing the expected loss. Said another way, auto insurance companies claim that if you have a bad credit score, you are more likely to cause an accident. Here is why that claim is flawed:

  1. First, how is a credit score related to your driving behavior at all?
  2. Second, credit scores are determined by private, for-profit, publicly traded companies whose goal is to turn a profit. Credit bureaus are under no legal requirement to be accurate, and the current credit reporting bureaus make a tremendous number of mistakes at consumers’ expense: a 2013 Federal Trade Commission study of the U.S. credit reporting industry found that 5% of consumers had errors on their credit reports. This disproportionately affects the poor, who cannot afford lawyers to get errors corrected. A Congressional Research Service report stated that consumers sometimes find it difficult to advocate for themselves when credit reporting issues arise because they are not aware of their rights and how to exercise them.
  3. Third, there is plenty of research showing that people of color disproportionately have lower credit scores. In 2020, 18% of Black Americans had no credit score, compared to 15% of Latinos, 13% of white Americans, and 10% of Asian Americans.

A similar argument can be made for the moving violations variable: there is plenty of research showing that people of color are disproportionately pulled over by police and given citations. The insurance industry has turned a blind eye to this.

Parting thoughts:

Mobility is an essential part of the path to the middle class. The auto insurance industry has a responsibility to treat all drivers equally and to assess their risk purely on the basis of their driving behavior. It should not look for ways to discriminate.

“Racism and the economy are tied together; racism has been about the economy, and the economy has fueled racism. We often think of them as separate, but putting them together is the only way to get at the issues and challenges associated with racism.” – Angela Glover Blackwell
