Privacy Policies: Manufactured Consent
Angel Ortiz | July 7, 2022

The conversation surrounding privacy policies and terms of service (ToS) has attracted growing public interest in recent years, and with it concern about what exactly people are agreeing to when they “click” accept. This heightened interest in the terms governing the use of one’s private information (as well as its protection) was likely sparked in part by the Facebook/Cambridge Analytica privacy scandal of 2018 (Confessore, 2018). This event stirred a social discussion on how companies protect our data and what they are allowed to do with the data we provide. However, despite this burgeoning unease over the misuse of user data, it is all too common to find ourselves blindly accepting the ToS of some website or application because we find it too much of a nuisance to read. While much of this behavior is the responsibility of the consumer, one must also wonder what obligations companies have when drafting their policies. After all, if users frequently accept a ToS solely because reading it is inconvenient, then one must begin to wonder whether this bothersome nature is purposely infused into the text for that very reason.

Complexity of Privacy Policies 

In May 2014, the California Department of Justice outlined several principles privacy policies should comply with in order to properly communicate their contents to users. One of these key principles was “Readability,” which specified that privacy policies should (among other things) use short sentences, avoid technical jargon, and be straightforward (State of California Department of Justice & Harris, 2014). Similarly, the FTC advocated for briefer, more transparent privacy policies in a report published in 2012 (Federal Trade Commission, 2012, pp. 60-61). Despite these guidelines, privacy policies seem more complex than ever before, and this complexity does not necessarily stem from the length of the text.

While there are some excessively long privacy policies, researchers from Carnegie Mellon University estimated that, on average, it would take 10 minutes to read a privacy policy for someone with a secondary education (Pinsent Masons, 2008). Though this is a somewhat long read, most would argue they could easily dedicate 10 minutes to a privacy policy for a service of real importance. The problem with these policies usually lies in the complexity of the reading rather than its length. In 2019, The New York Times used the Lexile test to determine the complexity of 150 privacy policies from some of the most popular websites and applications. They found that most of the policies required a reading comprehension level exceeding the college level (Litman-Navarro, 2019); for reference, it is estimated that only 37.9% of Americans aged 25 or older have a bachelor’s degree (Schaeffer, 2022). At face value, this means that a non-negligible portion of the U.S. population does not have the education to understand what some of these privacy policies entail.
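To make “reading complexity” concrete, here is a minimal sketch of the Flesch-Kincaid grade-level formula, a readability measure in the same spirit as the Lexile test the Times used (the Lexile formula itself is proprietary). The syllable counter is a crude heuristic, and the excerpt is an invented policy-style sentence, not text from any real policy:

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic; real readability tools use better counters."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:  # discount most silent e's
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

excerpt = ("We may share aggregated or de-identified information with our "
           "affiliates, service providers, and other third parties for "
           "analytics and advertising purposes.")
print(f"Estimated U.S. grade level: {flesch_kincaid_grade(excerpt):.1f}")
```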

Purposeful Inconvenience? 

Some may conjecture that this complexity is purposefully manufactured to inconvenience consumers into not reading privacy policies before accepting a ToS. While this is an enticing thought, we cannot disregard a less nefarious explanation for why privacy policies are written in such a complex manner: legal scrutiny. Some experts, such as Jen King, the director of consumer privacy at the Center for Internet and Society, hold that privacy policies exist to appease lawyers (Litman-Navarro, 2019). That is to say, privacy policies are not written with consumers as the intended audience.

Solution 

Regardless of the real intent behind the complexity of privacy policies, their effect is undeniable: some users cannot properly comprehend them or, at least, will not dedicate the time to do so. Therefore, we must ask: how can we solve this problem? Often the simplest solution is the correct one, and that holds true here as well. If the problem stems from privacy policies not being written for consumers, then companies should begin writing their policies with consumers in mind. This would necessarily entail making the texts shorter, more to the point, and lighter on “legalese.”

Conclusion 

It is important for individuals to take the time to properly understand what they are agreeing to when they accept a ToS and its corresponding privacy policy. This, of course, is not likely the norm, and most would place the fault for this behavior on consumers. However, when some of these policies are made so long and complex that it is not only an inconvenience but an impossibility for many users to properly comprehend what they are agreeing to, then I would argue that this common practice is not the fault of the consumer but of the policy writers themselves. We no longer live in a time where we have the luxury of not partaking of services that intake our data; as such, it is my hope that as this discussion continues to grow, more policy writers shift focus to include user understanding in their privacy policies. Otherwise, I suggest we make law school cheaper so that more people can obtain degrees in privacy policy comprehension.

References 

Average privacy policy takes 10 minutes to read, research finds. (2008, October 06). Pinsent Masons. Retrieved July 3, 2022, from https://www.pinsentmasons.com/out-law/news/average-privacy-policy-takes-10-minutes-to-read-research-finds

Confessore, N. (2018, November 15). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. Retrieved July 3, 2022, from https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Federal Trade Commission. (2012, March). Protecting Consumer Privacy in an Era of Rapid Change. https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf

Litman-Navarro, K. (2019, June 13). Opinion | We Read 150 Privacy Policies. They Were an Incomprehensible Disaster. The New York Times. Retrieved July 3, 2022, from https://www.nytimes.com/interactive/2019/06/12/opinion/facebook-google-privacy-policies.html

Schaeffer, K. (2022, April 12). 10 facts about today’s college graduates. Pew Research Center. Retrieved July 3, 2022, from https://www.pewresearch.org/fact-tank/2022/04/12/10-facts-about-todays-college-graduates/

State of California Department of Justice, & Harris, K. (2014, May). Making Your Privacy Practices Public. Privacy Unit. https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf

Emotional Surveillance: Music as Medicine
Anonymous | July 7, 2022

Can streaming platforms uphold the Hippocratic Oath? Spotify’s emotional surveillance patent exemplifies how prescriptive music could do more harm than good when it comes to consumers’ data privacy.

The pandemic changed the way we listen to music. In a period of constant uncertainty, many people turned to music, and to more calming, meditative music in particular. During this time, playlists specially curated with lo-fi tracks and nature sounds started popping up on Apple Music. This category is often labeled “Chill,” but it takes on many different names. The idea of music and sound therapy remains at the forefront of listener behavior today, with a TikTok trend sharing brown noise (brown noise carries deeper, lower frequencies than white noise, closer to rain and storms). Brown noise is reported to help alleviate symptoms of ADHD, and it is being listened to as a sort of therapy by people who deal with anxiety.
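The “deeper” character of brown noise has a precise signal-processing meaning: its power falls off toward high frequencies (roughly a 1/f² spectrum), which can be approximated by integrating white noise. A minimal NumPy sketch, with the sample rate and duration as illustrative assumptions:

```python
import numpy as np

SAMPLE_RATE = 44_100   # audio sample rate in Hz (assumed for illustration)
SECONDS = 5
rng = np.random.default_rng(0)

# White noise: equal power at all frequencies.
white = rng.standard_normal(SAMPLE_RATE * SECONDS)

# Brown (Brownian) noise: a running sum of white noise, which concentrates
# power in the deep, low frequencies (an approximately 1/f^2 spectrum).
brown = np.cumsum(white)
brown -= brown.mean()              # remove the slow drift
brown /= np.max(np.abs(brown))     # normalize to [-1, 1] for playback
```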

The idea of listening to music as therapeutic is not new; what is new is that an AI tool might now be feeding you the “right” diagnosis. While there is no cause for concern over someone being suggested a calming playlist, the bigger issue is the direction this could take us in the future, and how audio-driven surveillance in recommendation systems dilutes a user’s right to data privacy, especially when a platform wants to recommend music based on audio features that correspond to the user’s emotional state. That is what Spotify was contemplating with the patent it won back in 2021.

Spotify’s patent is a good case study for the direction in which many streaming services are headed. Using this example, we can unpack the ways in which a user’s data and privacy are at risk.

The specific language of the patent is as follows:

“There is retrieval of content metadata corresponding to the speech content, and environmental metadata corresponding to the background noise. There is a determination of preferences for media content corresponding to the content metadata and the environmental metadata, and an output is provided corresponding to the preferences.” [5]

Since this patent was granted, there has been significant uproar over its potential impacts. In layman’s terms, Spotify was seeking to use AI to uncover tone, monitor your speech and background noise, and recommend music based on attributes its algorithm correlates with specific emotional states. For example, if you are alone, have been playing a lot of down-tempo music, and have been telling your mom how depressed you feel, the system will categorize you as “sad” and feed you more sad music.
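Spotify has not published how such a system would work, but a toy sketch conveys the mechanics: infer a mood label from speech cues and listening history, then rank tracks whose audio features match that mood. Valence and energy are real attributes in Spotify’s published audio features; everything else here, including the thresholds and the classifier itself, is a hypothetical stand-in:

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    valence: float  # 0.0 = sad/negative sound, 1.0 = happy/positive
    energy: float   # 0.0 = calm, 1.0 = intense

def infer_mood(speech_keywords: set[str], recent_valence: float) -> str:
    """Hypothetical classifier combining speech cues with listening history."""
    if "depressed" in speech_keywords or recent_valence < 0.3:
        return "sad"
    return "neutral"

def recommend(tracks: list[Track], mood: str, k: int = 3) -> list[Track]:
    """Rank tracks whose valence sits closest to the mood's target value."""
    target = 0.15 if mood == "sad" else 0.6   # assumed targets
    return sorted(tracks, key=lambda t: abs(t.valence - target))[:k]

library = [Track("Rainy Window", 0.10, 0.20),
           Track("Summer Drive", 0.90, 0.80),
           Track("Late Night", 0.25, 0.30)]
mood = infer_mood({"depressed"}, recent_valence=0.2)
print(mood, [t.title for t in recommend(library, mood, k=2)])
# -> sad ['Rainy Window', 'Late Night']
```

Even in this toy, the filter-bubble problem discussed below is visible: once the label is “sad,” low-valence tracks are the only thing the ranking ever surfaces.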

Since winning the patent, Spotify has indicated it has no immediate intention to use the technology. That is a good sign, because there are a few ways this idea could cause data privacy harm if it were used.

Users have a right to correct the data the app collects.
To meet regulatory standards, Spotify would need to disclose how it arrives at the emotions it assigns you based on its audio analysis. If it thinks you are depressed, but you are being sarcastic, how will you as a consumer correct that? Without the logistics to do so, Spotify is introducing a potential privacy harm for its users. Spotify is known to sell user data to third parties, where it could be aggregated and distorted, and you could end up being pushed ads for antidepressants.

Spotify could create harmful filter bubbles.
When a recommendation system is built to continually push content matching what it thinks a user’s mood is, it inherently prolongs potentially problematic emotional states. In this example scenario, continuing to listen to sad music when you are depressed can harm your emotional wellbeing rather than improve it. As with any scientific or algorithmic experimentation, we know from the Belmont Report that any feature that could affect a user’s or participant’s health must do no harm. The impact of a filter bubble (where you only get certain content) can mimic the harm done by YouTube’s recommendations, creating a feedback loop that maintains the negative emotional state.

Users have a right to know.
Part of Spotify’s argument for why this technology could benefit the user is that without collecting this data passively from audio, the user must click buttons to select mood traits and build playlists. According to the Fair Information Practice Principles, however, Spotify must be transparent and involve the individual in the collection of their data. While the user experience is extremely important, users still need to know that this data is being collected about them. Spotify should incorporate an opt-in consent mechanism if it moves forward with this system.

Spotify still owns the patent for this technology, and other platforms are considering similar trajectories. As the music industry considers the next wave of how we interact with music and sound, streaming platforms should be careful if they plan on building recommendation systems that leverage emotion metadata to curate content. This type of emotional surveillance dips into a realm of data privacy with the potential to cause more harm than good, and any service provider moving in this direction should weigh the implications for data privacy harm.

References 

[1] https://montrealethics.ai/discover-weekly-how-the-music-platform-spotify-collects-and-uses-your-data/
[2] https://www.musicbusinessworldwide.com/spotifys-latest-invention-will-determine-your-emotional-state-from-your-speech-and-suggest-music-based-on-it/
[3] https://www.stopspotifysurveillance.org/
[4] https://www.soundofsleep.com/white-pink-brown-noise-whats-difference/
[5] https://patents.justia.com/patent/10891948
[6] https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Mulligan-Griffin-pp-557-84.pdf
[7] https://theartofhealing.com.au/2020/02/music-as-medicine-whats-your-recommended-daily-dose/
[8] https://www.digitalmusicnews.com/2021/04/19/spotify-patent-response/
[9] https://www.bbc.com/news/entertainment-arts-55839655

Password Replacement: Your Face Here
Jean-Luc Jackson | July 7, 2022

Biometrics promise convenient and secure logins, making passwords a thing of the past. However, consumers should be aware of possible gaps in security and vigilant of long-term shifts in cultural norms.

Microsoft encourages users to go passwordless

Technology leaders such as Microsoft, Apple, and Google are promising an impending future free of passwords. Passwordless authentication methods in use today include text or in-app validation codes, emailed “magic links”, and the user’s biometric data. Biometric-based methods are poised to replace traditional passwords and become the primary authentication systems across big tech. No longer confined to spy films, biometric authentication now lets consumers prove their digital identities using facial and fingerprint scans instead of employing their favorite password management service. These are exciting developments, but consumers should always be wary when exposing sensitive personal information like biometrics. The stakes of biometric data insecurity are high: passwords can be reset and new credit cards can be printed, but biometrics are permanently tied to their source, and permanently identify it.

The National Academy of Sciences defines biometrics as “the automated recognition of individuals based on their behavioral and biological characteristics [1].” Biometrics take advantage of features that are unique to individuals and that don’t change significantly over time. Commonly encountered examples include a person’s fingerprints, face geometry, voice, and signature. Other contenders include a person’s gait, heartbeat, keystroke dynamics, and ear shape. In other words, the way you walk, your typing patterns, and the contours of your ears are distinctive and could be used to identify you.

Published Figures on Ear Shape for Biometric Identification
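Keystroke dynamics, for example, can be reduced to a simple numeric comparison: convert raw key-press timestamps into inter-key intervals and measure how far a new sample deviates from the user’s enrolled profile. A minimal sketch; the timings and the acceptance threshold are illustrative assumptions, and production systems use far richer features and models:

```python
import math

def timing_profile(keystroke_times: list[float]) -> list[float]:
    """Convert raw key-press timestamps (seconds) into inter-key intervals."""
    return [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]

def matches(enrolled: list[float], sample: list[float],
            threshold: float = 0.05) -> bool:
    """Accept if the RMS timing deviation is within an (assumed) threshold."""
    if len(enrolled) != len(sample):
        return False
    deviation = math.sqrt(sum((e - s) ** 2
                              for e, s in zip(enrolled, sample)) / len(enrolled))
    return deviation < threshold

enrolled = timing_profile([0.00, 0.12, 0.31, 0.45, 0.66])  # user typing "hello"
attempt  = timing_profile([0.00, 0.13, 0.30, 0.46, 0.64])
print(matches(enrolled, attempt))  # True: timings closely match the profile
```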

The advantage of biometrics in authentication is that they cannot be forgotten or guessed, and they are convenient to present. Microsoft announced in 2021 that consumers could get rid of their account passwords and opt in to facial recognition or fingerprint scanning (a service dubbed “Windows Hello”) [2]. Apple and Google have announced similar biometric passkey technologies to be rolled out later this year [3, 4]. With this momentum, biometrics will soon be ubiquitous across modern smart devices and could one day be the only accepted login method.

Passwordless technologies offered by these tech companies utilize decentralized security standards like FIDO (Fast IDentity Online). This authentication process involves a pair of public and private keys. The public key is stored remotely in a service’s database, while the private key is stored on the user’s device (e.g., a smartphone). When the user proves their identity on their device using biometrics (e.g., with a face scan), the device uses the private key to sign a challenge from the online service, and the login is approved when the service verifies that signature against the stored public key. This design ensures that biometric information, like the private key itself, remains on the device and is never shared or stored on a server, eliminating the threats of interception or database breaches.

FIDO Login Process
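To see why the private key never needs to leave the device, here is a minimal challenge-response sketch using the Python cryptography package, with Ed25519 signatures standing in for FIDO’s actual ceremony (which this simplifies considerably):

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the keypair is generated on-device; only the public key
# is uploaded to the service.
device_private_key = Ed25519PrivateKey.generate()      # never leaves the device
server_public_key = device_private_key.public_key()    # stored by the service

# Login: the server sends a random challenge; after a successful biometric
# check, the device signs it. Only the signature travels over the network.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

# The server verifies the signature against the stored public key.
try:
    server_public_key.verify(signature, challenge)
    print("Login approved")
except InvalidSignature:
    print("Login rejected")
```

A stolen database of public keys is useless to an attacker, which is exactly the property that makes this design resistant to server-side breaches.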

FIDO standards are an example of a decentralized authentication system, since biometric data is verified on-device and never stored on a central server. A centralized system, on the other hand, authenticates by comparing biometric data against data saved in a central database. Such systems are more prone to manipulation and data breaches because they present a more valuable attack surface. We should be vigilant of organizations that use centralized systems and pay close attention when they appear in government applications, such as states storing biometric data about their citizens [5].

Though passwordless methods minimize security risks, gaps do exist. Researchers have successfully reconstructed people’s original face images from the on-device numerical templates that facial recognition scans produce [6]. The potential to decode numerical representations of biometric data poses the threat of a new form of identity theft. Since biometrics are treated as ground-truth authentication, such a theft would unlock broad access in a world filled with biometric logins. While most thieves won’t be able to exploit stolen biometric data with off-the-shelf methods, this risk will continue to expand as technology evolves and should receive additional attention.

It’s also possible to create imitation biometrics that allow unwanted access. Fingerprint security has often been bypassed by reproducing a copy of a fingerprint, but in 2018 a group of researchers created a machine learning model that generated fake fingerprints capable of unlocking smartphones [7]. The continuous advancement of technology yields both benefits and risks depending on who holds the tools, reminding us to exercise caution in sharing data and to push companies to keep consumer protection a priority.

There is no doubt that biometrics offer added convenience, and the latest authentication standards promise stronger levels of security. But as biometrics become the prevailing authentication method, we normalize the routine use of sensitive personal information in a variety of contexts. Individuals will inevitably grow more accustomed to sharing valuable information with organizations to remain productive members of society. Moving forward, it will be even more important for us as consumers to demand transparency and hold organizations accountable to minimizing data collection to only what is necessary and not using data for secondary purposes.

For context, there is currently no federal regulation of biometric privacy. Various states have enacted biometric-specific privacy laws, with Illinois and California leading the way in protecting their residents. The number of state laws continues to grow, signaling the potential for national regulation soon.

Citations
[1] https://www.ncbi.nlm.nih.gov/books/NBK219892/
[2] https://www.microsoft.com/security/blog/2021/09/15/the-passwordless-future-is-here-for-your-microsoft-account/
[3] https://developer.apple.com/passkeys/
[4] https://developers.google.com/identity/fido
[5] https://www.technologyreview.com/2020/08/19/1007094/brazil-bolsonaro-data-privacy-cadastro-base/
[6] https://ieeexplore.ieee.org/document/8338413
[7] https://www.wired.com/story/deepmasterprints-fake-fingerprints-machine-learning/

Images
[1] https://www.microsoft.com/en-us/security/business/identity-access/azure-active-directory-passwordless-authentication
[2] https://link.springer.com/referenceworkentry/10.1007/978-1-4419-5906-5_738
[3] https://fidoalliance.org/how-fido-works/

Is TikTok really worth it? U.S. FCC Commissioner doesn’t think so
Anonymous | July 7, 2022

It’s no secret that over the last two years, TikTok has become one of the most popular social media applications in the world; in the United States alone, it saw 19 million downloads in the first quarter of 2022. American users spend hours daily going through all sorts of videos, from cute dogs to extreme athletes. The algorithm is said to be one of the best in the world, so good that users can’t find a way to log off. TikTok has changed how Americans consume information – with short videos being the new communication norm – as the app shares everything from unsolved crimes to local news, sometimes even faster than the news itself. But amid the hype, have we ever stopped to consider what user data TikTok is collecting?

Commissioner of the Federal Communication Commission (FCC) Brendan Carr is so concerned about TikTok’s data access that he believes the application should be removed entirely from iPhone and Android app stores in the United States. So on June 24, 2022, he asked Apple and Google to take action (Carr, 2022). But he didn’t get too far.

After listening to BuzzFeed News’ leaked recordings from internal TikTok meetings, Carr believes TikTok has “repeatedly accessed nonpublic data about U.S. TikTok users” (Carr, 2022). Carr has also alleged that TikTok’s American employees “had to turn to their colleagues in China to determine how U.S. user data was flowing,” even though TikTok promised the American government that an American-based security team had those controls (Carr, 2022). The user data is extensive – voiceprints, faceprints, keystroke patterns, browsing histories, and more (Carr, 2022).

In the leaked recording, a TikTok official is heard saying, “Everything is seen in China,” about American user data, even though TikTok has repeatedly claimed that the data it gathers about Americans is stored solely in the United States (Meyer, 2022). In any case, China shouldn’t be allowed access to that data, as such access isn’t outlined in TikTok’s Terms of Use (TikTok Inc., 2019). By contrast, other applications like Instagram state that restriction clearly in their Terms of Use (Meta, 2022).

“At its core, TikTok functions as a sophisticated surveillance tool that harvests extensive amounts of personal and sensitive data,” Carr wrote in his letters to Google and Apple, which were published on his Twitter profile (Carr, 2022). Carr asks these tech giants to remove TikTok from their App Stores, which begs the question – is that allowed? Technically, he’s justified in asking for this. But why?

TikTok’s misrepresentation of where user data is stored puts it out of compliance with the policies both Apple and Google require every application to adhere to as a condition of being available for download (Carr, 2022). However, neither Apple nor Google has responded. Given the cry for help from an FCC commissioner, one would think the FCC’s authority over social media would be the final word, but surprisingly, that’s not the case. It turns out the FCC is responsible for regulating communications infrastructure, not what is communicated over it; therefore, it has little to no control over social media. Its own retreat from net neutrality further ceded its power to regulate social media and big tech, so while the FCC can call for action, it can’t necessarily act on it (Coldewey, 2020).

Unfortunately, the United States government cannot impose fines on TikTok, as no law has been broken. Any action against the tech giant would need to come from Congress, with agreement from both political parties. Without set regulation, it’s hard to charge TikTok with anything.

TikTok is no stranger to data malpractice. In 2021, TikTok, while denying the claims, agreed to pay $92 million to settle a lawsuit alleging that the app transferred data to servers and third parties in China that could identify, profile, and track the physical locations of American users (Bryan & Boggs, 2021). In 2019, TikTok’s parent company, ByteDance, also reached a settlement with a group of parents who alleged that the company collected and exposed the data of minors, violating an American children’s privacy law (Haasch, 2021).

The controversy didn’t stop there. TikTok responded to Carr’s claims by saying the recordings were taken out of context. In a letter published by The New York Times, TikTok’s CEO, Shou Zi Chew, said the conversations in the recordings concerned an initiative designed to “strengthen the company’s data security program” (Chew, 2022). Chew went into detail about how TikTok prevents data from being routed to China, mainly by locating data servers directly in the U.S., with help from American consulting firms in designing that process (Chew, 2022).

All of this begs the question: is TikTok worth it? Would you risk your data for the videos? Unfortunately, there’s little way to know if TikTok and Chew are following their policies, and the United States government is far from adequately regulating the app. It’s up to you to decide what you should do.

Sources

Bryan, K. L., & Boggs, P. (2021, October 5). Federal Court Approves $92 Million TikTok Settlement. National Law Review. Retrieved July 7, 2022, from http://natlawreview.com/article/federal-court-gives-preliminary-approval-92-million-tiktok-mdl-settlement-over

Carr, B [@BrendanCarrFCC]. (2022, June 28). TikTok is not just another video app. That’s the sheep’s clothing. It harvests swaths of sensitive data that new reports show are being accessed in Beijing. I’ve called on Apple and Google to remove TikTok from their app stores for its pattern of surreptitious data practices. [Tweet]. Twitter. https://twitter.com/brendancarrfcc/status/1541823585957707776

Chew, S. Z. (2022, June 30). TikTok’s Response to Republican Senators. The New York Times. Retrieved July 4, 2022, from https://int.nyt.com/data/documenttools/tik-tok-s-response-to-republican-senators/e5f56d3ef4886b33/full.pdf

Coldewey, D. (2020, October 19). Who regulates social media? TechCrunch. Retrieved July 7, 2022, from https://techcrunch.com/2020/10/19/who-regulates-social-media/

Haasch, P. (2021, November 19). TikTok May Owe You Money From Its $92 Million Data Privacy Settlement. Business Insider. Retrieved July 6, 2022, from https://www.businessinsider.com/tiktok-data-privacy-settlement-how-to-submit-claim-2021-11

Meta. (2022, January 4). Terms of Use. Instagram. Retrieved June 12, 2022, from https://help.instagram.com/581066165581870

Meyer, D. (2022, June 29). Apple and Google should kick TikTok out of their app stores, FCC commissioner argues. Fortune. Retrieved July 5, 2022, from https://fortune.com/2022/06/29/apple-google-tiktok-iphone-android-brendan-carr-fcc-privacy-surveillance-china-snowden/

Montti, R. (2022, July 5). TikTok Responds To Allegations Of Unsecured User Data. Search Engine Journal. Retrieved July 6, 2022, from https://www.searchenginejournal.com/tiktok-responds-user-data/456633/#close

TikTok Inc. (2019, February 1). Terms of Service. TikTok. Retrieved July 4, 2022, from https://www.tiktok.com/legal/terms-of-service-us?lang=en

Dangerous Data at Disney
Conner Brew | June 23, 2022

Disney Uses Cutting-Edge Tech to Optimize Its Parks

At Disney Parks, guests enjoy a one-of-a-kind magical experience. What many guests may not realize, however, is the extent to which that magic depends on the collection of their personal data. Disney parks, such as the world-famous Disney World Resort in Orlando, Florida, rely on cutting-edge technology to ensure that guests’ experiences are personalized and unforgettable. They do this through the MyMagic+ mobile app, wearable MagicBands, and countless machine-learning-optimized shows and attractions throughout the parks.

How Does Disney Use Personal Data?

Since the arrival of the coronavirus pandemic, changes to the Disney park system have made the MyMagic mobile app absolutely necessary to the Disney experience. After purchasing park tickets, guests must register and reserve the days they plan to visit various parks – for example, if a guest purchases a 5-day pass to Disney World, they must reserve in the app the specific days on which they plan to visit individual parks like Animal Kingdom, Hollywood Studios, or Epcot. MyMagic also contains features for guests to retrieve Disney PhotoPass pictures taken throughout their park experiences, and allows guests to reserve FastPasses and other means of reserving space on busy attractions within the parks 1. Perhaps most practically, the app uses GPS location to provide the user with a live map of the park, instant directions to attractions of their choice, and the wait times of all attractions. In the past, attraction wait times were calculated using a device that guests could carry in line with them – today, many Disney attractions use machine learning coupled with the location-tracking power of the MyMagic app to predict and optimize attraction wait times.
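Disney has not published how these models work, but a wait-time predictor of this kind is, at its core, a regression over crowd and time features. A minimal scikit-learn sketch trained on synthetic data; every feature, coefficient, and number below is an illustrative assumption, not Disney’s actual model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 1_000

# Hypothetical features: hour of day, guests near the ride (e.g., from app
# GPS pings), and whether it is a weekend.
hour = rng.integers(9, 22, n)
nearby_guests = rng.integers(0, 500, n)
weekend = rng.integers(0, 2, n)
X = np.column_stack([hour, nearby_guests, weekend])

# Synthetic "ground truth": waits grow with crowding and peak mid-afternoon.
y = (5 + 0.12 * nearby_guests + 8 * weekend
     - 0.08 * (hour - 14) ** 2 + rng.normal(0, 3, n))

model = GradientBoostingRegressor().fit(X, y)
wait = model.predict([[14, 300, 1]])[0]
print(f"Predicted wait (2pm, 300 nearby guests, weekend): {wait:.0f} min")
```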

Disney's Magic Bands Make the Experience More Fun, But At What Cost?

To gain maximum benefit of the MyMagic experience, guests are encouraged to purchase and wear Magic Bands. These Magic Bands can be loaded with digital payment information, digital park tickets and park reservation information, restaurant reservations, and virtually any other piece of digital information that could potentially make the Disney park experience more convenient and enjoyable. These Magic Bands use radio-frequency identification (RFID) technology to communicate with devices throughout the parks to make transactions, access reservations, and more. Disney also uses personal information stored on a guest’s Magic Band to personalize their park experience in unspecified ways: “And, if you choose to use a MagicBand, it can add a touch of magic to your vacation by unlocking special surprises, personalized just for you, throughout the Walt Disney World Resort!” 2

In addition to these relatively explicit means of improving and personalizing the Disney park experience through personal data collection, numerous Disney patents and studies show that Disney also optimizes its parks using data collected far less explicitly. For example, Disney has patented technology that allows it to identify and track individual guests using scans of their shoes 3. Disney claims that this method of identification and tracking is less invasive than biometric methods such as facial recognition, but Solove and other privacy experts might disagree. The ability to personally identify and track individual guests may be equally invasive regardless of the specific piece of data used to do it – whether Disney is tracking shoes or faces, isn’t it still pretty invasive?

Conclusion

For now, Disney’s exploitation of personal data in their parks is often brushed aside. After all, who cares how personal data is collected, processed, used, and disseminated as long as it’s being used to improve the guest experience? We’ve trusted Disney to provide a safe, comfortable theme park experience since 1955 – why stop now? Here’s the bottom line: as big data collection and processing becomes more sophisticated and as the Disney park experience seeks to enhance personalization, data collection will assuredly become more invasive. Ethical concerns like beneficence, personal identification, data aggregation, and other issues will only become more prominent as the volume of exploited data at Disney continues to proliferate.

Before Disney finds itself in a corner, Disney parks should take steps to become advocates and practitioners of strong data ethics. Greater transparency, improved contextual consent, and reduction of unnecessary data collection should become the norm at Disney parks. For years, Walt Disney Imagineering (WDI) has prided itself on operating at the cutting edge of technology. As the use of personal data grows, WDI should strive to operate at the forefront of data ethics and privacy as well!

1 https://disneyworld.disney.go.com/vacation-planning/

2 https://disneyworld.disney.go.com/faq/my-disney-experience/my-magic-plus-privacy/

3 https://patentyogi.com/latest-patents/disney/disney-judge-shoes/

Facial Dilemma
Mohamed Sondo | June 26, 2022

Facial recognition is undoubtedly one of the most fascinating technological ventures in  the world today.

A facial recognition system is simply a technology that can match a given face from a digital  image against a database consisting of numerous faces. The technology is primarily used to  authenticate users through ID verification services and to measure facial features from an image.

The most common facial recognition technology used across the world includes the auto photo-tagging features on Facebook and Google Photos. Others include Amazon Rekognition, Betaface (which focuses on image and video analysis, and face and object recognition), BioID, Cognitec, DeepVision AI, Face++, Kairos, and SkyBiometry [1]. The list is endless. One might start wondering: how does software recognize human faces? What features or components make the software recognize an individual’s face?

Here is a quick glimpse at the components and the features that make the software work  effectively.

  • Hardware: used for capturing the images
  • Intelligence: For comparing the captured faces with the existing data
  • Database: An existing collection of identities.

Key Must-Have Features of a Facial Recognition Software 

So how does an FRS work, and who uses the software? Here is a brief overview. The first step is detection: extracting the face from the provided image [1]. The goal of this process is to obtain the faceprint, which is unique to every human being. The faceprint is generated in the form of a code, which is then passed on to the next stage, matching. At this stage, the faceprint is matched against the other prints stored in the system by taking the image through different technological layers to ensure accuracy. The algorithms consider several factors, including facial expressions, lighting, and image angles, when discerning the perfect match [2]. The third step is identification, whose goal depends on what the particular FRS is used for; the end result of this stage should be a 1:1 match of the subject. Intelligence services and companies such as Mastercard, as well as the hospitality sector, airports, banks, and mobile-commerce companies, use facial recognition software.
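In modern systems, the “faceprint” is typically a numeric embedding vector, and “matching” is a nearest-neighbor comparison against the enrolled database. A minimal NumPy sketch of that matching stage, with random vectors standing in for the output of a real detection-and-embedding model, and the similarity threshold chosen purely for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two faceprint embeddings (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.8) -> str:
    """Return the best-matching identity, or 'unknown' below the threshold."""
    name, score = max(((n, cosine_similarity(probe, emb))
                       for n, emb in database.items()), key=lambda x: x[1])
    return name if score >= threshold else "unknown"

rng = np.random.default_rng(1)
db = {"alice": rng.standard_normal(128), "bob": rng.standard_normal(128)}
probe = db["alice"] + rng.normal(0, 0.1, 128)  # a new photo of Alice, slightly varied
print(identify(probe, db))                     # -> "alice"
```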

Step-by-step representation of the automated face recognition system.

What are the Ethical Issues of Using Facial Recognition Technology?

Critics have recently questioned the facial recognition system’s accuracy and its role in identity fraud. There are cases where law enforcement agencies have implicated innocent individuals based on facial recognition leads [2]. The most cited ethical concerns include racial bias, discrimination, privacy, data breaches, mass surveillance, and lack of transparency. Have you heard of or experienced any of these? Here is an overview of each.

Data Privacy 

Remember in 2020, when the European Commission weighed a ban on facial recognition technology in public spaces while it drew up privacy guidelines? Data privacy is one of the general public’s chief concerns. Constant government surveillance by facial recognition violates the public’s inherent right to privacy, and data integrity can only be guaranteed through proper encryption that avoids security vulnerabilities.

Racial Bias and Discrimination 

Racial bias remains one of the primary concerns over facial recognition systems. Cases and worries have recently emerged over developments that challenge the ethics of facial recognition. Recent statistics from the National Institute of Standards and Technology, covering more than 180 algorithms, indicated a racial bias against women of color [2]. Unfortunately, detection errors were more common on dark-skinned faces than when matching their light-skinned counterparts.

Lack of Transparency 

Facial recognition algorithms function well when tested over large datasets of images captured from different angles and under different lighting. These images are mainly scraped from online sites and social media platforms without authorization. As a result, individuals’ images are obtained and used to evaluate and improve surveillance products without any informed consent or transparency.

Mass Surveillance 

Facial recognition enables mass surveillance, especially when paired with ubiquitous cameras and data analytics. This surveillance compromises citizens’ fundamental privacy rights and liberty.

References 

[1] Edison Paria, R. C., Juan G., & Jose G. (2017). An Improved Face Recognition Based on Illumination Normalization Techniques and Elastic Bunch Graph Matching. In Proceedings of the International Conference on Compute and Data Analysis (ICCDA ’17). Association for Computing Machinery, New York, NY, USA, 176–180. https://doi.org/10.1145/3093241.3093249

[2] Olszewska, J. I. (2016). Automated face recognition: Challenges and solutions. In Pattern Recognition – Analysis and Applications. IntechOpen. https://doi.org/10.5772/66013 https://www.intechopen.com/chapters/52911

Is Your Home Surveillance System Invading Other People’s Privacy?
Anonymous | June 26, 2022

If you are not disclosing to your guests that you have a Wi-Fi security camera in your home, you are invading their privacy – and may even be breaking the law. These home security systems record, listen to, and watch over any guests you might have, and all of that is a form of surveillance.

Privacy and Consent In Homes 

Although one may feel entitled to protect and watch over what goes on in their home at all times, this is a case of lost autonomy. Without consent, guests in your home can feel unsafe, creeped out, and outright uncomfortable. These at-home surveillance systems “violate basic privacy and civil rights protections by illicitly filming innocent residents [and guests] without any knowledge” [1]. What constitutes consent when on someone else’s property? Under the Belmont Report, research on human subjects must abide by “the requirement to protect those with diminished autonomy” [5]. While home Wi-Fi security camera systems do not fall under research, this notion of autonomy should be honored regardless. Companies and homeowners alike should be held accountable for upholding certain privacy standards. In particular, Wi-Fi camera recording is subject to “reasonable expectation of privacy” guidelines under privacy law [4]. In homes, privacy laws get a little blurry. According to Wirecutter, one “can’t record video in any location where a person would expect to have a high degree of privacy.” But what qualifies as a “high degree of privacy”? This might mean different things to different people. To theorist Robert Post, “privacy is a value so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings” [2]. For instance, most would classify a bathroom, a bedroom, or anything on one’s personal property as carrying a “high degree of privacy,” though others might disagree. Here, we can see how the jargon within privacy law blurs the lines of what constitutes privacy.

Data Collection 

In terms of the information collected through Wi-Fi security cameras, companies such as Ring, Blink, and Arlo follow an almost identical data collection process. Users might expect that only the information they provided these companies – name, email address, phone number, address, and so on – is collected or used; however, these companies also monitor what goes on in one’s home, the same way homeowners do. And it is not only owners of Wi-Fi home camera systems and the companies themselves that have access to what goes on within and outside one’s home: third parties and affiliates do as well. This access may include one’s profile information, audio and video recordings of one’s home, and captures of surrounding areas.

Namely, Ring’s privacy notice indicates they collect “content (and related information) that is captured and recorded when using our products and services, such as video or audio recordings, live video or audio streams, images, comments, and data our products collect from their surrounding environment to perform their functions (such as motion, events, temperature and ambient light)” [3]. More recently, Ring has been under investigation for partnering with multiple police stations to “minimize crime” [1]. Despite the intent of upholding a safe community, widespread devices such as at-home surveillance cameras partnering with local police stations carries Big Brother-esque energy; one is not safe or kept from being watched, even in their own homes. Even though one might think that they are in full control of their home surveillance system, unfortunately, this is not true.

Is This What You Want? 

The next time you are thinking about purchasing a Wi-Fi home surveillance system, review the pros and cons of having a monitoring device like Ring, Blink, or Arlo (i.e., consider all the data that might be collected by these devices, and let your guests know that they are being kept under surveillance during their time in your home). Are the data collection, tracking, and control worth the price of your and your guests’ privacy, autonomy, and safety?

References 

[1] Yeager. (2020). Amazon’s Ring Doorbell Rings In New Privacy Violations. Center for Digital Ethics & Policy, Loyola University Chicago. Retrieved June 26, 2022, from https://www.luc.edu/digitalethics/researchinitiatives/essays/archive/2020/amazonsringdoorbellringsinnewprivacyviolations/

[2] Mulligan, D. K., Koopman, C., & Doty, N. (2016). Privacy is an essentially contested concept: A multi-dimensional analytic for mapping privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160118. https://doi.org/10.1098/rsta.2016.0118

[3] Privacy Notice. (n.d.). Ring. Retrieved June 24, 2022, from https://ring.com/privacy-notice

[4] Security Cameras, Ethics, and the Law. (2016, September 23). Wirecutter: Reviews for the Real World. https://www.nytimes.com/wirecutter/blog/security-cameras-ethics-and-the-law/

[5] The Belmont Report. (n.d.).

Will the Future of Art be Artificial?
Gabriel Louis-Kayen | June 21, 2022

The use of AI-generated art has exploded in the creative industries. In the past month alone, social media has seen the wide popularization of Dall-E mini, an open-source AI model (inspired by OpenAI’s DALL·E) that generates images from text; the generator has become a popular format for creating memes. Dall-E mini is one example of the growing field of generative AI models, which use “unsupervised learning algorithms to create new digital images, video, audio, text or code” in ways comprehensible and user-friendly to the often non-technical public.


*A set of images generated using the Dall-E mini model.* 

While Dall-E and one of its rivals, Google’s Imagen, are text-to-image models, commonly accessible generative AI models already exist in other artistic fields. Jukebox, an OpenAI project, is a neural network that generates music complete with multiple genres and coherent lines of artificially generated singing. Within the field of literature, Sudowrite is a service that aims to curb writer’s block by generating completely cogent paragraphs.

These emergent technologies have clear potential to completely transform the creative industries as we know them. Research has already shown that individuals are unable to “distinguish and accurately identify artwork created by AI technologies” when presented with machine- and human-generated art, with up to 77% of research participants mistaking AI-generated art for human-generated art. This leads one to wonder: will humans be replaced by machines in the future of art creation?

Current Trends 

The prospect and reality of Americans losing jobs to technology are nothing new. A 2016 report by the Obama administration highlighted America’s growing dependence on AI-driven automation, and how its labor implications “might disrupt the current livelihoods of millions of Americans.” On a global scale, the McKinsey Global Institute predicts that due to automation and AI, up to “375 million workers worldwide, about 14% of the global workforce, will need to switch occupational categories by 2030 in order to avoid obsolescence.” The Brookings Institution expands on McKinsey’s data, predicting that 88 million American jobs will be affected by automation in the coming decades, with 52 million of those jobs fairly susceptible to automation by 2030 – a third of the American labor force impacted by the end of the decade. While these reports do not make specific predictions for the effects of AI on the creative industries, the use of generative AI in the arts can only expand. While generative AI makes up less than 1% of all current data, “by 2025, Gartner expects generative AI to account for 10% of all data produced.” Should AI-generated content begin to pervade the creative industries, what will the consequences be?


*A figure on American job susceptibility to automation created by the Brookings Institution*

Implications and Consequences of AI in Art

There is no clear consensus on whether generative AI will have a positive or negative impact on the creative industries. Some argue that artificial intelligence will benefit art by reducing the barriers of entry for many artists. By making generative art easy and accessible online, the often costly labor of painting, drawing, writing, filming, etc. are removed from the creative process, allowing more individuals to participate in art creation. Additionally, many people believe that AI is yet another tool enhancing the way that an artist can express themselves by allowing them to creatively guide and constrain unsupervised learning algorithms. But, some fear that the easy development of AI-generated art will completely disrupt the creative industries and make human-based art obsolete. Other artists posit that art is defined by human activity and creativity, and that AI-generated alternatives should not even be considered art.

AI art additionally raises serious concerns over ownership rights and usage. Earlier this year, former President Barack Obama delivered a speech on how AI continues to empower and worsen the effects of disinformation. The emergence of deepfakes and other synthetic media has shown the risks of AI generations being indistinguishable from authentic content. Within the creative industries, similar risks exist surrounding intellectual property (IP) rights and plagiarism. AI art has raised many unanswered questions: Will generative AI make art too easily replicable? Who is responsible when AI-generated content plagiarizes or steals from another individual’s work?

Analyzing trends in AI-driven art through the guidelines of the Belmont Report raises the question of how justly generative AI will be employed within the creative industries. The Obama administration’s 2016 report on AI and automation indicates that AI-driven automation will disproportionately impact the jobs of lower-income and less-educated workers. The report estimates that 83% of jobs paying less than $20 per hour have a high probability of automation, while only 31% of jobs paying $20-40 per hour and 4% of jobs paying over $40 per hour share that same high probability. The report also asserts that 44% of jobs performed by those without a high school degree are highly automatable, in contrast to only 1% of jobs performed by those with bachelor’s degrees. While creative jobs are less automatable than most other similarly paying jobs, lower-income artists may unjustly bear the costs of automation while the financial benefits of generative art concentrate in a small handful of individuals. Alternatively, the increasing accessibility of generative AI may make the benefits of AI art more fairly distributed.


*AI-art generated by me using an online platform.* 

Where We Are Today 

It is difficult to anticipate the effects that AI will have on our society. The impacts of AI-driven automation are path-dependent on how humanity’s relationship with artificial intelligence evolves. Artists will need to be innovative in their applications of generative AI, yet vigilant of how it detracts from the human creativity at the center of art. In a 2017 interview, Monash University professor and artist Jon McCormack explained that AI as we know it today “is still very primitive — it doesn’t have the same capabilities as a human creative,” noting that AI models “can only draw on what they’ve been trained on.” Generative AI does not need to replace existing artistic processes, and may better serve the arts by “doing things that complement our intelligence.” AI-driven practices may open the creative industries to unforeseen innovations. Ultimately, the future of art lies in the hands of the beholder — a thoughtful partnership of artificial intelligence and human creativity will take art into uncharted yet fruitful territories.

References 

  • https://huggingface.co/spaces/dalle-mini/dalle-mini
  • https://www.techopedia.com/definition/34633/generative-ai
  • https://imagen.research.google
  • https://openai.com/blog/jukebox/
  • https://www.sudowrite.com
  • https://www.gwern.net/docs/ai/nn/gan/2021-gangadharbatla.pdf
  • https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF
  • https://www.mckinsey.com/mgi/overview/in-the-news/automation-and-the-future-of-work
  • https://www.brookings.edu/wp-content/uploads/2019/01/ES_2019.01_BrookingsMetro_Automation-AI_Report_Muro-Maxim-Whiton-FINAL.pdf
  • https://www.gartner.com/en/newsroom/press-releases/2021-10-18-gartner-identifies-the-top-strategic-technology-trends-for-2022
  • https://www.cnbc.com/video/2022/04/21/former-pres-obama-takes-on-disinformation-says-it-could-get-worse-with-ai.html
  • https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html
  • https://app.wombo.art
  • https://www.abc.net.au/news/2017-08-11/artificial-intelligence-can-ai-be-creative/8793906

Video Games: A Pitfall For Unethical Child Data Aggregation?
Alexandre Baude | June 26, 2022

Lede: Children’s phone games are a dangerous playground where minors risk—and suffer—data abuse.

Overview: Children who innocently amuse themselves playing phone games such as Duel Links and Raid: Shadow Legends are wandering into the clutches of dangerous data brokers. These games leverage the guise of “Legitimate Interest” and “Informed Consent” to aid and abet the collection of data from minors who do not have the knowledge or maturity to protect themselves from data predators.


(Zovoilis, 2013)

At a time when an estimated 40% of U.S. parents allow children aged 10 and under to have a cell phone (Hurtado, 2021), kid-owned cell phones are here to stay. Ownership in this demographic is driven partly by busy parents seeking connectivity with kids whose after-school and weekend activities conflict with their parents’ schedules; the cell phone is meant to be a lifeline ensuring the safety and wellbeing of our youth. The other main reason parents give their children cell phones is entertainment. Enter the wolves in sheep’s clothing.

Ironically, the cell phone – which many parents view as a way of keeping their children out of harm’s way – is, in fact, an open invitation to data predators. Minors as young as three know to tap the privacy notice’s “consent” button – often rendered in an attractive color – in order to play their favorite games. For example, games such as Duel Links, a popular digital version of the classic Yu-Gi-Oh card game, don’t require parental approval to access the game. Unfortunately, this doesn’t stop Konami, Duel Links’ parent corporation, from building a data profile of the child, leveraging identifiers such as IP address, device name/OS version, usage data, and even “identifiers as designated by third parties” – a dangerously vague line seemingly implying unabridged access to their data. Even on phones with strict parental-control settings, the in-app privacy notice goes unchecked; the child simply presses “I agree.”

Issues such as information dissemination and information processing – let alone data accrual by third parties – don’t spring into an eight-year-old’s mind, let alone alarm their parents. COPPA’s guides for accountability, individual choice and access, and readability apply in theory, but not in practice. The kids just want to play, and the parents often just want their kids occupied (Auxier et al., 2020), which is an ideal formula for gaming corporations that seek, and often succeed in, monetizing clicks. Every time a child clicks the consent button and accesses a fun game, it becomes more likely that the next time they see a consent button, they’ll press it right away. And while COPPA requires websites and online services to obtain parental consent before collecting personal information on children under 13 (and under 16 for EU citizens under the GDPR), apps are left to their own devices without repercussions.


(Smith, 2018)

Even when efforts are made to hide the child’s identity through anonymization, the IP address provides a wealth of information; this, along with the child’s playing preferences, reaction speeds, gear and loot-box purchases, and a host of other data, is all strictly personal and confidential. This information, in turn, is used in ad campaigns and sold to data brokers and other third parties. It is not the practice of data selling and monetization per se that draws ire and worry; it is the fact that it directly targets a predominantly under-age population of users.

While legislation and regulation like COPPA and the GDPR are fantastic first steps in the battle for data security, parents need to be educated to do their part to keep their progeny safe from digital predators. Toward that end, legislation should put in place safeguards not unlike those that already protect our kids elsewhere. Gaming companies should institute privacy notices that require parental approval as directed through the device settings (a minimal sketch of such a gate follows below); they should spare no effort in preventing the monetization of minors’ data; and most importantly, they shouldn’t continue to turn a blind eye. Games are meant to be safe havens for children, not another data mine.
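To make that recommendation concrete, here is a minimal sketch of what a COPPA-style consent gate could look like in code. The names and structure are hypothetical; the age cutoff reflects COPPA’s under-13 rule, and the verification methods mirror the kinds the FTC has recognized as verifiable (signed consent forms, payment-card checks, video calls):

```python
from dataclasses import dataclass
from datetime import datetime

# Consent methods of the kind the FTC recognizes as "verifiable" under COPPA
# (illustrative subset; the identifiers are hypothetical).
VERIFIED_METHODS = {"signed_form", "payment_card_check", "video_call"}

@dataclass
class ConsentRecord:
    child_account_id: str
    method: str
    verified_at: datetime

def may_collect_data(age: int, consent: ConsentRecord | None) -> bool:
    """Permit collection only for users 13+ or with verified parental consent."""
    if age >= 13:
        return True
    return consent is not None and consent.method in VERIFIED_METHODS

# A child tapping "I agree" in-app is not verifiable parental consent:
print(may_collect_data(8, None))  # False -> run the game with collection off
print(may_collect_data(8, ConsentRecord("kid42", "signed_form",
                                        datetime.now())))  # True
```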

Reference List

Auxier, B. et al. (2020, July). Parenting Children in the Age of Screens. Pew Research Center. https://www.pewresearch.org/internet/2020/07/28/parenting-children-in-the-age-of-screens/

Hurtado, K. (2021, January). Surprising Facts on Child Cell Phone Usage Statistics. Parentology.com. https://parentology.com/what-you-need-to-know-child-cell-phone-usage-statistics/

Smith, D. (2018). Data Protection and Privacy. Flickr. https://www.flickr.com/photos/cerillion/43711943092/in/photolist-MEzZpm-bWN3ja-2kTHZbT-2b9KhiW-hUQW41-gnCX1D-29AF8Mf-hUSrR3-ZTSoxY-hUSzea-gnD4EF-hURL5a-2h4T4gP-hUR7dQ-2f7smZ3-2f7snqo-hUSWBD-gG7RGX-hUSn1b-2b9Kh8L-hURBuW-Mt2A1i-hUQuXP-2hQRkzj-hURNkR-psxct7-hUSRTJ-hUU4Lr-hUS7nA-hUSC34-hUSMt9-hUQFHX-2m9ci5Q-hUSuWw-hURCc4-hURktr-hURvjE-hUQU1o-gssXJa-gKn5tx-hUSHZC-QKUGX2-hURWmk-hURYHM-HDCSjS-gEoqyv-hUQXsc-hUT5xR-hURzZw-hURFWD

Zovoilis, T. (2013). Small boy with his mother looking at a tablet [Photograph]. Flickr. https://www.flickr.com/photos/55975562@N07/8547117491/

Deregulated NFTs as Building Blocks of the “Infraverse”
Noor Gill | June 23, 2022

Lede: A laissez-faire approach to managing NFTs, combined with the information exposed through blockchain addresses, transaction activity, and location data, poses privacy concerns for Metaverse users.

Overview: The non-fungible token (NFT) marketplace is a niche corner of crypto where digital art is bought and sold via cryptocurrency. Despite the space's rising popularity in 2021, decentralized technologies, a lack of legal framework, and the immutable nature of blockchain have created a range of privacy concerns for Metaverse users, including the inability to anonymize or delete public transactional data, which strips users of representational autonomy, and the ease with which advertisers can collect users' personal information through their virtual personas.

       NFTs are digital assets, most often artwork, that are bought and sold via cryptocurrencies (digital currencies, most often Ether on the Ethereum network) and recorded on the blockchain (an online method of recording and validating ownership of crypto assets). In an NFT transaction, information about the NFT as well as about the buyer and seller flows between the parties in a manner governed by transmission principles built into the blockchain.
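Because that record is public by design, anyone can inspect it. The sketch below is a minimal illustration, assuming the web3.py library and a placeholder RPC endpoint; the transaction hash is the one cited in the Etherscan entry of the reference list.

```python
# Sketch: reading a public NFT transaction with web3.py.
# The RPC endpoint is a placeholder; the hash comes from the Etherscan
# reference at the end of this piece.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))  # placeholder

tx = w3.eth.get_transaction(
    "0x125714bb4db48757007fff2671b37637bbfd6d47b3a4757ebbd0c5222984f905"
)

# Sender, recipient, and value are world-readable, forever.
print(tx["from"], tx["to"], tx["value"])
```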

[Fig 1: A double line plot of weekly total cryptocurrency value and average transaction value on NFT platforms, 2021 to early 2022 (Chainalysis)]

       With a peak weekly total value of about 4 billion USD in early 2022 and a peak weekly average value per transaction of about 3.5 billion USD at the end of 2021 (Fig 1), the popularity and significance of NFTs in the global market are evident. The future of NFTs has even been debated in Congress and the U.S. Treasury, and the IRS seized over $3.5 billion worth of online currencies in 2021 alone. Moreover, decentralized technologies enable some of the most prevalent forms of crime in the NFT space: money laundering and tax evasion. The rise of NFTs comes with a rise in privacy and security concerns within the Metaverse, a pressing issue that needs to be explored further.

Public Blockchains Lack Legal Framework

        Current online-privacy legislation was not written or established with blockchain in mind. For example, data cannot be deleted from the blockchain, which contradicts the California Consumer Privacy Act ("CCPA") and the EU General Data Protection Regulation ("GDPR"), both of which grant deletion rights. This absence of applicable legal protection leaves user information exposed: users are given no accurate, transparent guidelines for data usage and deletion, and are left with little control over their information.
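To see why deletion is technically off the table, consider a minimal, illustrative model of how a blockchain links records: each block commits to the hash of its predecessor, so erasing or editing any historical entry invalidates every block that follows.

```python
# Toy model of blockchain immutability: each block stores the previous
# block's hash, so no historical record can be deleted or edited silently.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64  # genesis placeholder
for record in ["mint token 42", "transfer 42 to alice", "transfer 42 to bob"]:
    block = {"record": record, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

# A GDPR-style "erasure" of alice's transfer breaks the chain immediately.
chain[1]["record"] = "REDACTED"
assert block_hash(chain[1]) != chain[2]["prev_hash"]  # tampering is detectable
```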

An Illusion of Privacy

        Upon purchase of an NFT, the buyer is provided a digital ownership certificate, a sort of virtual receipt accessible to all users and updated after each transaction through the blockchain. While this public record of all transactions ensures transparency and can be leveraged to maintain an accurate ledger, it also exposes the transactional history of the entire network, often tied to other identifiable information. There is no option to set a transaction to private or to delete tokens, owing to the immutable nature of blockchain, and the workaround of burning tokens and replacing wallets is vulnerable to exploitation and human error, ultimately removing a user's decision-making autonomy over how their data is handled.
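How exposed is that history? The hedged sketch below queries Etherscan's public API for every ERC-721 transfer touching a wallet; the target address and API key are placeholders, and the parameters follow Etherscan's documented account endpoints.

```python
# Sketch: pulling a wallet's full NFT transfer history from public data.
# The address and API key are placeholders; "tokennfttx" is Etherscan's
# documented action for ERC-721 transfer events.
import requests

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "account",
        "action": "tokennfttx",
        "address": "0xTARGET_WALLET",   # placeholder
        "sort": "asc",
        "apikey": "YOUR_API_KEY",       # placeholder
    },
)

for tx in resp.json().get("result", []):
    # Counterparties, token names, token IDs, and timestamps: all public.
    print(tx["timeStamp"], tx["from"], "->", tx["to"], tx["tokenName"], tx["tokenID"])
```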

        Another issue lies in the fact that users cannot give informed consent to receiving NFTs, since tokens can be sent to any address regardless of whether the recipient approves the transaction. For example, when Jimmy Fallon displayed his Bored Ape NFT on his show in January 2022, it became easy to use that publicly available NFT information to locate Fallon's wallet address, and one user even sent him 1,776 units of a token named "Let's Go Brandon," which he never consented to receive.
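The mechanics are easy to model. In the toy sketch below, which mirrors ERC-721 transfer semantics with hypothetical addresses, only the current owner authorizes a transfer; the recipient is a bare parameter who is never asked:

```python
# Toy model of ERC-721 transfer semantics: only the sender authorizes the
# transfer; the recipient never signs anything. Addresses are hypothetical.
owners = {1776: "0xSENDER"}  # tokenId -> current owner

def safe_transfer_from(caller: str, frm: str, to: str, token_id: int) -> None:
    assert owners[token_id] == frm == caller, "only the current owner signs"
    owners[token_id] = to  # recipient consent is never requested

safe_transfer_from("0xSENDER", "0xSENDER", "0xCELEBRITY_WALLET", 1776)
print(owners[1776])  # the token now sits in a wallet that never agreed to it
```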

The Fine Line Between Online and IRL

        Beyond this, the use of avatars and virtual identities gives users a false sense of detachment from their real-life identities, paving the way for advertisers and third parties to gather personal information through those avatars. Not only can this trap users within filter bubbles built from similar or desirable sets of advertisements and online experiences within the Metaverse, it also violates users' freedoms and their right to anonymity during the information-processing stage.

Fig 2: A horizontal bar chart displaying total gain, defined as the difference between total profit and total expenses in USD, by wallet age for OpenSea accounts (Financial Times, Nansen)

       Due to unregulated access to users' transaction activity, personal identifiers, and location data, the potential harms of participating in NFT transactions may outweigh the benefits. For example, for wallets on the NFT marketplace OpenSea, realized gains (total profit minus the total cost of purchasing an NFT) were positive for wallets held 1-3+ years, as opposed to the pattern of negative gains for wallets open for less than a year (Fig 2). This suggests that the benefit of monetary gain from selling NFTs may be overstated, or less tangible than it appears.
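As a concrete reading of Fig 2's metric, take one hypothetical young wallet (figures invented for illustration):

```python
# Realized gain as defined above: total profit minus total acquisition cost.
# All figures are hypothetical, in USD.
purchases = [1200.0, 800.0]   # spent acquiring two NFTs
sales = [1500.0, 350.0]       # received on resale

realized_gain = sum(sales) - sum(purchases)
print(realized_gain)  # -150.0: the net-loss pattern Fig 2 shows for young wallets
```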

Parting Words

       As Web 3.0 (the emerging internet infrastructure based on decentralized networks and individual ownership of content) gains traction, it is vital to create a dedicated federal regulatory body for the NFT marketplace. Rather than banning NFTs or maintaining the status quo, establishing a federal presence in the realm of crypto to monitor transactions and audit platforms would prevent exploitation and define a structure that maximizes privacy and minimizes harm to both NFT creators and consumers.

References:

Etherscan.io. (n.d.). Retrieved June 23, 2022, from https://etherscan.io/tx/0x125714bb4db48757007fff2671b37637bbfd6d47b3a4757ebbd0c5222984f905

How filter bubbles distort reality: Everything you need to know. Farnam Street. (2019, November 14). Retrieved June 23, 2022, from https://fs.blog/filter-bubbles/

Latenight. (2022, January 24). Paris Hilton surprises Tonight Show audience members by giving them their own NFTs | Tonight Show [Video]. YouTube. Retrieved June 23, 2022, from https://www.youtube.com/watch?v=5zi12wrh5So&t=306s&ab_channel=TheTonightShowStarringJimmyFallon

Mulligan, D. K., Koopman, C., & Doty, N. (2016). Privacy is an essentially contested concept: A multi-dimensional analytic for mapping privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160118. https://doi.org/10.1098/rsta.2016.0118

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

Nissenbaum, H. F. (2011). A contextual approach to privacy online. Daedalus, 140(4), 32-48. Available at SSRN: https://ssrn.com/abstract=2567042

Ravenscraft, E. (2022, April 5). NFTs are a privacy and security nightmare. Wired. Retrieved June 23, 2022, from https://www.wired.com/story/nfts-privacy-security-nightmare/

Solove, D. J. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477. GWU Law School Public Law Research Paper No. 129. Available at SSRN: https://ssrn.com/abstract=667622

Yahoo! (n.d.). IRS seized $3.5B in crypto-related fraud money this year as illicit activity multiplies. Yahoo! News. Retrieved June 23, 2022, from https://ph.news.yahoo.com/irs-seized-35-b-in-crypto-related-fraud-cases-this-year-as-illicit-activity-multiplies-150407019.html