Is someone listening to my conversation with my doctor?
Radia Abdul Wahab | July 5, 2022

Literature shows that 43.9% of U.S. medical offices had adopted either full or partial EHR systems by 2009 [1]. Every time we visit the doctor, whether in the office or virtually, a range of sensitive information is recorded. This includes, but is not limited to, demographics, health problems, medications, progress notes, medical history, and lab data [3]. Lab data on its own may not seem sensitive. However, as genome sequencing comes increasingly within our reach, much of that lab data now includes genomic information.

At the same time, through social media and other online tools, we as individuals continuously and voluntarily share large amounts of information on the internet. This poses a huge risk of re-identification.

Additionally, by using mobile health-monitoring devices to track our health and well-being, we add yet another flood of private health information to still more databases. Figure 1 below shows a wheel of the various sources of information.

All of this information, combined with emerging web-sniffing and crawling technologies and advances in information science, poses a huge challenge to patient and individual privacy.


Figure 1: Sources of Health Information we share using our mobile devices [2]

Who has access to my data?
As more and more data is collected and digitized, it is increasingly consolidated into large big-data repositories intended to enhance scientific analysis. The government and various corporations have also made a great deal of data available with the intention of advancing science. Often these datasets are fully accessible on public websites, or obtainable for a minimal payment.
Is Re-identification really possible?
Various forms of information are collected when we go to the doctor. These include, but are not limited to, identifier attributes (such as name or SSN), quasi-identifier attributes (such as gender or zip code), and sensitive attributes (such as disease conditions or genomic data). Most of the time, the identifier attributes are “sanitized”, i.e., removed, before the data is made available to external parties [3].

The authors of [3] frame the central question as follows: “when an attacker possesses a small amount of (possibly inaccurate) information from healthcare-related sources, and associate such information with publicly-accessible information from online sources, how likely the attacker would be able to discover the identity of the targeted patient, and what the potential privacy risks are.”

One of our most critical misunderstandings is the belief that information from one source cannot be linked with information from a different source. With the advent of modern technologies, however, it has become quite easy for algorithms to crawl across various web pages and consolidate information.
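To make this concrete, here is a minimal sketch (using invented toy data) of how a “sanitized” medical extract can be re-identified by joining it against a public record on shared quasi-identifiers:

```python
import pandas as pd

# Hypothetical "sanitized" hospital extract: identifiers removed,
# but quasi-identifiers (zip, birth date, sex) retained.
hospital = pd.DataFrame({
    "zip": ["02138", "94704"],
    "birth_date": ["1965-07-02", "1990-01-15"],
    "sex": ["F", "M"],
    "diagnosis": ["hypertension", "asthma"],  # sensitive attribute
})

# Hypothetical public record (e.g., a voter roll) with names attached.
public = pd.DataFrame({
    "name": ["J. Smith", "A. Jones"],
    "zip": ["02138", "94704"],
    "birth_date": ["1965-07-02", "1990-01-15"],
    "sex": ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to "anonymous" records.
reidentified = hospital.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```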

Another area of risk is that many algorithms use “smart” techniques to bridge gaps left by missing or inaccurate information. The schematic below (Figure 2) illustrates a case study of such an algorithm.


Figure 2: Re-identification using various web sources. [3]

What is the process of Re-identification?
Re-identification proceeds in three main steps: attribution, inference, and aggregation. Attribution is the collection of sensitive or identifiable information from online sources. Inference is the step in which additional information is either “fitted” to those attributes or learned by algorithms. Aggregation combines the various sources of information. Together, these three steps provide a clear path to re-identification. Figure 3 below shows some aspects of these processes.


Figure 3: Process of Re-identification [4]

Conclusion
With the flood of health information entering the web, and with emerging technologies, almost no aspect of our health remains truly concealable. It is very important to minimize the sharing of our information on the web, to the extent possible, because far more of it is collected than we will ever know. Smart technologies are reaching in and listening to all of it, and what they gather may be used against us by adversaries.

References:

[1] Hsiao CJ, Hing E, Socey TC, Cai B: Electronic medical record/electronic health record systems of office-based physicians: United States, 2009, and preliminary 2010 state estimates. National Center for Health Statistics Health E-stat 2010.
[2] Masood I, Wang Y, Daud A, Aljohani NR, Dawood H: Towards Smart Healthcare: Patient Data Privacy and Security in Sensor-Cloud Infrastructure. Wireless Communications and Mobile Computing, Volume 2018, Article ID 2143897.
[3] Fengjun Li, Xukai Zou, Peng Liu & Jake Y Chen: New threats to health data privacy. BMC Bioinformatics volume 12, Article number: S7 (2011)
[4] Lucia Bianchi, Pietro Liò: Opportunities for community awareness platforms in personal genomics and bioinformatics education. Briefings in Bioinformatics, Volume 18, Issue 6, November 2017, Pages 1082–1090, https://doi.org/10.1093/bib/bbw078

Dangerous Data at Disney
Conner Brew | June 23, 2022

Disney Uses Cutting-Edge Tech to Optimize Its Parks

At Disney parks, guests enjoy a one-of-a-kind magical experience. What many guests may not realize, however, is the extent to which that magical experience depends on the collection of their personal data. Disney parks, such as the world-famous Walt Disney World Resort in Orlando, Florida, rely on cutting-edge technology to ensure that guests' experiences are personalized and unforgettable. They do this through the MyMagic+ mobile app, wearable MagicBands, and countless machine-learning-optimized shows and attractions throughout the parks.

How Does Disney Use Personal Data?

Since the arrival of the coronavirus pandemic, changes to the Disney park system have made the MyMagic+ mobile app essential to the Disney experience. After purchasing park tickets, guests must register and reserve the days they plan to visit each park: for example, a guest who purchases a 5-day pass to Disney World must reserve in the app the specific days on which they plan to visit individual parks like Animal Kingdom, Hollywood Studios, or Epcot. MyMagic+ also lets guests retrieve Disney PhotoPass pictures taken throughout their visit, and lets them book FastPasses and other means of reserving space on busy attractions 1. Perhaps most practically, the app uses GPS location to provide a live map of the park, instant directions to attractions of the user's choice, and the wait times of all attractions. In the past, wait times were estimated using a device that guests carried with them in line; today, many Disney attractions use machine learning coupled with the location-tracking power of the MyMagic+ app to predict and optimize attraction wait times.
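As a rough illustration of how such a prediction might work (this is a sketch under invented assumptions, not Disney's actual system), a regression model could be trained on features an app can derive, such as time of day and the density of nearby app users:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: hour of day, day of week, and the number
# of app users pinging GPS near the attraction in the last 15 minutes.
n = 1_000
hour = rng.integers(9, 22, n)
weekday = rng.integers(0, 7, n)
nearby_guests = rng.integers(0, 500, n)
X = np.column_stack([hour, weekday, nearby_guests])

# Synthetic target: wait time grows with crowd density and peaks midday.
y = 0.15 * nearby_guests + 10 * np.exp(-((hour - 14) ** 2) / 8) + rng.normal(0, 3, n)

model = GradientBoostingRegressor().fit(X, y)

# Estimated wait (in minutes) at 1 PM on a Saturday with 350 nearby guests.
print(model.predict([[13, 5, 350]]))
```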

Disney's MagicBands Make the Experience More Fun, But at What Cost?

To gain maximum benefit from the MyMagic+ experience, guests are encouraged to purchase and wear MagicBands. These bands can be loaded with digital payment information, digital park tickets and park reservation information, restaurant reservations, and virtually any other piece of digital information that could make the park experience more convenient and enjoyable. MagicBands use radio-frequency identification (RFID) technology to communicate with devices throughout the parks to make transactions, access reservations, and more. Disney also uses personal information stored on a guest's MagicBand to personalize the park experience in unspecified ways: “And, if you choose to use a MagicBand, it can add a touch of magic to your vacation by unlocking special surprises, personalized just for you, throughout the Walt Disney World Resort!” 2

In addition to these relatively explicit means of improving and personalizing the park experience through personal data collection, numerous Disney patents and studies show that Disney also optimizes its parks using data collected much less explicitly. For example, Disney has patented technology that identifies and tracks individual guests by scanning their shoes 3. Disney claims this method of identification and tracking is less invasive than biometric methods such as facial recognition, but Solove and other privacy experts might disagree. The ability to personally identify and track individual guests through their data may be equally invasive regardless of which specific piece of data is used; whether Disney is tracking shoes or faces, isn't it still pretty invasive?

Conclusion

For now, Disney's exploitation of personal data in its parks is often brushed aside. After all, who cares how personal data is collected, processed, used, and disseminated, as long as it improves the guest experience? We've trusted Disney to provide a safe, comfortable theme park experience since 1955, so why stop now? Here's the bottom line: as big data collection and processing become more sophisticated, and as the Disney park experience pursues ever-greater personalization, data collection will assuredly become more invasive. Ethical concerns such as beneficence, personal identification, and data aggregation will only grow more prominent as the volume of exploited data at Disney continues to proliferate.

Before Disney finds itself in a corner, its parks should take steps to become advocates and practitioners of strong data ethics. Greater transparency, improved contextual consent, and reduction of unnecessary data collection should become the norm at Disney parks. For years, Walt Disney Imagineering (WDI) has prided itself on operating at the cutting edge of technology. As the use of personal data grows, WDI should strive to operate at the forefront of data ethics and privacy as well!

1 https://disneyworld.disney.go.com/vacation-planning/

2 https://disneyworld.disney.go.com/faq/my-disney-experience/my-magic-plus-privacy/

3 https://patentyogi.com/latest-patents/disney/disney-judge-shoes/

Facial Dilemma
Mohamed Sondo | June 26, 2022

Facial recognition is undoubtedly one of the most fascinating technological ventures in the world today.

A facial recognition system is simply a technology that can match a given face from a digital image against a database of numerous faces. The technology is primarily used to authenticate users through ID verification services and to measure facial features from an image.

The most common facial recognition technologies used across the world include the auto photo-tagging features on Facebook and Google Photos. Others include Amazon Rekognition, Betaface (which focuses on image and video analysis, face and object recognition), BioID, Cognitec, DeepVision AI, Face++, Kairos, and Sky Biometry [1]. The list is endless. One might start wondering: how does software recognize human faces? What features or components make the software recognize an individual's face?

Here is a quick glimpse at the components and features that make the software work effectively.

  • Hardware: captures the images
  • Intelligence: compares the captured faces with existing data
  • Database: an existing collection of identities

Key Must-Have Features of Facial Recognition Software

So how does an FRS work, and who uses the software? Here is a brief overview. The first step is detection: the face is extracted from the provided image [1]. The goal of this step is to obtain the faceprint, which is unique to every human being. The faceprint is generated as a code, which is then passed on to the next stage, matching. At this stage, the faceprint is compared with the other prints stored in the system, passing through several technological layers to ensure accuracy. The algorithms consider several factors, including facial expression, lighting, and image angle, when discerning the best match [2]. The third step is identification, whose goal depends on what the particular FRS is used for; the end result should be a 1:1 match to the subject. Intelligence services and companies such as Mastercard, along with the hospitality sector, airports, banks, and mobile-commerce companies, all use facial recognition software.
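As an illustration of this detect-then-match pipeline, here is a minimal sketch using the open-source face_recognition library (one implementation among many; the image file names are placeholders):

```python
import face_recognition  # open-source library built on dlib

# Enrollment: extract a 128-dimensional "faceprint" from a known image.
known_image = face_recognition.load_image_file("enrolled_person.jpg")
known_print = face_recognition.face_encodings(known_image)[0]

# Detection + matching: find faces in a new image and compare faceprints.
probe_image = face_recognition.load_image_file("camera_frame.jpg")
for probe_print in face_recognition.face_encodings(probe_image):
    distance = face_recognition.face_distance([known_print], probe_print)[0]
    is_match = face_recognition.compare_faces([known_print], probe_print)[0]
    print(f"distance={distance:.3f}, match={is_match}")
```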

Step-by-step representation of the automated face recognition system.

What are the Ethical Issues of Using Facial Recognition Technology?

Critics have recently questioned facial recognition systems' accuracy and their role in identity fraud. There are cases in which law enforcement agencies have implicated innocent individuals based on facial recognition leads [2]. The most cited ethical concerns include racial bias, discrimination, privacy, data breaches, mass surveillance, and lack of transparency. Have you heard of, or experienced, any of these? Here is an overview of each.

Data Privacy 

Remember in 2020, when the European Commission weighed banning facial recognition technology in public spaces while it developed privacy guidelines? Data privacy is one of the general public's main concerns. Facial recognition violates the public's inherent right not to be kept under constant government surveillance. And data integrity can only be guaranteed through proper encryption that avoids security vulnerabilities.

Racial Bias and Discrimination 

Racial bias remains one of the primary concerns about facial recognition systems. Cases and worries have recently emerged that challenge the ethics of facial recognition. A National Institute of Standards and Technology study of more than 180 facial recognition algorithms found racial bias against women of color [2]. Unfortunately, errors were more common on dark-skinned faces than on their light-skinned counterparts.

Lack of Transparency 

Facial recognition algorithms function well when trained and tested on large datasets of images captured from different angles and under different lighting. These images are mainly sourced from online sites and social media platforms, often illegitimately. As a result, individuals' images are obtained and used to evaluate and improve surveillance products without any informed consent or transparency.

Mass Surveillance 

Facial recognition enables mass surveillance, especially when combined with ubiquitous cameras and data analytics. Such surveillance compromises citizens' fundamental privacy rights and liberty.

References 

[1] Edison Paria, R. C., Juan G., & Jose G. (2017). An Improved Face Recognition Based on Illumination Normalization Techniques and Elastic Bunch Graph Matching. In Proceedings of the International Conference on Compute and Data Analysis (ICCDA '17). Association for Computing Machinery, New York, NY, USA, 176-180. https://doi.org/10.1145/3093241.3093249

[2] Olszewska, J. I. (2016). Automated face recognition: Challenges and solutions. In Pattern Recognition: Analysis and Applications. IntechOpen. https://doi.org/10.5772/66013 https://www.intechopen.com/chapters/52911

Is Your Home Surveillance System Invading Other People’s Privacy?
Anonymous | June 26, 2022

If you do not disclose to your guests that you have a Wi-Fi security camera in your home, you are invading their privacy, and you may even be breaking the law. These home security systems record, listen to, and watch over any guests you might have; each is a form of surveillance.

Privacy and Consent In Homes 

Although one may feel entitled to protect and watch over what goes on in their home at all times, this is a case of lost autonomy. Without consent, guests in your home can feel unsafe, creeped out, and outright uncomfortable. These at-home surveillance systems “violate basic privacy and civil rights protections by illicitly filming innocent residents [and guests] without any knowledge” [1]. What constitutes consent on someone else's property? Under the Belmont Report, research on human subjects must abide by “the requirement to protect those with diminished autonomy” [5]. While home Wi-Fi security camera systems do not fall under research, this notion of autonomy should be practiced regardless. Companies and homeowners alike should be held accountable for upholding certain privacy standards. In particular, Wi-Fi camera recording is subject to “‘reasonable expectation of privacy’ guidelines under privacy law” [4]. In homes, privacy laws get a little blurry. According to Wirecutter, one “can't record video in any location where a person would expect to have a high degree of privacy.” But what qualifies as a “high degree of privacy”? This might mean different things to different people. To theorist Robert Post, “privacy is a value so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings” [2]. For instance, most would classify a bathroom, a bedroom, or anything on one's personal property as carrying a “high degree of privacy,” although others might disagree. Here we can see how the jargon within privacy law blurs the lines as to what constitutes privacy.

Data Collection 

In terms of the information collected through Wi-Fi security cameras, companies such as Ring, Blink, and Arlo follow an almost identical data collection process. Users might expect that only the information they provide these companies (name, email address, phone number, home address, and so on) is collected or used. However, these companies also monitor what goes on in one's home, the same way homeowners do. Not only do owners of Wi-Fi home camera systems and the companies themselves have access to what goes on within and outside one's home; third parties and affiliates do as well. This access may include one's profile information and any captured audio and video recordings of one's home and surrounding areas.

Namely, Ring's privacy notice indicates they collect “content (and related information) that is captured and recorded when using our products and services, such as video or audio recordings, live video or audio streams, images, comments, and data our products collect from their surrounding environment to perform their functions (such as motion, events, temperature and ambient light)” [3]. More recently, Ring has come under scrutiny for partnering with multiple police departments to “minimize crime” [1]. Despite the intent of upholding a safe community, widespread at-home surveillance cameras partnering with local police carry Big Brother-esque overtones: one is not free from being watched, even in one's own home. Even though one might think they are in full control of their home surveillance system, unfortunately, this is not true.

Is This What You Want? 

The next time you are thinking about purchasing a Wi-Fi home surveillance system, review the pros and cons of having a monitoring device like Ring, Blink, or Arlo (i.e., consider all the data these devices might collect, and let your guests know they are under surveillance during their time in your home). Is the data collection, tracking, and control worth the price of your and your guests' privacy, autonomy, and safety?

References 

[1] Yeager (2020). Amazon's Ring Doorbell Rings In New Privacy Violations. Center for Digital Ethics & Policy, Loyola University Chicago. Retrieved June 26, 2022, from https://www.luc.edu/digitalethics/researchinitiatives/essays/archive/2020/amazonsringdoorbellringsinnewprivacyviolations/

[2] Mulligan, D. K., Koopman, C., & Doty, N. (2016). Privacy is an essentially contested concept: A multi-dimensional analytic for mapping privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160118. https://doi.org/10.1098/rsta.2016.0118

[3] Privacy Notice. (n.d.). Ring. Retrieved June 24, 2022, from https://ring.com/privacy-notice

[4] Security Cameras, Ethics, and the Law. (2016, September 23). Wirecutter: Reviews for the Real World. https://www.nytimes.com/wirecutter/blog/security-cameras-ethics-and-the-law/

[5] The Belmont Report. (n.d.).

Will the Future of Art be Artificial? 
Gabriel Louis-Kayen | June 21, 2022

The use of AI-generated art has exploded in the creative industries. In the past month alone, social media has seen the wide popularization of DALL·E Mini, an open-source text-to-image model inspired by OpenAI's DALL·E; the image generator has become a popular format for creating memes. DALL·E Mini is one example of the growing field of generative AI models, which use “unsupervised learning algorithms to create new digital images, video, audio, text or code” and are comprehensible and user-friendly to the often non-technical public.


*A set of images generated using the Dall-E mini model.* 

While DALL·E and one of its rivals, Google's Imagen, are text-to-image models, commonly accessible generative AI models already exist in other artistic fields. Jukebox, an OpenAI project, is a neural network that generates music complete with multiple genres and coherent lines of artificially generated singing. Within the field of literature, Sudowrite is a service that aims to curb writer's block by generating cogent paragraphs.

These emergent technologies have clear potential to completely transform the creative industries as we know them. Research has already shown that individuals are unable to “distinguish and accurately identify artwork created by AI technologies” when presented with machine- and human-generated art, with up to 77% of research participants mistaking AI-generated art for human-generated art. This leads one to wonder: will humans be replaced by machines in the future of art creation?

Current Trends 

The prospect and reality of Americans losing jobs to technology are nothing new. A 2016 report by the Obama administration highlighted America's growing dependence on AI-driven automation and how its labor implications “might disrupt the current livelihoods of millions of Americans.” On a global scale, the McKinsey Global Institute predicts that, due to automation and AI, up to “375 million workers worldwide, about 14% of the global workforce, will need to switch occupational categories by 2030 in order to avoid obsolescence.” The Brookings Institution expands on McKinsey's data, predicting that 88 million American jobs will be affected by automation in the coming decades, with 52 million of those jobs fairly susceptible to automation by 2030; this equates to a third of the American labor force being impacted by the end of the decade. While these reports make no specific predictions for the creative industries, the use of generative AI in the arts can only expand. While generative AI makes up less than 1% of all current data, “by 2025, Gartner expects generative AI to account for 10% of all data produced.” Should AI-generated content begin to pervade the creative industries, what will the consequences be?


*A figure on American job susceptibility to automation, created by the Brookings Institution*

Implications and Consequences of AI in Art

There is no clear consensus on whether generative AI will have a positive or negative impact on the creative industries. Some argue that artificial intelligence will benefit art by lowering barriers to entry for many artists. By making generative art easy and accessible online, the often costly labor of painting, drawing, writing, filming, and so on is removed from the creative process, allowing more individuals to participate in art creation. Additionally, many believe that AI is simply another tool for artistic expression, letting artists creatively guide and constrain unsupervised learning algorithms. But some fear that the easy production of AI-generated art will completely disrupt the creative industries and make human-made art obsolete. Other artists posit that art is defined by human activity and creativity, and that AI-generated alternatives should not be considered art at all.

AI art additionally raises considerable concern over ownership rights and usage. Earlier this year, former President Barack Obama delivered a speech on how AI continues to empower and worsen the effects of disinformation. The emergence of deepfakes and other synthetic media has shown the risks of AI output being indistinguishable from authentic content. Within the creative industries, similar risks surround intellectual property (IP) rights and plagiarism. AI art has raised many unanswered questions: Will generative AI make art too easily replicable? Who is responsible for AI-generated content that plagiarizes or steals from another individual's work?

Analyzing trends in AI-driven art through the guidelines of the Belmont Report raises the question of how justly generative AI will be employed within the creative industries. The Obama administration's 2016 report on AI and automation indicates that AI-driven automation will disproportionately impact the jobs of lower-income and less-educated workers. The report estimates that 83% of jobs paying less than $20 per hour have a high probability of automation, while only 31% of jobs paying $20-40 per hour and 4% of jobs paying over $40 per hour share that same high probability. The report also asserts that 44% of jobs performed by those without a high school degree are highly automatable, in contrast to only 1% of jobs performed by those with bachelor's degrees. While creative jobs are less automatable than most other similarly paying jobs, lower-income artists may unjustly bear the costs of automation while the financial benefits of generative art concentrate in a small handful of individuals. Alternatively, the increasing accessibility of generative AI may distribute the benefits of AI art more fairly.


*AI-art generated by me using an online platform.* 

Where We Are Today 

It is difficult to anticipate the effects that AI will have on our society. The impacts of AI-driven automation are path-dependent on how humanity's relationship with artificial intelligence evolves. Artists will need to be innovative in their applications of generative AI, yet vigilant about how it detracts from the human creativity at the center of art. In a 2017 interview, Monash University professor and artist Jon McCormack explained that AI as we know it today “is still very primitive — it doesn't have the same capabilities as a human creative,” noting that AI models “can only draw on what they've been trained on.” Generative AI does not need to replace existing artistic processes, and may better serve the arts by “doing things that complement our intelligence.” AI-driven practices may open the creative industries to unforeseen innovations. Ultimately, the future of art lies in the hands of the beholder: a thoughtful partnership of artificial intelligence and human creativity will take art into uncharted yet fruitful territories.

References 

  • https://huggingface.co/spaces/dalle-mini/dalle-mini
  • https://www.techopedia.com/definition/34633/generative-ai
  • https://imagen.research.google
  • https://openai.com/blog/jukebox/
  • https://www.sudowrite.com
  • https://www.gwern.net/docs/ai/nn/gan/2021-gangadharbatla.pdf
  • https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF
  • https://www.mckinsey.com/mgi/overview/in-the-news/automation-and-the-future-of-work
  • https://www.brookings.edu/wp-content/uploads/2019/01/ES_2019.01_BrookingsMetro_Automation-AI_Report_Muro-Maxim-Whiton-FINAL.pdf
  • https://www.gartner.com/en/newsroom/press-releases/2021-10-18-gartner-identifies-the-top-strategic-technology-trends-for-2022
  • https://www.cnbc.com/video/2022/04/21/former-pres-obama-takes-on-disinformation-says-it-could-get-worse-with-ai.html
  • https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html
  • https://app.wombo.art

  • https://www.abc.net.au/news/2017-08-11/artificial-intelligence-can-ai-be-creative/8793906

Video Games: A Pitfall For Unethical Child Data Aggregation?
Alexandre Baude | June 26, 2022

Lede: Children’s phone games are a dangerous playground where minors risk—and suffer—data abuse.

Overview: Children who innocently amuse themselves playing phone games such as Duel Links and Raid: Shadow Legends are wandering into the clutches of dangerous data brokers. These games leverage the guise of “Legitimate Interest” and “Informed Consent” to aid and abet the procurement of data from minors who lack the knowledge or maturity to protect themselves from data predators.


(Zovoilis, 2013)

At a time when an estimated 40% of U.S. parents allow children aged 10 and under to have a cell phone (Hurtado, 2021), kid-owned cell phones are here to stay. Phone ownership among this demographic stems in part from busy parents seeking connectivity with kids whose after-school and weekend activities conflict with their parents' schedules; the cell phone is meant to be a lifeline ensuring the safety and wellbeing of our youth. The other main reason parents give their children cell phones is entertainment. Enter the wolves in sheep's clothing.

Ironically, the cell phone, which many parents view as a way of keeping their children out of harm's way, is in fact an open invitation to data predators. Minors as young as three know to tap the privacy notice's “consent” button, often rendered in an attractive color, in order to play their favorite games. Games such as Duel Links, a popular digital version of the classic Yu-Gi-Oh card game, don't require parental approval to access the game. Unfortunately, this doesn't stop Konami, Duel Links' parent corporation, from building a data profile of the child using identifiers such as IP address, device name/OS version, usage data, and even “identifiers as designated by third parties”, a dangerously vague line seemingly implying unabridged access to their data. Even children whose phones have strict parental-control settings aren't shielded from the privacy notice within the app; they simply press “I agree.”

Issues such as information dissemination and information processing, let alone data accrual by third parties, don't spring into an eight-year-old's mind, let alone alarm their parents. COPPA's guidance on accountability, individual choice and access, and readability applies in theory, but not in practice. The kids just want to play, and the parents often just want their kids occupied (Auxier et al., 2020), which is an ideal formula for gaming corporations that seek, and often succeed, in monetizing clicks. Every time a child clicks the consent button and gets a fun game, it becomes more likely that the next time they see a consent button, they'll press it right away. And while COPPA requires websites and online services to obtain parental consent before collecting personal information from children under 13 (with the GDPR protecting EU children under 16), apps are largely left to their own devices without repercussions.


(Smith, 2018)

Despite efforts to hide the child's identity through anonymization, the IP address alone provides a wealth of information; together with the child's playing preferences, reaction speeds, gear and loot-box purchases, and a host of other data, all of it is strictly personal and confidential. This information, in turn, is used in ad campaigns and sold to data brokers and other third parties. It isn't the practice of selling and monetizing data per se that draws ire and worry; it's the fact that it directly targets a predominantly under-age population of users.

While legislation and regulation like COPPA and the GDPR are fantastic first steps in the battle for data security, parents must also be educated to do their part in keeping their progeny safe from digital predators. Toward that end, legislation should put in place safeguards like those that already exist to protect our kids elsewhere. Gaming companies should institute privacy notices that require parental approval as directed through the device settings; they should spare no effort in preventing the monetization of minors' data; and most importantly, they should stop turning a blind eye. Games are meant to be safe havens for children, not another data mine.

Reference List

Auxier, B. et al. (2020, July). Parenting Children in the Age of Screens. Pew Research Center. https://www.pewresearch.org/internet/2020/07/28/parenting-children-in-the-age-of-screens/

Hurtado, K. (2021, January). Surprising Facts on Child Cell Phone Usage Statistics. Parentology.com. https://parentology.com/what-you-need-to-know-child-cell-phone-usage-statistics/

Smith, D. (2018). Data Protection and Privacy. Flickr. https://www.flickr.com/photos/cerillion/43711943092/in/photolist-MEzZpm-bWN3ja-2kTHZbT-2b9KhiW-hUQW41-gnCX1D-29AF8Mf-hUSrR3-ZTSoxY-hUSzea-gnD4EF-hURL5a-2h4T4gP-hUR7dQ-2f7smZ3-2f7snqo-hUSWBD-gG7RGX-hUSn1b-2b9Kh8L-hURBuW-Mt2A1i-hUQuXP-2hQRkzj-hURNkR-psxct7-hUSRTJ-hUU4Lr-hUS7nA-hUSC34-hUSMt9-hUQFHX-2m9ci5Q-hUSuWw-hURCc4-hURktr-hURvjE-hUQU1o-gssXJa-gKn5tx-hUSHZC-QKUGX2-hURWmk-hURYHM-HDCSjS-gEoqyv-hUQXsc-hUT5xR-hURzZw-hURFWD

Zovoilis, T. (2013). Small boy with his mother looking at a tablet [Photograph]. Flickr. https://www.flickr.com/photos/55975562@N07/8547117491/

Deregulated NFTs as Building Blocks of the “Infraverse”
Noor Gill | June 23, 2022

Lede: A laissez-faire approach to managing NFTs, with compromised information from blockchain addresses, transaction activity, and location data, poses privacy concerns for Metaverse users.

Overview: The non-fungible token (NFT) marketplace is a niche corner of crypto where digital art is bought and sold via cryptocurrency. Despite the space's rising popularity in 2021, decentralized technologies, a lack of legal framework, and the immutable nature of blockchain have created a range of privacy concerns for Metaverse users, including the inability to anonymize or delete public transactional data, which removes users' representational autonomy, and the ease with which advertisers can collect users' personal information through their virtual personas.

       NFTs function as a form of digital art and currency that operates via cryptocurrencies (a form of digital currency, most often Ethereum) and the blockchain (an online method of recording and validating ownership of crypto assets). In an NFT transaction, information about the NFT, the buyer, and the seller flows between the parties in a manner governed by transmission principles built into the blockchain.

Fig 1: A double line plot for the weekly total cryptocurrency values and average transaction values on NFT platforms from 2021 to early 2022 (Chainalysis)

       With a peak weekly value of about 4 billion USD in early 2022 and a peak weekly average value per transaction of about 3.5 billion USD at the end of 2021 (Fig 1), the popularity and significance of NFTs in the global market are evident. The future of NFTs has even been debated in Congress and the U.S. Treasury, and the IRS confiscated over $3.5 billion worth of online currencies in 2021 alone. Moreover, decentralized technologies enable some of the most prevalent forms of crime in the NFT space: money laundering and tax evasion. The rise of NFTs comes with a rise in privacy and security concerns within the Metaverse, an intriguing and urgent area that needs further exploration.

Public Blockchains Lack Legal Framework

        In terms of privacy legislation for crypto-assets like NFTs, current online privacy law was not written or established with blockchain in mind. For example, data cannot be deleted from the blockchain, which contradicts the California Consumer Privacy Act (“CCPA”) and the EU General Data Protection Regulation (“GDPR”). Hence, there is an absence of legal framework protecting user information, with possible harm along the privacy dimension: users are not given accurate, transparent guidelines for data usage and deletion, and thus lack control over their information.

An Illusion of Privacy

        Upon purchasing an NFT, the buyer receives a digital ownership certificate, a sort of virtual receipt accessible to all users and updated after each transaction through the blockchain. While this public record of all transactions ensures transparency and can be leveraged to maintain transactions accurately, it also exposes the transactional history of the entire network, often tied to other identifiable information. Since the immutable nature of blockchain offers no option to make a transaction private or delete tokens, the workaround of burning and replacing wallets is vulnerable to exploitation and human error, ultimately removing a user's decision-making autonomy over how their data is handled.

        Another issue lies in the fact that users cannot give informed consent to receiving NFTs, since tokens can be sent to any address regardless of whether the owner approves the transaction. For example, when Jimmy Fallon displayed his Bored Ape NFT on his show in January 2022, it became easy to use that publicly available NFT information to locate Fallon's wallet address; one user even sent him 1,776 units of a token named “Let's Go Brandon,” which he did not consent to.
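To see how public this ledger really is, consider a minimal sketch using the web3.py library. The RPC endpoint URL below is a placeholder assumption; the transaction hash is the one cited in the references, and anyone can look it up:

```python
from web3 import Web3

# Connect through any public Ethereum JSON-RPC endpoint (placeholder URL).
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.com"))

# The token transfer discussed above, looked up by its public hash.
tx = w3.eth.get_transaction(
    "0x125714bb4db48757007fff2671b37637bbfd6d47b3a4757ebbd0c5222984f905"
)

# Sender, recipient, and value are visible to anyone, indefinitely.
print(tx["from"], tx["to"], tx["value"])
```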

The Fine Line Between Online and IRL

        Beyond this, the use of avatars and virtual identities gives users a false sense of detachment from their real-life identities, paving the way for advertisers and third parties to gather users' personal information through those avatars. Not only can this trap users within filter bubbles built from similar or desirable sets of advertisements and online experiences within the Metaverse, it also violates users' freedoms and right to anonymity during the information-processing stage.

Fig 2: A horizontal bar chart displaying total gain, defined as the difference between total profit and total expenses in USD, by wallet age for OpenSea accounts (Financial Times, Nansen)

       Due to unregulated access to users' transaction activity, personal identifiers, and location data, the potential harms of participating in NFT transactions may outweigh the benefits. Even the benefits are uneven: for accounts on the NFT marketplace OpenSea, realized gains (total profit minus the total cost of purchasing an NFT) were positive on a longer-term scale of 1-3+ years, as opposed to the negative gains of wallets open for less than a year (Fig 2). This shows how the monetary benefit of selling NFTs may be overstated or less tangible than it appears.

Parting Words

       As Web 3.0 (the emerging internet infrastructure based on decentralized networks and individual ownership of content) gains traction, it is vital to create a dedicated federal regulatory body for the NFT marketplace. As opposed to banning NFTs or continuing forward with the current status quo, establishing a federal presence in the realm of crypto to monitor transactions and audit platforms would prevent exploitation and define a structure that maximizes privacy and minimizes harm to both NFT creators and consumers.

References:

Etherscan.io. (n.d.). Retrieved June 23, 2022, from https://etherscan.io/tx/0x125714bb4db48757007fff2671b37637bbfd6d47b3a4757ebbd0c5222984f905

How filter bubbles distort reality: Everything you need to know. Farnam Street. (2019, November 14). Retrieved June 23, 2022, from https://fs.blog/filter-bubbles/

Latenight. (2022, January 24). Paris Hilton surprises Tonight show audience members by giving them their own nfts | tonight show. YouTube. Retrieved June 23, 2022, from https://www.youtube.com/watch?v=5zi12wrh5So&t=306s&ab_channel=TheTonightShowStarringJimmyFallon

Mulligan, D. K., Koopman, C., & Doty, N. (2016). Privacy is an essentially contested concept: A multi-dimensional analytic for mapping privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160118. https://doi.org/10.1098/rsta.2016.0118

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

Nissenbaum, Helen F. and Nissenbaum, Helen F., A Contextual Approach to Privacy Online (2011). Daedalus 140 (4), Fall 2011: 32-48, Available at SSRN: https://ssrn.com/abstract=2567042

Ravenscraft, E. (2022, April 5). NFTs are a privacy and security nightmare. Wired. Retrieved June 23, 2022, from https://www.wired.com/story/nfts-privacy-security-nightmare/

Solove, Daniel J., A Taxonomy of Privacy. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477, January 2006, GWU Law School Public Law Research Paper No. 129, Available at SSRN: https://ssrn.com/abstract=667622

Yahoo! (n.d.). IRS seized $3.5B in crypto-related fraud money this year as illicit activity multiplies. Yahoo! News. Retrieved June 23, 2022, from https://ph.news.yahoo.com/irs-seized-35-b-in-crypto-related-fraud-cases-this-year-as-illicit-activity-multiplies-150407019.html

Public Privacy: Is Digital Privacy Truly Attainable?
Shanie Hsieh | June 24, 2022

With the rise of the digital age, it becomes increasingly difficult to avoid leaving digital footprints. To keep up with new technologies, users share more and more of their data with companies and products. The question is: how can we properly protect data and privacy while continuing to innovate?

Privacy Policies and Frameworks

Guidelines already exist to guide and protect privacy. The Federal Trade Commission Act (FTCA) [1] is one commonly referenced example; among its important principles are definitions of deceptive practices, consumer choice, and transparency in privacy policies. The General Data Protection Regulation (GDPR) [2] and the California Consumer Privacy Act (CCPA) [3] serve a similar purpose in giving users power over their data. Multiple other frameworks go into more detail on protecting user information, but all of these are general guidelines that sites can easily bypass.

Who Reads Privacy Policies

Every application, product, and website these days has some sort of Terms of Service and Privacy Policy written up to gain users' trust and grow the platform. These documents often run thousands of words; Twitter's privacy policy, for example, is about 4,500 words. It is common knowledge, and even joked about, that most people click “I accept” without taking a second glance at what they are agreeing to. An article from The Washington Post [4] details this issue and offers a new approach to persuade companies to reformat privacy policies so that users can better understand what they are consenting to. If consumers aren't reading the policies that exist for their own protection, what purpose do these privacy policies truly hold?
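As a back-of-the-envelope illustration of the burden these documents impose, here is a tiny calculation, assuming a typical adult reading speed of roughly 240 words per minute:

```python
# Estimate how long a privacy policy takes to read, assuming ~240 words
# per minute (a typical adult reading speed).
def reading_minutes(word_count: int, words_per_minute: int = 240) -> float:
    return word_count / words_per_minute

# Twitter's ~4,500-word policy works out to roughly 19 minutes.
print(f"{reading_minutes(4500):.0f} minutes")
```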

Image 1: Pew Research shows how often users read privacy policies. (From Pew Research Center)

Trust In The Digital-verse

An article published in Forbes [5] notes, “In the digital world, trust issues are often data issues.” The article goes on to advocate that companies conduct their work ethically so as not to breach users' trust, so that in the long term users can trust what they are agreeing to across the web. In an ideal world, that would be our course of action, respecting all people and their privacy. Realistically, however, we have seen repeated breaches of privacy and manipulation through the use of personal data. We cannot rely solely on respect and trust to enact effective privacy policies and protection.

The Balance Between Privacy And Innovation

Rooted in algorithms and development, data is the backbone of technological advancement. Our biggest companies today, Google, Apple, Amazon, and others, have created some of the most influential products our world has ever seen, but at the cost of analyzing their users' data, which some users had no idea they had consented to. A quote from an interview on CBN News [6] puts it this way: “We don't realize how that data can be used to manipulate us, to control us, to shape our perception of truth and reality, which I think is some of the biggest questions in terms of technology and what it's doing to us: is altering our perception of reality.” The leading question now is, “Is privacy… real?”

Image 2: Cartoon depicting a person who had unknowingly consented to sharing personal information. (Created by Steve Sack on Cagle Cartoons)

What Can We Do?

As users, all we can do is read these policies carefully. If we assume we can trust the companies that publish privacy policies, then it is our job to read what is written and not obliviously consent to something we may not truly agree with. As for companies, they should make their policies simpler to understand. Twitter, for example, has tried turning its privacy policy into a game to help users understand the document. Overall, as we work toward a better future, users and companies should share a mutual respect in order to strike this balance between privacy and technology.

References

[1] Federal Trade Commission Act Section 5: Unfair or Deceptive Acts or Practices 

[2] GDPR

[3] California Consumer Privacy Act (CCPA)

[4] I tried to read all my app privacy policies. It was 1 million words.

[5] Trust In The Digital Age Requires Data Ethics And A Data Literate Workforce 

[6] Is Privacy the Tradeoff for Convenience in the Age of Digital Worship? | CBN News 

Images

[1] Pew Research 

[2] Cartoon 

How many company data breaches will affect you in your lifetime?
Tessa de Vries | June 23, 2022

While the title may scare you, this question is important to consider: many of you are likely users or clients of a company that has faced a cyberattack that breached user data and now faces a class-action lawsuit. Some of you may even be users or clients of several such companies.

It's difficult for the average user to understand exactly how a company may be misusing data, not only because of the required domain knowledge and legal nuance, but also because of companies' overwhelming tendency to use vague terms in their policies. However, when a lawsuit breaks the news and affects millions of users, it becomes a major concern that immediately grabs your attention. And these days, with user data at the forefront of nearly every company, it seems we blindly put our trust in systems that may have weak security.

You may wonder, so what companies and lawsuits are you talking about?

To name a few of the biggest, most widespread:

Capital One: In August 2020, Capital One was fined $80 million by the U.S. Treasury Department for careless network security practices that enabled a hacker, in July 2019, to access the personal information of 106 million of the bank's credit card holders. Among the largest breaches of its kind on record at the time, it compromised about 140,000 Social Security numbers and 80,000 bank account numbers (Press).

T-Mobile: In August 2021, T-Mobile announced that its systems had suffered a criminal cyberattack that compromised the data of millions of current, former, and prospective customers. Fortunately, the breach did not expose any customer financial information, but, as in so many breaches, some Social Security numbers, names, addresses, dates of birth, and driver's license/ID information were compromised (Sievert).

TikTok: In February 2021, TikTok agreed to pay $92 million to settle over a dozen lawsuits alleging that the platform harvested personal user data, including information gathered using facial recognition technology, without consent. TikTok allegedly tracked and sold the data of 89 million users to advertisers and third parties in China in violation of state and federal law (Allyn).

TikTok Notification – Lawsuit

Are there more?

The short answer is yes. Numerous companies, big and small, have faced legal consequences and lawsuits over data malpractice, and you can read about most of them online. A bigger concern, however, is how many companies engage in data malpractice and get away with it, flying under the radar. How many companies have weak security systems at risk of being hacked and suffering a data breach? Frustratingly, these questions are impossible to answer.

Well what can I do about it?

First, if you are or were a user at one of the companies that has faced a lawsuit or fine for data malpractice, you can file a claim and join the class action lawsuit.

Second, you can read up on company data policies in detail. Many of us hit the agree button without taking any time to read the actual data policy. Additionally, while reading the policy, you can evaluate it against state and federal privacy regulations.

Third, although this is very challenging, you can try to be selective about which companies you subscribe to. The good thing about this day and age is that countless companies provide the same service; for example, as of 2022, there are 747 wireless telecommunications carrier businesses.

Closing remarks

While we on the user end cannot control how a company handles our data or how strong its security is, it's important to stay aware and educate yourself on the potential risks. And when your data has been compromised, seek justice or some form of compensation to hold these companies accountable.

References

Jewett, Abraham. “Data Breaches: Current Open Lawsuits and Settlements.” Top Class Actions, 26 Apr. 2022, https://topclassactions.com/lawsuit-settlements/lawsuit-news/data-breaches-current-open-lawsuits-and-settlements/.

Allyn, Bobby. “Tiktok to Pay $92 Million to Settle Class-Action Suit over ‘Theft’ of Personal Data.” NPR, NPR, 25 Feb. 2021, https://www.npr.org/2021/02/25/971460327/tiktok-to-pay-92-million-to-settle-class-action-suit-over-theft-of-personal-data.

Press, The Associated. “Capital One Is Fined $80 Million for Huge Data Breach.” Fortune, Fortune, 7 Aug. 2020, https://fortune.com/2020/08/07/capital-one-fined-80-million-data-breach/.

Sievert, Mike. “The Cyberattack against t‑Mobile and Our Customers: What Happened, and What We Are Doing about It. ‑ t‑Mobile Newsroom.” TMobile, 27 Aug. 2021, https://www.t-mobile.com/news/network/cyberattack-against-tmobile-and-our-customers.

 

Fourth Amendment is For Sale
Jeremy Yeung | June 23, 2022

In the age of information, government agencies are subverting data protection laws by buying your sensitive and “protected” data.

How comfortable would you be showing your friends and family your entire browser history? If you wouldn't want your friends or family to know something, would you be comfortable with the government knowing it, for no reason at all? In fact, the government already has access to this data, and it doesn't need any justification to obtain it. Moreover, Snowden revealed that intelligence agencies collect text messages, contact lists, photos, and video chats directly from social networks, without your consent or a court order [1]. The phrase “Big Brother is watching you,” from the book Nineteen Eighty-Four, is as relevant as ever. This leads to the question: is any of our data protected from our own government?

When navigating the various legal texts, it is difficult to tell what data is accessible to government agencies without a warrant. Data protection starts with the Fourth Amendment of the U.S. Constitution, which protects people from unreasonable searches and seizures. Though the Electronic Communications Privacy Act of 1986 (ECPA) outlined broad protections for wire, oral, and electronic communications while being made or stored, it has since been altered by the Communications Assistance for Law Enforcement Act, the USA PATRIOT Act and its reauthorization acts, and the FISA Amendments Act [2]. For example, the ECPA prohibited the interception of any wire, oral, or electronic communication, but the USA PATRIOT Act notably allows the FBI to wiretap citizens without a warrant or probable cause. Clarity on these vague data protection laws often comes from Supreme Court cases, especially Riley v. California (2014) and Carpenter v. United States (2018), which recognized a reasonable expectation of privacy in the contents and geolocations of cell phones.

As it turns out, there are much easier ways for law enforcement to obtain sensitive information without probable cause: enter data brokers. Data brokers are intermediaries that buy data from parties such as weather-forecasting apps, which collect precise geolocation data. Law enforcement and government intelligence agencies can in turn buy that sensitive data from the brokers, because current laws apply only when the government compels disclosure. Government agencies including the IRS, DEA, FBI, and ICE have made millions in payments to private companies for mobile location information, often refusing to reveal how the information was used [3].

Potential for Harm
Besides being egregious violations of the Fourth Amendment, these privacy intrusions have important implications for civil rights. Location data from social media apps can be used to track protesters. This technique has been used in China to track people who do not align with the government's politics, including Muslim Uyghurs and protesters in Hong Kong; in the latter case, critics of the government have been tracked, and police have shown up at their doors at night [4]. It may be hard to imagine something like this happening in the United States, but the potential for geolocation and cell phone data to be weaponized still exists. Law enforcement in possession of widespread personally identifiable GPS data could easily use intimidation tactics against protesters taking part in anti-police-brutality movements such as Black Lives Matter. Even anonymized data can be aggregated with related datasets to quickly re-identify subjects, as a group of MIT scientists has shown.

To be honest, as active participants in society, we don't have many ways to avoid these privacy violations. A few practical measures include using a Faraday bag [5] and opting out of targeted advertising. Alternatively, since we know companies act in their own interest, change can start by advocating for data protection or by using only products that protect our privacy. But with the rapid advancement of technology, the goals of previous legislation have become easier to subvert, and the law remains the best way to restrain these agencies. In April 2021, a bipartisan bill called “The Fourth Amendment Is Not For Sale Act” was introduced to close this privacy loophole. If passed, it would protect geolocation, communication, and other sensitive data from being bought by intelligence agencies. Though the current version of the bill stops the sale of sensitive data, it does not prevent companies from handing over data, say, in exchange for lighter regulation [6]. Furthermore, there has been no movement since the bill's introduction. We must demand a stop to this erosion of our rights and privacy in this ever-evolving age of information.

References:
[1] https://theworld.org/stories/2013-07-09/17-disturbing-things-snowden-has-taught-us-so-far
[2] https://bja.ojp.gov/program/it/privacy-civil-liberties/authorities/statutes/1285
[3] https://www.vox.com/recode/22038383/dhs-cbp-investigation-cellphone-data-brokers-venntel
[4] https://apnews.com/article/asia-pacific-international-news-hong-kong-extradition-china-028636932a874675a3a5749b7a533969
[5] https://www.howtogeek.com/791386/what-is-a-faraday-bag-and-should-you-use-one/
[6] https://cdt.org/insights/new-cdt-report-documents-how-law-enforcement-intel-agencies-are-evading-the-law-and-buying-your-data-from-brokers/

Images:
[1] https://www.wallarm.com/what/securing-pii-in-web-applications
[2] https://khpg.org/en/1519768753