Uber can take you somewhere, but could it also take advantage of your personal information?
By Yuze Chen | March 7, 2020

Nowadays, people are using technology to help them get around the city. Ride-sharing platforms such as Uber and Lyft are used by hundreds of thousands of people around the world to get from point A to point B. These platforms offer riders lower-than-taxi fares and remove the hassle of booking with a taxi company in advance. They also give drivers an opportunity to earn extra income using their own vehicles. But along with the convenience of a ride at their fingertips, ride-sharing companies are also known for their sketchy privacy policies.


Photo by Austin Distel on Unsplash

What Data Does Uber Collect

According to Uber’s Privacy Notice, Uber collects a variety of data in three main categories: data provided by the user, such as username, email, and address; data created when using Uber services, such as the user’s location, app usage, and device data; and data from other sources, such as Uber partners who provide data to Uber. In short, personally identifiable information (PII) is collected and stored by Uber. Beyond that, Uber also collects and stores information that can reconstruct an individual’s daily life, such as location history. Given the wealth of PII Uber collects and stores, users should be concerned about what Uber can do with the data.


Photo by Charles Deluvio on Unsplash

Solove’s Taxonomy Analysis

Solove’s Taxonomy provides a framework for analyzing potential harms to the data subject (the user). Under Solove’s Taxonomy, Uber users are the data subjects. When a user uses the Uber service, their personal data is transmitted to the data holder, which is Uber. The data then goes through the Information Processing step of Solove’s Taxonomy, including aggregation, identification, and so on. After processing, there is an Information Dissemination step in which Uber can take action on the user’s data.

Information Collection
Let’s look at the first step in Solove’s Taxonomy: Information Collection. As discussed above, Uber collects data in three ways. However, there is a troubling clause in Uber’s privacy notice, section III. A. 2 (Location data): “precise or approximate location data from a user’s mobile device if enabled by the user… when the Uber app is running in the foreground (app open and on-screen) or background (app open but not on-screen) of their mobile device”. This means Uber can keep collecting the user’s current location in the background even when the user isn’t actively using the app. The clause gives Uber an opportunity to keep monitoring the user’s location after a single ride, creating potential harm: a user’s location may be collected continuously without their meaningful awareness or consent.
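To make the harm concrete, here is a minimal, hypothetical sketch of why continuous background pings matter: even coarse location samples, aggregated over time, reconstruct someone’s routine, including probable home and work locations. The coordinates and thresholds below are invented for illustration; this is not Uber’s actual system.

```python
# Hypothetical sketch: invented pings, not Uber's actual data or systems.
# Even coarse background pings, aggregated over time, reveal a routine.
from collections import Counter

# (hour_of_day, latitude, longitude) samples, as a background process
# might record them over a day
pings = [
    (8, 37.7749, -122.4194),   # morning, at home
    (9, 37.7858, -122.4065),   # commuting
    (10, 37.7899, -122.4009),  # at the office
    (17, 37.7899, -122.4009),  # still at the office
    (22, 37.7749, -122.4194),  # back home
]

def coarse_cell(lat, lon):
    """Round coordinates to a roughly 100 m grid cell."""
    return (round(lat, 3), round(lon, 3))

# Cells visited late at night suggest home; working hours suggest work
night = Counter(coarse_cell(lat, lon) for h, lat, lon in pings if h >= 21 or h < 6)
work = Counter(coarse_cell(lat, lon) for h, lat, lon in pings if 10 <= h < 18)

print("likely home:", night.most_common(1))
print("likely work:", work.most_common(1))
```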

Information Processing
In the Information Processing step of Solove’s Taxonomy, Uber has a couple of sketchy terms in its privacy policy. In section III. A. 2 (Transaction Information), Uber states that if a user refers a new user with a promo code, both users’ information will be associated in Uber’s database. This practice could harm both users’ privacy and lead to unpredictable consequences. For example, many users simply post their promo code to a public blog or discussion board. Without knowing who might redeem the code, there is potential privacy and legal harm in associating two strangers: if the referee were under investigation for a crime committed while using Uber, the referring user could be drawn into the legal matter, even though the two have never met.
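As an illustration, here is a hypothetical sketch of the kind of join that links a promo code’s owner to whoever redeems it. The schema and identifiers are invented, not Uber’s.

```python
# Hypothetical illustration only: invented schema and IDs, not Uber's
# actual database. A promo code silently links two strangers' records.
referrals = [{"code": "RIDE50", "owner": "user_123"}]        # code owner
signups = [{"new_user": "user_987", "code_used": "RIDE50"}]  # redeemer

code_owner = {r["code"]: r["owner"] for r in referrals}

# Associate each new user with the code's owner, even if the two people
# found the code on a public blog and have never met
associations = [
    (s["new_user"], code_owner[s["code_used"]])
    for s in signups
    if s["code_used"] in code_owner
]
print(associations)  # [('user_987', 'user_123')]
```

The point is that the association is automatic and durable: neither user is asked whether being linked to a stranger is acceptable.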
Information Dissemination
According to Privacy Policy section III. D, Uber may share user data with third parties such as Uber’s data analytics providers and its insurance and financing partners. It does not specify who those partners are or what they are able to do with the data. There are clear potential harms to the user in this step: the user might not want their data disclosed to any third party beyond Uber, and once the data leaves Uber’s hands, the user has even less visibility and control, raising further privacy concerns.

Conclusion
Based on our analysis of Uber’s privacy policy, we found plenty of surprising terms that could jeopardize users’ privacy. Data collection, data processing, and data dissemination all have issues with user consent and privacy protection, and some could even pull users into legal trouble. The wording of the Privacy Notice makes it easy for Uber to take advantage of user data legally, for example by selling it to partners. There is a lot Uber could do to better address these privacy issues in the Privacy Notice.

References
https://www.uber.com/global/en/privacy/notice/
https://wiki.openrightsgroup.org/wiki/A_Taxonomy_of_Privacy

Assessing Clearview AI through a Privacy Lens
By Jonathan Hilton | February 28, 2020

The media has been abuzz lately about the end of privacy, driven largely by the capabilities of Clearview AI. The New York Times drew attention to the startup in an article titled ‘The Secretive Company That Might End Privacy as We Know It’, which details the secretive nature of a company with expansive facial recognition capabilities. Privacy is a complicated subject that requires extensive context to understand what is and is not acceptable in a society. In fact, privacy has no universal meaning within society or the academic community: what counts as a privacy violation in one context may not in another. Since this ambiguity exists, we can deconstruct the context of vast facial recognition capabilities with Solove’s Taxonomy to better understand the privacy concerns with Clearview AI.

Prior to examining Clearview AI’s privacy concerns with Solove’s Taxonomy, we need to address what Clearview AI is doing with its facial recognition capability. Clearview AI CEO Hoan Ton-That (see figure 1) started the company back in 2017. The basic premise of the company is that its software can take a photo and use a database comprised of billions of photos to identify the person in the photo. How did they build this database without the consent of each person whose likeness is in it? Clearview AI has scraped pictures from Facebook, YouTube, Venmo, and many other sites to compile its massive database. The company claims the information is ‘public’ and thus does not violate any privacy laws. So who exactly uses this software? Clearview maintains that it works with law enforcement agencies and that its product is not meant for public consumption.
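Clearview has not published its pipeline, but photo-to-identity search generally works by embedding each face as a numeric vector and then finding the nearest stored vector. Here is a toy sketch of that generic technique, with fabricated three-dimensional “embeddings” (real systems use face-embedding models producing vectors with hundreds of dimensions).

```python
# Toy sketch of generic face search, not Clearview's actual system.
# Embeddings are fabricated and tiny for readability.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The scraped "database": identity -> face embedding
database = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}

query = np.array([0.88, 0.15, 0.28])  # embedding of the uploaded photo

# Identification is just nearest-neighbor search over the database
match = max(database, key=lambda name: cosine_similarity(database[name], query))
print(match)  # person_a
```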


Clearview CEO Hoan Ton-That

More information about Clearview AI can also be found at their website: https://clearview.ai/

After the Times published the article bringing Clearview AI’s practices to light, there has been intense public backlash over privacy concerns. Solove’s Taxonomy should help us decompose exactly what those concerns are.

Solove’s Taxonomy was published by Daniel J. Solove in January 2006. The basic components of the Taxonomy can be found in figure 2 below.


Solove’s Taxonomy

In the case of Clearview AI, the entire public is the Data Subject, or at least everyone who has posted videos or pictures or appeared in videos or pictures posted by others. So even if a person never posted to social media or a video site, their likeness may be part of the database simply because someone else posted their picture or caught them in the background of a photo. An obvious case of harm comes from surveillance under the Solove taxonomy. Locations, personal associations, activities, and preferences could be derived by running a person’s photo through the system and viewing all of the results. While Clearview AI claims on its website that its product is meant to search, not surveil, it is very difficult to see how the data could not be used for surveillance. The distinction between search and surveillance is a hard one to draw, since searches can quickly produce results that amount to surveillance.

In terms of information processing, the entire purpose of the software is to aggregate data to attain personal identification. With this type of information, great concerns arise about how this capability and data will be used. While Clearview maintains that its software serves the needs of law enforcement, the question of secondary use brings great concern. How long until law-enforcement-adjacent organizations gain access to the software, such as bounty hunters, private security, or private investigators? What other government organizations will gain the same capability for purposes such as taxes, welfare disbursement, airport surveillance, or government employment checks? The road to secondary uses of data by the government is well paved by historical precedent, such as the use of genealogical DNA for criminal investigations or Social Security numbers becoming de facto government ID numbers. Furthermore, what happens when Clearview AI is purchased and used by governments to repress their people? More authoritarian regimes could use the software to fully track their citizens and use it against anyone advocating human rights or democracy.


Pictures can be used for much more than law enforcement

Information dissemination also poses a risk: Clearview AI’s results could be distributed to others who would use them for nefarious purposes. For instance, if a bad actor in a government agency distributes a search of a friend or co-worker to someone else, what type of harm could that cause the person whose images were searched? And what happens when the software is hacked or reverse engineered so that the general public can run the same searches, or when searches are sold on the dark web? It is not a far cry to imagine a person being blackmailed over search results. We have already seen hackers encrypt hard drives to hold people ransom for money or explicit images, so this would be another extension of the same practice.

Finally, invasion is a real threat to all data subjects. A search of a person’s image that reveals not just who they are but what they are doing, when they are doing it, and with whom could be used in many ways to affect that person’s day-to-day decision making and activities. For instance, if this information can be sold to retailers or marketers, a person may find that a targeted advertisement appears at just the right place and time to influence their decision. Furthermore, the aforementioned bad-actor scenario could lead to direct stalking or home invasion if significant details are known about a person. A few iterations of this technology could put the software’s capability on an AR/VR IoT device that identifies people, and perhaps some of their personal information, as they are seen in public. In short, a person could wear the device and see who each passerby is in a mall or on a street.

In conclusion, Solove’s Taxonomy does help deconstruct the privacy concerns associated with Clearview AI’s facial recognition capability. The serious concerns with the software are unlikely to go away anytime soon; unfortunately, once this type of technology is developed, it rarely disappears. Society will have to grapple more and more with the growing power and capability of facial recognition.

References:

https://www.popularmechanics.com/technology/security/a30613488/clearview-ai-app/
https://en.wikipedia.org/wiki/Clearview_AI

Clearview AI CEO Defends Facial Recognition Software


Solove, Daniel J., ‘A Taxonomy of Privacy’, University of Pennsylvania Law Review, Jan 2006.

Is Google’s Acquisition of Fitbit Healthy For Users?
By Eli Hanna | February 28, 2020

Fitbit Overview

Fitbit is a company that offers wearable fitness tracking devices and online services to help users track their activity and fitness goals. The devices track the number of steps taken, heart rate, sleep patterns, and other fitness-related information. This data is displayed on a dashboard through the Fitbit mobile or web applications, enabling users to view their activity and track progress toward their goals. With over 28 million active users and 100 million devices sold, Fitbit has accumulated a wealth of fitness data on its users.

Given this wealth of user information, Fitbit views consumer trust to be critical to their business and promises to never sell personal user information. In fact, their privacy policy states that personal information will not be shared with third parties with three exceptions:

  • When a user agrees or directs Fitbit to share information. This includes participating in a public challenge or linking your account to an external social media account.
  • For external processing. Fitbit transfers information to external service providers who process the information according to the Fitbit privacy policy.
  • For legal reasons to prevent harm. Where provided by law, Fitbit will provide information to government officials. In such a case, it is Fitbit’s policy to notify users unless they are prohibited from doing so or if there is an exigent circumstance involving a danger of death or serious injury.

Acquisition By Google

On November 1, 2019 Fitbit announced that it had agreed to be acquired by Google. From Fitbit’s perspective, this acquisition means they will have more resources to invest in wearable technology and make their products and services available to a larger population. Google also benefits from the acquisition by purchasing rights to a popular brand of wearable technology, an area where they have struggled to compete with Apple.

Privacy Concerns
Despite the business opportunities this acquisition offers, Fitbit users have voiced concerns that the deal may put their privacy at risk. In contrast to Fitbit’s promise to protect users’ personal information, Google has built its business by collecting user data through its services and selling that information to advertisers. This leaves some customers wondering if their sleeping or exercise habits will soon be sold to the highest bidder.

For their part, Fitbit and Google have promised that personal information collected from Fitbit users will never be sold and “Fitbit health and wellness data will not be used for Google ads.” However, regulators remain unconvinced. In February 2020, the European Data Protection Board (EDPB) issued a statement, “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.” The EDPB also directed Google and Fitbit to conduct a “full assessment of the data protection requirements and privacy implications of the merger”.

Where It Stands

While the proposed acquisition of Fitbit by Google may provide the resources needed to develop better fitness trackers and services for users, both companies have a long way to go to convince consumers and regulators that personal information will be handled responsibly. To address these concerns, both companies should update their privacy policies to inform users how their data will be used by Google. In addition, they should take proactive steps to identify how aggregated or de-identified information may be combined with other information collected by Google and outline safeguards to ensure user privacy remains intact.

A Privacy Policy To Which Nobody Agreed
By Andrew Morris | February 24, 2020

On Monday, February 10th, Attorney General Barr announced an indictment of four members of China’s Military for hacking into Equifax.

Equifax operates in a unique space: like Facebook, it has troves of data about a significant number of people, including specific data about financial balances, transactions, payment histories, and creditworthiness. The data may not be as socially personal as Facebook’s, but it is every bit as sensitive, if not more so. Yet unlike with Facebook, nobody agreed to house their data there.

Equifax doesn’t have a privacy policy as much as a marketing page about privacy[1] and a “privacy statement”[2].

In this document, Equifax has taken the time to ensure that they are compliant with laws and best practices about data management and correction right up until the point where it starts to involve sensitive data.

It is worth noting that the California Consumer Privacy Act (CCPA) permits California residents to manage and delete their data. A dedicated page[3] details those rights. However, on my attempt to actually exercise these rights (2/21/2020 at 7:39pm PST), their dedicated site was unresponsive to requests.

Given the scope of recent breaches (147 million US residents), it might stand to reason that regulators and government agencies would address consumer rights in the United States. The FTC made a statement about the Equifax data breach recently and accompanied it with some additional information.[4] On this page, there is a telling ‘question and answer’ that the FTC provides:

Q: I don’t want Equifax to have my data. What can I do?

A: “Equifax is one of three national credit bureaus. These companies collect information about your credit history, such as how many credit cards you have, how much money you owe, and how you pay your bills. Each company creates a credit report about you, and then sells this report to businesses who are deciding whether to give you credit. You cannot opt out of this data collection. However, you can review your credit report for free and freeze your credit.”

In other words, the financial credit system is so essential to commercial operations, the FTC has decided that this data collection is effectively mandatory for most of America.

This organizational system, where private organizations are responsible for infrastructure and data management for the financial system, is not unique to the United States. Some examples highlight the key differences:

  • Germany relies on a company called Schufa Holding AG. [5] However, they provide customers the right to erase, rectify and restrict processing of personal data under GDPR. [6]
  • Austria relies on another company called Kreditschutzverband von 1870 (KSV1870 for short; literally, Credit Protection Association of 1870), which operates a blacklist-style credit list. This type of system would seem ill-suited to granting opt-out rights, and yet KSV1870 does allow the Austrian Data Protection Authority to intervene. [7]
  • The UK uses a variety of companies. One of them is TransUnion, which maintains a specific page on the right to delete data [8]; the process requires some discussion and acknowledgment of the potential consequences, but it exists.

These exceptions seem to be limited to Europe. Anywhere the General Data Protection Regulation (GDPR [9]) applies, data subjects have rights. To summarize the legislation, these rights include simple terms and conditions explaining consent, timely notification of data breaches, the right to access your data, the right to be forgotten, data portability, and privacy by design. The GDPR also requires appropriate technical and organizational measures to ensure that security levels are commensurate with risk.

In other words, many of the protections built into the GDPR would address both the rights of data subjects and potentially help some of the operational elements that permitted the Equifax data breach. Consumers and data subjects in the United States would benefit from either an expansion of the CCPA or GDPR to cover all residents.

Can You Trust Facebook with Your Love Life?
By Ann Yoo Abbott | February 21, 2020

If you have ever heard about the Facebook data privacy scandal or emotion experiment, you probably don’t trust Facebook with your personal data anymore. Facebook had let companies such as Spotify and Netflix read users’ private messages, and Facebook was sued for letting the political consulting firm Cambridge Analytica access data from some 87 million users in 2018. It doesn’t end there. For one week in 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users). The results of this study were published in the Proceedings of the National Academy of Sciences (PNAS).

Recently, Facebook launched its new dating service in the US. The company has been advertising that privacy is important when it comes to dating, so it consulted with experts in privacy and consumer protection and embedded privacy protections into the core of Facebook Dating. Facebook says it is committed to protecting people’s privacy within Facebook Dating so that it can create a place where people feel comfortable looking for a date and starting meaningful relationships. Let’s see if this is really the case.

Facebook Dating Privacy & Data Use Policy

Facebook Dating’s privacy policy is an extension of the main Facebook Data Policy, and it includes a warning that Facebook users may learn you’re using Facebook Dating via mutual friends.

Here are some of the highlights from Facebook’s Data Policy as far as what information is collected when you create a profile and use Facebook Dating:

Content: Facebook collects the “content, communications and other information you provide” while using it, which includes what you’re saying to your Facebook Dating matches. “Our systems automatically process content and communications you and others provide to analyze context and what’s in them.”

Connections: Facebook Dating will also analyze what Facebook groups you join, who you match and interact with, and how – and analyze how those people interact with you. “…such as when others share or comment on a photo of you, send a message to you, or upload, sync, or import your contact information.”

Your Phone: Facebook Dating collects a lot of information from your phone, including the OS, hardware & software versions, battery level, signal strength, location, and nearby Wi-Fi access points, app and file names and types, what plugins you’re using and data from the cookies stored on your device. “…to better personalize the content (including ads) or features you see when you use our Products on another device…”

Your Location: To get suggested matches on Facebook Dating, you need to allow Facebook to access your location. Facebook collects and analyzes all sorts of things about where you take your phone, including where you live and where you go frequently. Even Bluetooth signals and information about nearby Wi-Fi access points, beacons, and cell towers are part of what they collect (the sketch after this list shows why access points alone can reveal your location). Facebook also analyzes what location info you choose to share with Facebook friends, and what they share with you.
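Why do nearby Wi-Fi access points matter? If a provider already has a map of where access points sit, then seeing which ones your phone detects pins you down without GPS. A simplified, hypothetical sketch follows; the BSSIDs and coordinates are invented, and this is not Facebook’s actual method.

```python
# Simplified, hypothetical sketch: invented BSSIDs and coordinates.
# Detecting already-mapped access points localizes a phone without GPS.
known_aps = {  # BSSID -> (lat, lon), from crowdsourced Wi-Fi maps
    "aa:bb:cc:00:00:01": (37.7749, -122.4194),
    "aa:bb:cc:00:00:02": (37.7751, -122.4190),
    "aa:bb:cc:00:00:03": (37.7747, -122.4198),
}

visible = ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03"]  # what the phone sees

# Estimate position as the centroid of the visible, already-mapped APs
points = [known_aps[b] for b in visible if b in known_aps]
lat = sum(p[0] for p in points) / len(points)
lon = sum(p[1] for p in points) / len(points)
print((lat, lon))  # a decent location fix, no GPS required
```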

This list does not include everything that Facebook takes from us. Even beyond all this information, there is still more listed in their Data Policy. You’ll want to read it for everything Facebook discloses about what it collects and how it’s used. Do you think Facebook collecting so much of our information is justified? Don’t you think it’s excessive?

Why the Biggest Technology Companies are Getting into Finance
By Shubham Gupta | February 16, 2020

With the rise in popularity of “fintech” companies such as Paypal, Robinhood, Square, Plaid, Affirm, and Stripe, the general public is becoming more and more used to conducting monetary transactions through their phones rather than with their wallets. And of course, the biggest tech companies are trying to claim a piece of the pie as well. Apple and Amazon both offer a credit card, Google is planning to offer checking accounts to users who use Google Pay, and Facebook is going as far as creating a new currency called Libra.

Of course, by expanding into these new financial products, big tech companies are widening their revenue streams and aiming to immerse users ever deeper in their ecosystems. However, one of the biggest yet least visible advantages of offering these products is that they open the door for big tech companies to collect financial data. For example, processing a payment through services such as Apple Pay or Google Pay allows the company to record what the consumer bought, when they bought it, where they bought it, and how much money they spent. This information could be used to better understand consumer spending behavior and serve more targeted advertising.
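As a hypothetical sketch (invented records and categories, not any company’s real schema), here is how raw payment metadata rolls up into exactly the kind of spending profile an ad system can target:

```python
# Hypothetical sketch: invented records and categories, not any
# company's real schema. Payment metadata rolls up into an ad profile.
from collections import defaultdict

transactions = [
    {"merchant": "Blue Bottle", "category": "coffee", "amount": 6.50},
    {"merchant": "Whole Foods", "category": "grocery", "amount": 84.12},
    {"merchant": "Blue Bottle", "category": "coffee", "amount": 5.75},
]

profile = defaultdict(lambda: {"count": 0, "total": 0.0})
for t in transactions:
    profile[t["category"]]["count"] += 1
    profile[t["category"]]["total"] += t["amount"]

# "Buys coffee often" is exactly the signal a targeted ad system wants
print(dict(profile))
```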

Another danger of these services is security. Tech companies having access to your sensitive financial information could lead to that information being exposed in the event of a data breach or hack. In the past, both Google and Facebook have suffered numerous data breaches that exposed millions of consumers’ social media and email accounts. Additionally, because big tech companies are not held to the same regulations as banks and other financial institutions, they have more latitude in how they handle your finances than banks do. These worries are reflected in recent surveys asking consumers whether they trust big tech corporations with their finances. As seen in the chart below, consumer trust in tech companies handling their finances hovers around 50–60%. Facebook, probably due to its history of privacy violations, ranks lowest at around a 35% approval rating.

That’s not to say that big tech’s contributions in the finance space are without merit. Apple’s latest credit card, along with being a sleek piece of titanium, offers users helpful charts and visualizations to keep track of their spending and eliminates hidden fees. Facebook’s Libra, despite having lost many partners, is envisioned to use blockchain to process payments, making them cheaper, faster, and more secure. Amazon also offers solid cash back on purchases made on Amazon.com, at Whole Foods, and at various restaurants, gas stations, and drug stores.

With tech giants offering such a wide variety of products and services, finance is only the latest frontier for big tech. With the surge in popularity of these products, especially among the younger generation, more and more companies are taking notice and trying to innovate in this space as well. But as companies continue to roll out new financial products and solutions, government regulation is quick to follow.

References:

https://www.ngpf.org/blog/question-of-the-day/qod-which-tech-company-would-you-trust-the-most-to-handle-your-finances-amazon-google-or-apple/

https://9to5mac.com/2019/08/02/apple-card-details/

Air Tags: Finding your Keys or your Sensitive Information?
By Chelsea Shu | February 7, 2020

With people becoming more reliant on features such as Apple’s Find My iPhone to help them find their phone or Mac laptop, people wish there was also a way to find other commonly misplaced items such as their wallet and keys. Apple may have a solution.

Apple is reportedly creating a new product called Air Tags that will track items through Bluetooth technology. The Air Tags are expected to be small, white tags that attach to important items with an adhesive.

They have tracking chips that connect them to an iPhone app called “Find My.” This enables users to locate and track all of their misplaced items through the app on their iPhone. The Air Tags are also expected to have a feature allowing a person to press a button to emit a sound from the Air Tag, making a lost item easy to find.
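Air Tags were unannounced at the time of writing and their protocol is not public, but any Bluetooth tracker has to broadcast something scannable. A generic scan with the third-party Python library bleak shows how much a nearby listener can observe; treat this as a sketch of Bluetooth scanning in general, not a description of Apple’s design.

```python
# Generic BLE scan, not Apple's protocol. Requires the third-party
# `bleak` library (pip install bleak) and Bluetooth hardware.
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        # Any identifier broadcast consistently over time is a handle a
        # nearby listener could use to follow the tag, and its owner
        print(d.address, d.name)

asyncio.run(main())
```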

Privacy Concerns

While this product may offer consumers an opportunity to seamlessly keep track of their items, it may be too good to be true. Apple already collects a multitude of personal data: what music we listen to, what news we read, and what apps we use most often. Introducing more items such as wallets and keys into Apple’s tracking system means Apple has increased surveillance on a person’s daily activities and their locations throughout the day. Increased surveillance means more access to a person’s personal life and possibly, sensitive information.

While Apple claims that it does not sell location information to advertisers, its privacy policy states that its Location Services function “allows third-party apps and websites to gather and use information based on current location.” This raises a concern because it enables third parties to collect this data for their own purposes and it is unclear what they will do with this data. Given this data, these third party companies can now track when Air Tag users arrive and leave places as well as monitor their tendencies.

Furthermore, there are large consequences if this data lands in the hands of a malicious person. Creating a product like Air Tags opens up the possibility of data being seen or accessed by someone for whom it was never intended. This can lead to unwanted information being exposed or used against the person whose data is being tracked.

Apple also claims that it scrambles and encrypts the data it collects. But in reality, truly anonymizing data is quite difficult. While Apple claims that the location data it collects “does not personally identify you,” combining the magnitude of data Apple holds on each person could make it possible to fit the puzzle pieces together and identify a person, violating their privacy and exposing their sensitive information.
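A classic way the puzzle pieces fit together is a linkage attack: joining an “anonymized” dataset to a public one on shared quasi-identifiers such as ZIP code, birth year, and sex. The toy sketch below uses invented data; it shows the generic technique, not anything specific to Apple.

```python
# Toy linkage attack with invented data: neither dataset alone names the
# person, but joining on shared quasi-identifiers re-identifies them.
anonymized = [  # e.g., released or breached device data, no names
    {"device": "dev_42", "home_zip": "94110", "birth_year": 1985, "sex": "F"},
]
public = [  # e.g., voter rolls or other public records, with names
    {"name": "Jane Doe", "zip": "94110", "birth_year": 1985, "sex": "F"},
]

for a in anonymized:
    for p in public:
        if (a["home_zip"], a["birth_year"], a["sex"]) == (
            p["zip"], p["birth_year"], p["sex"]
        ):
            print(a["device"], "is likely", p["name"])
```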

In summary, the addition of Apple Air Tags might seem like a convenient, useful idea, but what people do not realize is that opting in to such a product opens up the doors for increased data surveillance.

References
https://www.macrumors.com/guide/airtags/
https://www.businessinsider.com/apple-airtags-new-iphone-product-rumors-release-date-features-2020-2
https://www.usatoday.com/story/tech/talkingtech/2018/04/17/apple-make-simpler-download-your-privacy-data-year/521786002/
https://support.apple.com/en-us/HT207056

Amazon Go Stores
By Rudy Venguswamy | February 8, 2020

The tracking of retail shopping has always been a huge part of American consumer culture. It almost seems a given that after Black Friday, news stations will broadcast numbers about how this Black Friday compared in revenue to previous years. These numbers, though, are only a peek mainstream consumers get into the obsessive relationship retailers have with tracking customers. The amount of data retailers now push to have about consumers has grown exponentially thanks to smart technology such as computer vision, internet analytics, and customized machine learning models built to sell more to consumers.

The holy grail for retailers, with this in mind, has always been tracking not just sales but the entire sales cycle. The rise of online sales has made part of this job easier, but quite interestingly, the biggest juggernaut of online sales, Amazon, has in the past two years opened a physical store, one that in many ways harkens back to traditional shopping but is also a step closer to the coveted grail of retail: tracking everything about a consumer.

In its new Amazon Go stores, cameras decorate every corner, and machine learning plays the field, tracking each shopper’s identity (though Amazon insists without facial recognition) and their movements in the store, and linking all of this to their online presence, creating perhaps the greatest concentration of insight into a consumer walking through a store that the world has ever seen.

This newfound power, transcending the physical and online shopping experience, is without a doubt a marvelous engineering feat, built on hundreds of millions of dollars of R&D and sophisticated matching algorithms that detect reluctance in consumers and nudge them with coupons offline and online.

This power, however, will also shift the paradigm for privacy in the real world. Most consumers expect their activities online and their interactions in the real world to stay, for the most part, separate. This new way of doing commerce means that the physical-online wall has all but evaporated, abstracted away into ML models.

Under an ethical framework of subject interaction with experimentation via machine learning, I think Amazon Go stores are minefields for unethical manipulation of consumers. Though Amazon has made off-the-cuff promises about what AI technology is “currently” allowed to operate in the store (such as no face detection), these assurances should not be reassuring: they are, in truth, subject to change based solely on Amazon’s bottom line and engineering prowess. Simply by walking into the store, consumers are forced into a game played by algorithms whose purpose is to maximize sales. It’s already dubious when this happens online. It should be exponentially more concerning when this manipulation enters our physical world too.

In conclusion, these Amazon Go stores, which track nearly every aspect of the consumer, degrade the inherent privacy wall between real life and online interaction. This is problematic because the subjects of this incursion, the consumers, are unwitting; customers don’t meaningfully consent when they walk into a store. Placing limits on artificial intelligence and its manipulation of our physical interactions with stores is critical to protecting consumers from an otherwise predatory retail practice.

Is privacy a luxury good?
By Keenan Szulik | February 7, 2020

As one of the largest companies in the world—now valued at over $1 trillion—Apple Inc. has masterfully crafted its products and its brand into an international powerhouse. In fact, in 2002, almost 20 years ago, Wired Magazine declared “Apple: It’s All About the Brand”, detailing how Apple’s brand and marketing had driven the continued rise of the company.

Industry experts used words like “humanistic”, “innovation”, and “imagination” to describe Apple’s brand in 2002. But now, in 2020, there’s a new word that comes to mind: luxury.

In many ways, Apple has been slowly moving itself into the luxury goods market for years: first with the Apple Watch in 2015, its first foray into wearable devices. Then, into jewelry, with the introduction of AirPods in 2016. More recently, with the iPhone X and its hefty price tag of $999. And to really cement itself as a luxury brand, Apple released its most expensive computer yet in late 2019. The price? Over $50,000.


Source: https://qz.com/1765449/the-apple-mac-pro-can-cost-over-50000/

Apple’s new Mac Pro, priced at over $50,000 when fully loaded with all possible customizations.

This (luxury) branding is important, especially as Apple continues its competitive war against Google’s Android. Android devices, unlike Apple’s iOS devices, are incredibly price accessible. They do not cost $999, like a new iPhone X.

Android’s price accessibility has enabled it to become the global smartphone market share leader, contrary to what some American readers may think. In fact, according to estimates from IDC (https://www.idc.com/promo/smartphone-market-share/os), Android phones represent nearly 87% of smartphones globally, compared to Apple’s 13%.

Apple’s iOS dominates North America, but lags across the rest of the globe, per DeviceAtlas.

How has Apple responded in the face of competition and proliferation from Android? Privacy.

For over a decade, Google—the maker of Android smartphones—has been mired in privacy concerns. Most recently, a questionable handling of personally identifiable information in advertising and an Amnesty International report detailing “total contempt for Android users’ privacy” entrenched the stereotype that Google did not respect the privacy of its users.

Apple pounced on this opportunity, launching a series of ads titled “Privacy on iPhone” with the slogan “Privacy. That’s iPhone.” Two such ads, from 2019, now have over 25 million views each on YouTube.

This is where it gets interesting: by leveraging privacy as a competitive advantage, Apple associates privacy with its luxury brand. Apple customers deserve and receive privacy; the rest, not so much. This assertion is a subtle one, but it’s absolutely critical: Apple is effectively telling consumers that privacy is a luxury good.

There are two resultant questions from this:

1. Is privacy a luxury good, or a fundamental human right? (Let’s ignore, for the time being, defining “privacy”.)

2. Technically, how would Apple achieve this privacy in ways that its competitors would not?

The ethics of the human right to privacy is a fascinating debate for the 21st century. Smartphones are now nearly ubiquitous, and the meaning of privacy has changed dramatically over the last decade (as it almost certainly changed dramatically over every prior decade, thanks to technological innovation). But it’s worth noting: there are many technical tradeoffs when engineering with privacy as a goal.

Apple, for one, has taken many steps to engineer privacy into its products by creating “on-device intelligence” systems (which it has also effectively marketed). This means that rather than taking data and sending it back to Apple’s servers to be processed, the data can be processed on your phone, which you own and control. Google has also taken steps to achieve this on-device intelligence, but has communicated its benefits less effectively to consumers.
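A minimal sketch of the on-device idea, with a made-up stand-in for the model (this is not Apple’s actual implementation): in the server-side design the raw data itself is uploaded, while in the on-device design only a derived label ever exists to share.

```python
# Minimal sketch with a made-up stand-in "model"; not Apple's actual
# implementation. The contrast is in what leaves the device.
def send_to_server(payload):
    """Stub for a network call; whatever is passed here leaves the phone."""
    print("uploaded:", payload)

def classify_on_device(pixels):
    """Stand-in for an on-device ML model; runs entirely locally."""
    return "cat" if sum(pixels) % 2 == 0 else "dog"

photo = [12, 200, 37, 41]  # fake raw image data

# Server-side design: the raw photo itself is uploaded for processing
send_to_server(photo)

# On-device design: processing happens locally; at most the label leaves
send_to_server(classify_on_device(photo))
```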

Building these on-device intelligence systems, however, is expensive. Privacy, in turn, is expensive. And Apple uses this, in part, to justify the high price tag on its iPhones (further asserting privacy as a luxury good).

All of this is to say that we’re in a trying time. As brands such as Apple and Google introduce privacy as a point of competition, we as consumers feel the impact of their choices. This could have a positive effect: Apple and Google could enter a privacy war, raising the privacy standards in a way that positively benefits all consumers. Or it could deepen divides, with privacy becoming a luxury good afforded to the rich and powerful, and revoked from those with less.

Parallel Reality: Fixated on a Pixelated World
By Michael Steckler | January 24, 2020

While augmented and virtual reality continue to receive increasing media coverage and popularity in both tech and gaming circles, new strides in parallel reality technology deserve similar focus, if not more. Parallel Reality Displays, a technology developed by Redmond, Washington-based startup Misapplied Sciences, are enabled by a new pixel that can simultaneously project up to millions of light rays of different colors and brightness. Each ray can then be software-directed to a specific person. It is a mind-bending innovation that allows a hundred or more viewers to share a digital display, sign, or light at the same time and each see something different. Misapplied Sciences has partnered with Delta Air Lines to test the technology in a beta experience at Detroit Metropolitan Airport later in 2020. The idea is that different travelers could see their own personalized flight information on the same screen at the same time.
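Misapplied Sciences has not published implementation details, but the geometric idea can be sketched: for each viewer, compute the direction from a pixel to that viewer and emit that viewer’s content only along (approximately) that ray. A toy illustration with invented positions:

```python
# Toy illustration with invented positions; not Misapplied Sciences'
# actual implementation. One pixel, many viewers, each served along
# the ray that points at them.
import math

pixel = (0.0, 0.0)  # one display pixel, at the origin

viewers = {  # viewer -> (x, y) position in front of the display
    "traveler_to_tokyo": (3.0, 4.0),
    "traveler_to_paris": (-2.0, 5.0),
}

for name, (vx, vy) in viewers.items():
    # Angle from the pixel to this viewer; the pixel shows this viewer's
    # content only along (approximately) this direction
    angle = math.degrees(math.atan2(vy - pixel[1], vx - pixel[0]))
    print(f"{name}: emit along {angle:.1f} degrees")
```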

The airport scenario is just one of many applications of the technology. Parallel reality offers personalized experiences to people looking at the same screen. As such, its privacy implications are rich and distinct from those of more traditional virtual environments, such as online news forums and social network platforms. Those arenas are threatened by issues such as fake news, targeted and native advertising, and online surveillance. Parallel reality suffers from these same threats, and more. Since parallel reality customizes people’s experiences in the physical world, the technology relies on physical surveillance and poses Orwellian threats accordingly. Moreover, different information can be spread to different people in alarming ways. This has the potential to further solidify the disparities between people with different politics, values, interests, and beliefs. For example, the isolation of “bubbles” between Democrats and Republicans could be exacerbated and evolve into something worthy of a Black Mirror episode.

Aware of these privacy concerns, Misapplied Sciences has advocated for opt-in advertising, as well as anonymized tracking of an individual’s physical location. The upside of this technology’s potential does not have to be consumed by its potential dangers. Misapplied Sciences may have designed their technology with a privacy-first mindset, but will other companies exercise similar due diligence? In an age marked by growing levels of digital distrust, advocacy groups and lawmakers alike should begin brainstorming appropriate regulations to mitigate the risks associated with parallel reality.
