Air Tags: Finding your Keys or your Sensitive Information?
By Chelsea Shu | February 7, 2020

As people become more reliant on features such as Apple's Find My iPhone to locate a misplaced phone or Mac laptop, many wish there were also a way to find other commonly misplaced items, such as wallets and keys. Apple may have a solution.

Apple is reportedly developing a new product called Air Tags that will track items using Bluetooth technology. The Air Tags will be small, white tags that attach to important items with an adhesive.

Each tag contains a tracking chip that connects to an iPhone app called "Find My," enabling users to locate and track all of their lost items from their phone. The Air Tags will also have a feature that lets a person press a button to make the tag emit a sound, so a misplaced item can be found easily.

Privacy Concerns

While this product may offer consumers an opportunity to seamlessly keep track of their items, it may be too good to be true. Apple already collects a multitude of personal data: what music we listen to, what news we read, and what apps we use most often. Introducing more items such as wallets and keys into Apple's tracking system means increased surveillance of a person's daily activities and their locations throughout the day. Increased surveillance means more access to a person's personal life and, possibly, their sensitive information.

While Apple claims that it does not sell location information to advertisers, its privacy policy states that its Location Services function "allows third-party apps and websites to gather and use information based on current location." This is concerning because it enables third parties to collect location data for their own purposes, and it is unclear what they will do with it. With this data, third-party companies can track when Air Tag users arrive at and leave places, as well as monitor their habits.

Furthermore, the consequences are serious if this data lands in the hands of a malicious person. Creating a product like Air Tags opens up the possibility that data will be seen or accessed by someone it was never intended for, and that unwanted information will be exposed or used against the person being tracked.

Apple also claims that it scrambles and encrypts the data it collects. In reality, though, anonymizing data is quite difficult. While Apple claims that the location data it collects "does not personally identify you," combining the magnitude of data Apple holds on each person could make it possible to fit the puzzle pieces together and identify someone, violating that person's privacy and exposing their sensitive information.
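To make the re-identification risk concrete, here is a minimal, purely illustrative sketch (toy data and hypothetical field names, nothing from Apple's actual systems): even with names stripped, the home and work locations implied by a device's pings can single out one person once they are joined with auxiliary data.

```python
# Illustrative sketch only: toy data and hypothetical field names.
# It shows how "anonymous" location pings can be narrowed to one person by
# combining a home estimate, a work estimate, and a small public directory.

from collections import Counter

# Hashed device ID -> list of (hour_of_day, place) pings
pings = {
    "a1b2c3": [(2, "12 Elm St"), (3, "12 Elm St"), (10, "500 Market St"),
               (14, "500 Market St"), (23, "12 Elm St")],
}

# Auxiliary data: who lives where and works where (toy example)
directory = [
    {"name": "Alice", "home": "12 Elm St", "work": "500 Market St"},
    {"name": "Bob",   "home": "9 Oak Ave", "work": "500 Market St"},
]

def most_common_place(records, hours):
    """Most frequent place visited during the given hours, or None."""
    places = [place for hour, place in records if hour in hours]
    return Counter(places).most_common(1)[0][0] if places else None

for device, records in pings.items():
    home = most_common_place(records, hours=range(0, 6))    # overnight pings
    work = most_common_place(records, hours=range(9, 18))   # business hours
    matches = [p["name"] for p in directory if p["home"] == home and p["work"] == work]
    print(device, "->", matches)   # ['Alice']: the "anonymous" device is identified
```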

In summary, the addition of Apple Air Tags might seem like a convenient, useful idea, but what people do not realize is that opting in to such a product opens the door to increased data surveillance.

References
https://www.macrumors.com/guide/airtags/
https://www.businessinsider.com/apple-airtags-new-iphone-product-rumors-release-date-features-2020-2
https://www.usatoday.com/story/tech/talkingtech/2018/04/17/apple-make-simpler-download-your-privacy-data-year/521786002/
https://support.apple.com/en-us/HT207056

Amazon Go Stores
By Rudy Venguswamy | February 8, 2020

The tracking of retail shopping has always been a huge part of American consumer culture. It almost seems a given that after Black Friday, news stations will broadcast numbers comparing this year's revenue to previous years'. Those numbers, though, are only a glimpse mainstream consumers get into the obsessive relationship retailers have with tracking customers. The amount of data retailers now push to gather about consumers has grown exponentially thanks to technologies such as computer vision, internet analytics, and customized machine learning models built to sell them more.

The holy grail for retailers, with this in mind, has always been not just tracking sales but tracking the entire sales cycle. The rise of online sales has of course made part of this job easier. Interestingly, though, the biggest juggernaut of online sales, Amazon, has in the past two years opened a physical store, one that in many ways harkens back to traditional shopping but is, in other ways, a step closer to that coveted grail of retail: tracking everything about a consumer.

In its new Amazon Go stores, cameras decorate every corner and machine learning plays the field, tracking each shopper's ID (though Amazon insists without facial recognition) and their movements through the store, then linking this to their online presence, creating perhaps the greatest concentration of insight into a consumer walking through a store that the world has ever seen.

This newfound power, transcending the physical and online shopping experience, is without doubt a marvelous engineering feat, built on hundreds of millions of dollars of R&D and sophisticated matching algorithms that detect reluctance in consumers and nudge them with coupons both offline and online.

This power, however, will also shift the paradigm for privacy in the real world. Most consumers expect their activities online and their interactions in the real world to stay, for the most part, separate. This shift in the way commerce can be done means that the physical-online wall has all but evaporated, abstracted away into ML models.

Under an ethical framework that treats shoppers as subjects of machine learning experimentation, I think Amazon Go stores are minefields for unethical manipulation of consumers. Though Amazon has made off-the-cuff promises about what AI technology is "currently" allowed to operate in the store (such as no face detection), these assurances should not be reassuring: they are, in truth, subject to change contingent solely on Amazon's bottom line and engineering prowess. Consumers, simply by walking into the store, are forced into a game played by algorithms whose purpose is to maximize sales. It is already dubious when this happens online. It should be exponentially more concerning when this manipulation enters our physical world too.

In conclusion, Amazon Go stores, which track nearly every aspect of the consumer, degrade the inherent privacy wall between real life and online interaction. This is problematic because the subjects of this incursion, the consumers, are unwitting; customers don't necessarily consent when they walk into a store. Placing limits on artificial intelligence and its manipulation of our physical interactions with stores is critical to protecting consumers from an otherwise predatory retail practice.

Is privacy a luxury good?
By Keenan Szulik | February 7, 2020

As one of the largest companies in the world (now valued at over $1 trillion), Apple Inc. has masterfully crafted its products and its brand into an international powerhouse. In fact, in 2002, almost 20 years ago, Wired Magazine declared "Apple: It's All About the Brand," detailing how Apple's brand and marketing had driven the continued rise of the company.

Industry experts used words like “humanistic”, “innovation”, and “imagination” to describe Apple’s brand in 2002. But now, in 2020, there’s a new word that comes to mind: luxury.

In many ways, Apple has been slowly moving itself into the luxury goods market for years: first with the Apple Watch in 2015, its first foray into wearable devices. Then, into jewelry, with the introduction of AirPods in 2016. More recently, with the iPhone X and its hefty price tag of $999. And to really cement itself as a luxury brand, Apple released its most expensive computer yet in late 2019. The price? Over $50,000.


Apple's new Mac Pro, priced at over $50,000 when fully loaded with all possible customizations. (Source: https://qz.com/1765449/the-apple-mac-pro-can-cost-over-50000/)

This (luxury) branding is important, especially as Apple continues its competitive war against Google’s Android. Android devices, unlike Apple’s iOS devices, are incredibly price accessible. They do not cost $999, like a new iPhone X.

Android's price accessibility has enabled it to become the global smartphone market share leader, contrary to what some American readers may think. In fact, according to estimates from IDC (https://www.idc.com/promo/smartphone-market-share/os), Android phones represent nearly 87% of smartphones globally, compared to Apple's 13%.

Apple’s iOS dominates North America, but lags across the rest of the globe, per DeviceAtlas.

How has Apple responded in the face of competition and proliferation from Android? Privacy.

For over a decade, Google—the maker of Android smartphones—has been mired in privacy concerns. Most recently, a questionable handling of personally identifiable information in advertising and an Amnesty International report detailing “total contempt for Android users’ privacy” entrenched the stereotype that Google did not respect the privacy of its users.

Apple pounced on this opportunity, launching a series of ads titled “Privacy on iPhone” with the slogan “Privacy. That’s iPhone.” Two such ads, from 2019, now have over 25 million views each on YouTube.

This is where it gets interesting: by leveraging privacy as a competitive advantage, Apple associates privacy with its luxury brand. Apple customers deserve and receive privacy; the rest, not so much. This assertion is a subtle one, but it’s absolutely critical: Apple is effectively telling consumers that privacy is a luxury good.

There are two resultant questions from this:

1. Is privacy a luxury good, or a fundamental human right? (Let’s ignore, for the time being, defining “privacy”.)

2. Technically, how would Apple achieve this privacy in ways that its competitors would not?

The ethics of the human right to privacy is a fascinating debate for the 21st century. Smartphones are now nearly ubiquitous, and the meaning of privacy has changed dramatically over the last decade (as it almost certainly changed dramatically over every prior decade, thanks to technological innovation). But it’s worth noting: there are many technical tradeoffs when engineering with privacy as a goal.

Apple, for one, has taken many steps to engineer privacy into its products by creating “on-device intelligence” systems (which it has also effectively marketed). This means that rather than taking data and sending it back to Apple’s servers to be processed, the data can be processed on your phone, which you own and control. Google has also taken steps to achieve this on-device intelligence, but has communicated its benefits less effectively to consumers.
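As a rough conceptual sketch (hypothetical function names, not any vendor's real API), the privacy difference comes down to what leaves the phone:

```python
# Conceptual sketch only (hypothetical functions, not Apple's or Google's real
# APIs): the privacy difference between server-side and on-device processing.

from typing import List

messages: List[str] = ["meet at 6?", "running late", "meet tomorrow instead"]

def suggest_reply_server_side(raw: List[str]) -> str:
    """Server-side path: the raw messages are uploaded and processed remotely."""
    payload = {"messages": raw}          # everything leaves the phone
    # upload(payload) ... the provider now holds the raw text
    return f"uploaded {len(payload['messages'])} messages; reply computed remotely"

def suggest_reply_on_device(raw: List[str]) -> str:
    """On-device path: a small local model runs on the phone; raw text never leaves."""
    return "Sounds good!" if any("meet" in m for m in raw) else "OK"

print(suggest_reply_server_side(messages))
print(suggest_reply_on_device(messages))
```

Both paths deliver the same feature to the user; only the on-device path keeps the raw content off the provider's servers.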

Building these on-device intelligence systems, however, is expensive. Privacy, in turn, is expensive. And Apple uses this, in part, to justify the high price tag on its iPhones (further asserting privacy as a luxury good).

All of this is to say that we’re in a trying time. As brands such as Apple and Google introduce privacy as a point of competition, we as consumers feel the impact of their choices. This could have a positive effect: Apple and Google could enter a privacy war, raising the privacy standards in a way that positively benefits all consumers. Or it could deepen divides, with privacy becoming a luxury good afforded to the rich and powerful, and revoked from those with less.

Parallel Reality: Fixated on a Pixelated World
By Michael Steckler | January 24, 2020

While augmented and virtual reality continue to receive increasing media coverage and popularity in both tech and gaming circles, new strides in parallel reality technology deserve to garner similar focus, if not more. Parallel Reality Displays, a technology developed by Redmond, Washington-based startup Misapplied Sciences, are enabled by a new pixel that can simultaneously project up to millions of light rays of different colors and brightness. Each ray can then be software-directed to a specific person. It is a mind-bending innovation that allows a hundred or more viewers to simultaneously share a digital display, sign, or light while each sees something different. Misapplied Sciences has partnered with Delta Airlines to test the technology in a beta experience at Detroit Metropolitan Airport later in 2020. The idea is that different travelers could see their own personalized flight information on the same screen at the same time.
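To get an intuition for how one display could serve different viewers at once, here is a toy sketch (hypothetical data and logic, not Misapplied Sciences' actual system): the display estimates each viewer's direction and emits that viewer's content only along the rays pointed their way.

```python
# Toy illustration only: hypothetical data, not Misapplied Sciences' system.
# The core idea is that one display computes a different message per viewing
# direction, so each tracked viewer sees only the content aimed at them.

import math

# Where the display thinks each viewer is standing (x, y in meters)
viewers = {"traveler_17": (3.0, 4.0), "traveler_42": (-2.0, 5.0)}

# Content assigned to each viewer
content = {"traveler_17": "Flight DL 202 - Gate A12",
           "traveler_42": "Flight DL 880 - Gate B3"}

def direction_from_display(position):
    """Angle (in degrees) from the display at the origin toward a viewer."""
    x, y = position
    return math.degrees(math.atan2(y, x))

# The display emits a different message along each viewer's direction.
for viewer, position in viewers.items():
    angle = direction_from_display(position)
    print(f"rays toward {angle:5.1f} deg -> {content[viewer]!r}")
```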

The airport scenario is just one of many applications of the technology. Parallel reality offers personalized experiences to people looking at the same screen. As such, its privacy implications are rich and distinct from those of more traditional virtual environments, such as online news forums and social network platforms. Those arenas are threatened by issues such as fake news, targeted and native advertising, and online surveillance. Parallel reality suffers from these same threats, and more. Since parallel reality customizes people's experiences in the physical world, the technology relies on physical surveillance and poses Orwellian threats accordingly. Moreover, different information can be spread to different people in alarming ways. This has the potential to further solidify the disparities between people with different politics, values, interests, and beliefs. For example, the isolation of "bubbles" between Democrats and Republicans could be exacerbated and evolve into something worthy of a Black Mirror episode.

Aware of these privacy concerns, Misapplied Sciences has advocated for opt-in advertising, as well as anonymized tracking of an individual’s physical location. The upside of this technology’s potential does not have to be consumed by its potential dangers. Misapplied Sciences may have designed their technology with a privacy-first mindset, but will other companies exercise similar due diligence? In an age marked by growing levels of digital distrust, advocacy groups and lawmakers alike should begin brainstorming appropriate regulations to mitigate the risks associated with parallel reality.

AI innovation or exploitation: Uber's rideshare digital economy
(Musings of a private citizen)
By Anonymous | December 6, 2019

Taxicabs have been around for decades, regulated and controlled by city governments and other local transportation authorities across the world in some combination of the following ways: drivers have to apply for a taxicab license or permit with a city or state agency, they need to have a good driving record, and they are often governed by local rules around fares, the rights they must afford their customers, and when and how much they can charge for their service during the day, at night, in busy traffic, at airports, and so on.

Behind innovative ideas like Uber or Lyft is a very principled notion of sharing, a basic human trait each of us learns around the time we learn to walk: share your food, your toys, your books, your pencils, and then as adults, share your space and sometimes your possessions for the greater good, with benefits like reducing waste and traffic through carpools and friends driving friends.
There are limits to human sharing, though: it does not scale. Not all friendships, neighbors, and colleagues are created equal; we do not all have the same possessions, so our ability or willingness to share varies, reciprocity becomes uneven, and the result is broken hearts, spoiled relationships, and a system of carpooling that does not scale. Thus enter ridesharing services like Uber.

Due to income disparity, a section of the community that has been left behind finds a way to earn an income as drivers, supplementing their earnings during times of hardship, while in job training, between jobs, or when they are not suitably qualified or experienced for available local jobs.

This innovation creates hundreds of thousands of "gig economy" jobs and causes new rivers of income to flow from the pockets of the haves to the have-nots. That in turn increases access to resources like child care and education, including college, and fills the gaps that public transportation leaves in cities like Chicago, New York, New Delhi, San Francisco, Mumbai, Calcutta, and London, where numerous layers of local public transit have existed for decades alongside taxicab drivers.

There are some visible evils of this sharing economy, which is innovative yet exploitative by design. While taxicab drivers are regulated and have some rights, on Uber the drivers and riders are all "users," and drivers cannot expect employee benefits, a sticking point for those who drive 40 or more hours a week. The appeal of a gig job is far greater, and its fares are hard for regulated taxicab drivers to compete with. Another major problem is discrimination against marginalized sections of society, both as riders and as drivers: inherent biases result in drivers getting lower fares and lower ratings, while riders get lower ratings and often have to wait longer to get their rides.

The algorithm cannot repair the biases of society; Uber amplifies them. Finally, Uber users have to put up with a direct invasion of privacy. The app continues to track riders after their ride has ended, resulting in the collection, processing, and use of private data that should never have been collected.

It has been known for some time that Uber has poor internal privacy safeguards, and the data it collects can be used for internal "R&D" projects in which data scientists have been found using rider data, while the user privacy policy remains devoid of a proper disclosure of these research objectives and how they may affect the user community.

While Uber is a technology platform, it has a powerful ability to manipulate the market. Using AI and reinforcement learning to test how high a price a rider will tolerate and how low a payout a driver will accept, Uber pockets the margin between the two as profit.
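A simplified, hypothetical illustration of that dynamic (not Uber's actual pricing algorithm) might look like this: the platform quotes the rider a price near their estimated tolerance, offers the driver a payout near their acceptance floor, and keeps the difference.

```python
# Hypothetical sketch of the margin dynamic described above; the parameters
# and functions are illustrative assumptions, not Uber's real algorithm.

def rider_quote(base_fare: float, demand_multiplier: float, tolerance: float) -> float:
    """Price the rider is asked to pay, pushed up toward their estimated tolerance."""
    return min(base_fare * demand_multiplier, base_fare * tolerance)

def driver_payout(base_fare: float, acceptance_floor: float) -> float:
    """Payout offered to the driver, pushed down toward the lowest they will accept."""
    return base_fare * acceptance_floor

base_fare = 20.00
quote = rider_quote(base_fare, demand_multiplier=1.6, tolerance=1.4)   # rider pays 28.00
payout = driver_payout(base_fare, acceptance_floor=0.75)               # driver gets 15.00
print(f"rider pays {quote:.2f}, driver receives {payout:.2f}, "
      f"platform margin {quote - payout:.2f}")
```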

Uber is effectively charging a variable "user fee" based on the value of the transaction and on customers' willingness to let go of their bottom line in the interest of convenience and a sharing mindset. It is doing this while capturing, in broad daylight (and at night), far more data than it needs from all of its customers, blurring the line between innovation and exploitation.

Coupon Browser Extensions: Sweet Deals For A Not So Sweet Price?
By Keane Johnson | December 6, 2019

With Thanksgiving come and gone, the holiday season is in full swing, meaning most Americans are turning their attention to holiday shopping. This year is predicted to be another record-breaking season, with total consumer sales forecasted to grow 3.8% over 2018 and exceed $1 trillion for the first time in history [1].

Although brick-and-mortar stores continue to account for the vast majority of consumer spending, online sales are forecasted to increase 13.2% to $135.5 billion, or 13.4% of total holiday shopping [1].

This growth is not limited to the holiday season. Online shopping in the United States has grown from 7.1% of total sales in 2015 to 8.9% of total sales in 2018 [2]. This increase in online shopping has motivated the creation of multiple sites and plugins that deliver discount codes or coupons to consumers. These plugins automatically process what is in a consumer’s online shopping cart, search the internet for available codes or coupons, and apply the best one at checkout.
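Conceptually, the core loop of such an extension is simple. The sketch below is illustrative only (toy codes and hypothetical discount rules, not RetailMeNot's or Honey's actual code): try every known code against the cart and apply whichever saves the most.

```python
# Illustrative sketch of the checkout flow described above; data and function
# names are hypothetical, not any real extension's implementation.

cart_total = 120.00

# Candidate codes the extension found for this merchant, with their discount rules
candidate_codes = {
    "SAVE10": lambda total: total * 0.10,                               # 10% off
    "FREESHIP": lambda total: 7.99,                                     # flat shipping credit
    "HOLIDAY25": lambda total: total * 0.25 if total >= 100 else 0.0,   # 25% off orders of $100+
}

def best_code(total, codes):
    """Return the (code, savings) pair that saves the shopper the most."""
    savings = {code: rule(total) for code, rule in codes.items()}
    code = max(savings, key=savings.get)
    return code, savings[code]

code, saved = best_code(cart_total, candidate_codes)
print(f"Applying {code}: you save ${saved:.2f}")   # Applying HOLIDAY25: you save $30.00
```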

One of the pioneers in this space is RetailMeNot. The original RetailMeNot service aggregated coupon and discount codes for a wide variety of companies on its website. Consumers would then go to the site, copy the coupon code, and apply the code to their carts at checkout. In 2018, RetailMeNot observed over 453 million site visits and facilitated global sales of $4.9 billion [3].

Two years ago, RetailMeNot released a browser extension – RetailMeNot Genie – that applies discounts and cash-back offers directly to a consumer’s cart at checkout. The plug-in is 100% free, meaning that those savings come at no monetary cost to the consumer.

However, as the saying goes, if you are not buying the product, you are the product. An examination of RetailMeNot's Privacy Policy and its use of customer data raises some serious ethical concerns. RetailMeNot collects, either online or through a mobile device, consumers' contact information (email, phone number, and address), "relationship information" (lifestyle, preferences, and interests), "transaction information" (kinds of coupons that are redeemed), location information (a consumer's proximity to a merchant), and "analytics information" (information about a user's mobile device, including applications used, web pages browsed, battery level, and more).

RetailMeNot uses this information for a variety of purposes, including: creating user profiles that may infer age range, income range, gender, and interests; inferring the location of places users visit often; providing notifications when users arrive at, linger near, or leave these places; and providing advertisements through display, email, text, and mobile-push notifications [4]. Additionally, RetailMeNot may allow third parties to track and collect this data for their own purposes [4].

RetailMeNot may also share personal information “to effect a merger, acquisition, or otherwise; to support the sale or transfer of business assets” [4]. This final clause is the most troubling because it gives RetailMeNot leeway to sell its users’ personal information to support its business. And so although there is no upfront monetary cost to using RetailMeNot, users could end up paying in the background with their personal data.

However, alternative deal-hunting services are becoming more and more available. One of the fastest-growing is Honey, which, like RetailMeNot's Genie, is a browser extension that automatically applies coupon codes and discounts. Honey is transparent about the type of data it collects and is upfront about never selling its users' personal information [5]. However, it may share data "with a buyer or successor if Honey is involved in a merger, acquisition, or similar corporate transaction" [5]. Its recent acquisition by PayPal [6] may mean that its users' personal information is now in the hands of one of the largest online payment systems in the world. And Honey may no longer have control over how this information is used.

In summary, free deal-hunting and coupon-finding extensions are becoming more popular because they offer consumers an easy way to save money. But this free money may be too good to be true. An inspection of the privacy policies of a couple of popular services shows that consumers may end up paying with their sensitive personal information.

Content References

[1] https://www.businessinsider.com/emarketer-2019-holiday-shopping-forecast-2019-11

[2] https://www.invespcro.com/blog/global-online-retail-spending-statistics-and-trends/

[3] https://www.retailmenot.com/corp/about/

[4] https://www.retailmenot.com/static/privacy/

[5] https://www.joinhoney.com/privacy

[6] https://techcrunch.com/2019/11/20/paypal-to-acquire-shopping-and-rewards-platform-honey-for-4-billion/


ICE – License Plate tapping & Immigration control – Privacy and ethical concerns
By Dasa Ponnappan | December 6, 2019

Background

With the Trump era of stricter immigration enforcement starting in 2016, ICE faced the daunting task of delivering on campaign promises to detain and deport illegal immigrants. What followed was an unprecedented level of policy design and execution to meet that goal. In this blog post, we look at the historical context, the means through which ICE employed consultants and technology to achieve it, and how a once-abandoned license plate tracking program became a mainstream ICE tool for targeting, detaining, and deporting illegal immigrants while overlooking ethical and privacy concerns.

ICE Immigration crackdown

Starting in 2017, under the Trump administration's crackdown on illegal immigration, ICE was entrusted with the daunting task of using "all legal means" to stop and deport illegal immigrants across the country. That included a massive ramp-up of a 10,000-strong force to handle detention and deportation. With McKinsey as its management consultants, ICE devised massive recruitment drives for officers in gyms and means of deporting illegal immigrants to border cities with little provision for safety or medical needs.

The technological means

ICE began deploying tracking tools to apprehend and deport illegal immigrants. To do so, it adopted license plate tracking, a policy DHS had ruled out in 2014 after severe backlash over privacy concerns. License plate tracking records every vehicle passing a given point, which amounts to tracking people's movements across the country and provides unprecedented insight into their lives. The database offering this comprehensive view of license plates belonged to a private entity; ICE tapped into it to track, detain, and deport illegal immigrants.

Privacy and ethical concerns

Given the nature of the data collection and the absence of consent, the breach of Solove's conceptions of privacy, anti-totalitarianism and the right to be let alone among them, is very evident. Beyond that, the vulnerability of license tracking data to exposure could lead to the stalking of individuals and put them in harm's way. It also serves as a stepping stone toward the state controlling individuals' lives under the umbrella of national security. Despite the privacy and ethical concerns, an ICE spokesperson argued in the program's favor by citing year-long training for staff on the protection and ethical use of license plate data. Given that this data is retained for years and is collected through a for-profit organization, it is tough to justify such means of tracking as a way to control illegal immigration.

Alternative Data Sources in Investment Management Firms
By Peter Yi Wang | December 6, 2019

Investment management firms have turned to alternative data as a way to gain an information advantage over their peers. Industry spending on alternative data by investment firms such as mutual funds, hedge funds, pension funds, and private-equity firms will jump from $232 million in 2016 to a projected $1.1 billion in 2019 and $1.7 billion next year, according to AlternativeData.org, an industry trade group supported by data provider YipitData. There are currently hundreds of alternative data providers across the globe, with a heavy concentration in the United States.

In recent years, investment management firms, particularly hedge funds, which are highly focused on time-sensitive information, have pioneered innovative ways of tracking information. For example, some hedge funds may send drones over lumberyards to gauge the stockpile of lumber and inform a bet to short lumber prices. Other hedge funds may retrieve satellite images of retail store parking lots to gauge the performance of department stores. Still others track companies' online job postings to decipher their growth trajectories.

While all these methods give hedge funds a particular edge over their peers, they also raise questions about the privacy rights of the subjects of this tracking. Do lumberyard owners worry about drones flying over their yards? Are drivers concerned about satellites taking pictures of their parked cars? Does the management of the companies in question care that their online job postings are being scraped?

The most urgent issue facing the alternative data industry, and the investment management firms that use alternative data, is the lack of a global best-practices standard.

In October 2017, the Investment Data Standards Organization (IDSO) was formed in the United States to support the growth of the alternative data industry through the creation and promotion of industry standards and best practices. This non-governmental organization is focused on three main products: 1) personally identifiable information (PII); 2) web crawling; and 3) dataset compliance for sensitive information (SI).

There are four main areas which can be materially enhanced from a privacy perspective:

  1. Consent: Data subjects, whether individuals (or websites containing individual information) or businesses, need to consent directly or indirectly to the data collection process.
  2. Storage and security: Alternative data storage should have a regulatory time limit, similar to call transcripts and trading records under securities regulations in many countries, ensuring that personally identifiable information is deleted within a set period. Data subjects should also retain the right to have their personal data deleted upon request.
  3. Secondary use: Secondary use of alternative data should be strictly monitored or prohibited, given the unfair distribution of costs and benefits.
  4. Confidentiality: Personally identifiable information should be kept confidential at all times, and data subjects should have an opt-out option to exclude their information from alternative data sets (a rough sketch of how redaction and retention limits might look in practice follows this list).
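As a minimal sketch of how recommendations 2 and 4 might be operationalized on a scraped record (field names and the retention window are assumptions, not an IDSO specification):

```python
# Hypothetical sketch of retention limits and PII redaction for an
# alternative-data record; field names and the window are assumptions.

import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)          # assumed regulatory retention window
PII_FIELDS = {"name", "email", "phone"}  # fields treated as personally identifiable

def redact(record: dict) -> dict:
    """Replace PII fields with one-way hashes so records can be linked but not read."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned

def expired(record: dict, now: datetime) -> bool:
    """True if the record is older than the retention window and should be deleted."""
    collected = datetime.fromisoformat(record["collected_at"])
    return now - collected > RETENTION

record = {"name": "Jane Doe", "email": "jane@example.com",
          "job_posting": "Senior ML Engineer",
          "collected_at": "2019-01-15T00:00:00+00:00"}

now = datetime(2019, 12, 6, tzinfo=timezone.utc)
if expired(record, now):
    print("record past retention window: delete")
else:
    print(redact(record))
```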

Given that alternative data is a global and rapidly expanding phenomenon, a global standards organization similar to IDSO should be formed to address the four recommendations listed above. Without a proper global standard, the alternative data industry, and the investment management industry that utilizes it, may continue to breach the privacy boundaries of data subjects. Urgent privacy protection actions are needed in the alternative data industry.

Robo Cop, Meet Robo Lawyer – Using AI to Understand EULAs
By Manpreet Khural | December 6, 2019

No one reads EULAs. Everyone is concerned about their online privacy. Perhaps a bit of hyperbole, but it is true nonetheless that End User License Agreements (EULAs) are overlooked documents that users scroll through as quickly as they can, searching for the "I Agree" button. If they do take the time to read them, they often find the language difficult to decipher and may not have the prerequisite knowledge to identify concerns.

Do Not Sign, a recent tool released by DoNotPay, claims to parse the legal language of EULAs and identify Warnings and potential Loopholes for the user to review before agreeing to the terms of the document. The tool can even, on the user's behalf, send letters addressing the issues to the company behind the EULA. DoNotPay began its journey to this tool by first helping its users contest parking tickets successfully, cancel subscriptions, and even sue companies in small claims court. With its new tool, it seeks to create a new landscape in which consumers can protect their privacy and contest problematic and abusive EULAs.
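To give a sense of what such parsing might involve, here is a deliberately simple sketch (the patterns and warnings are made up for illustration; DoNotPay has not published its method in this detail): scan the agreement sentence by sentence and surface clauses that match known red-flag patterns.

```python
# Toy illustration only: not DoNotPay's actual method, just a sketch of the
# general idea of scanning EULA text and surfacing clauses worth a warning.

import re

# Hypothetical patterns that often signal consumer-unfriendly terms
WARNING_PATTERNS = {
    r"binding arbitration": "You may be waiving your right to sue in court.",
    r"class action waiver": "You may be waiving your right to join a class action.",
    r"sell your (personal )?(data|information)": "Your personal data may be sold.",
    r"third[- ]party": "Data may be shared with third parties.",
}

def flag_clauses(eula_text: str):
    """Return (sentence, warning) pairs for clauses matching a red-flag pattern."""
    findings = []
    for sentence in re.split(r"(?<=[.!?])\s+", eula_text):
        for pattern, warning in WARNING_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                findings.append((sentence.strip(), warning))
    return findings

sample = ("We may share information with third-party partners. "
          "Any dispute will be resolved by binding arbitration.")
for clause, warning in flag_clauses(sample):
    print(f"WARNING: {warning}\n  clause: {clause}")
```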

Is it not ironic, however, that we would use a technology product to protect ourselves from mainly technology-centric EULAs? Before we hail this as a solution to a modern problem, we must ask what capability it has to inflict harm. According to its developers and journalists who have tried it, Do Not Sign is designed to rarely, if ever, produce false positives: Warnings where an underlying problem does not actually exist. It can, however, miss some problematic terms of an agreement; in particular, it often misses terms related to online tracking. If users begin to use and trust this tool, they may feel more protected than when they had to read a EULA themselves. This could provide a false sense of security, a convenience that overlooks an important section of terms and leads to an agreement being made when it should not have been. There does exist a potential for harm.

Is that enough of a roadblock to stay away from Do Not Sign? Likely the answer is no. Users are not able to read EULAs with the level of scrutiny that this tool can. Overall it gives them the ability to make a more informed decision in the face of legally or technologically opaque terms. A simplification is more than welcome. One of the goals of the tool is to provide users with negotiating power. As more people use it to understand EULAs and subsequently reject questionable ones, the companies behind the agreements may open up to letting users pick and choose terms, or at least provide some kind of feedback. This empowers people to have a voice in the matter of their privacy, particularly in the digital sphere, and may spark a greater interest in consumer protections and create a better framework of principles for constructing EULAs.

Overall, Do Not Sign helps users understand an environment foreign to most of them. While there are concerns about overreliance on the tool or the tool missing critical red flags in documents, the benefits of having something like this widely available far outweigh the hurdles. As people who deal with such privacy-related agreements regularly, we should support this tool so that the masses can begin to protect themselves.

References
1. https://www.theverge.com/2019/11/20/20973830/robot-lawyer-donotpay-ai-startup-license-agreements-sign-arbitration-clauses
2. https://www.techspot.com/news/82859-donotpay-ai-now-offers-advice-license-agreements-before.html
3. https://abovethelaw.com/legal-innovation-center/2018/10/12/donotpay-is-the-latest-legal-tech-darling-but-some-are-saying-do-not-click/?rf=1

When humans lose from “AI Snake Oil”
By Joe Butcher | December 6, 2019

No matter where you work or live, you don’t have to go far to hear someone talk about the benefits of AI (yes, that’s Artificial Intelligence I’m referring to). What do I mean exactly by AI? Well, for the most part machine learning (ML for all you acronym lovers), but that’s a topic for another blog post. While everyone loves to talk about creating AI tools and systems, one can argue we aren’t talking enough about the human lives impacted by AI-aided decisions.

Although I am two years into UC Berkeley's Master of Information and Data Science (MIDS) program, I realize I have far more to learn about AI and data science. In fact, the more I learn, the more I realize I don't know. What's frightening to me is the number of AI-based companies being created and funded whose products influence decisions with a real impact on human lives. It can be challenging and time-consuming even for data-science-trained professionals to assess the validity of the tools these companies are creating, not to mention for the less data-science-savvy individuals who make up the majority of the workforce.

Arvind Narayanan, an Associate Professor of Computer Science at Princeton, recently gave a talk at MIT titled "How to recognize AI snake oil". In the presentation, Professor Narayanan articulates the contrast between the hope that AI can be successfully applied to certain domains and the reality of its effectiveness (or lack thereof). He goes on to discuss domains where AI is making real, genuine progress and domains where AI is not only "fundamentally dubious" but also raises ethical concerns due to its inaccuracy. Furthermore, he claims that for predicting social outcomes, AI is no better than manual scoring using a few features.

While none of this is likely shocking to anyone in the field, it does raise the question of what is being done to protect society from negative consequences. With policy and regulations struggling to keep up with the pace of technological advancement, some have argued that self-regulation will be enough to combat the likes of "AI snake oil". Neither seems to be progressing fast enough to protect people from poor decisions made by algorithms. Moreover, political turbulence (both in the U.S. and around the world) and the potential for economic disruption across industries leave most people feeling both uneasy and hopeless.

Ethical frameworks and regulations are proven ways to protect humans from harm. While the current situation is a daunting one, the data science community should challenge itself to stay committed to work grounded in values and ethics. While it can be tempting to reap the economic benefits of developing solutions that customers are willing to pay for, it is critical that we understand whether our solutions follow the basic principles from the Belmont Report that we started this class with:

  • Respect for persons: Respect people's autonomy and avoid deception
  • Beneficence: “Do no harm”
  • Justice: Fairly administer procedures and solutions

We can’t control everything that happens in the crazy world out there, but we can control how we apply our newly acquired data science toolkit. We should all choose wisely.

References
[1] Narayanan, A. “How to recognize AI snake oil”. https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
[2] Sagar, R. “The Snake Oil Merchants of AI: Princeton Professor Deflates the Hype.” https://analyticsindiamag.com/ai-hype-algorithms-bias-princeton-professor-talk-mi/
[3] “Belmont Report” https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html