
Hiding in Plain Sight: A Tutorial on Obfuscation
by Andrew Mamroth | 14 October 2018

In 2010, the Federal Trade Commission pledged to give internet users the power to determine if or when websites were allowed to track their behavior: the so-called “Do Not Track” initiative. Now in 2018, that project has been sidelined and widely ignored by content providers and data-analytics platforms, and users have been left to the wolves. So if you want to avoid these trackers, or at the very least disincentivize their use by rendering them useless, what options do you have?

Imagine the orb-weaver spider. This animal builds similar-looking decoys of itself in its web so that wasps cannot reliably strike it: it introduces noise to hide the signal. The same idea underlies many of the obfuscation tools used to nullify online trackers today. By burying your actual request or intentions under a pile of noise-making signals, you get the answer you want while the tracking services are left with an unintelligible mess of information.

Here we’ll cover a few of the most common obfuscation tools used today to add to your digital camouflage, so you can start hiding in plain sight. All of these are browser plug-ins that work with most modern browsers (Firefox, Chrome, etc.).


[Image: camo spider]

TrackMeNot

TrackMeNot is a lightweight browser extension that helps protect web searchers from surveillance and data-profiling by search engines. It accomplishes this by issuing randomized search queries to various online search engines such as Google, Yahoo!, and Bing. By hiding your queries under a deluge of noise, it becomes nearly impossible, or at the very least impractical, to aggregate your searches into a profile of you.
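How might such a tool work? The sketch below captures the core mechanism in Python: fire plausible decoy queries at irregular intervals toward the same engines you actually use. It is only an illustration of the idea, not TrackMeNot’s actual code; the seed word list, engine URLs, and timing parameters are all invented for the example.

```python
# Sketch of TrackMeNot-style query obfuscation: hide real searches
# under a stream of randomized decoy queries. Illustrative only --
# the word list, engines, and timing below are hypothetical.
import random
import time
import urllib.parse
import urllib.request

SEED_WORDS = ["weather", "recipes", "history", "gardening", "economics",
              "travel", "astronomy", "soccer", "novels", "photography"]

SEARCH_ENGINES = [
    "https://www.google.com/search?q={}",
    "https://search.yahoo.com/search?p={}",
    "https://www.bing.com/search?q={}",
]

def decoy_query() -> str:
    """Build a plausible-looking query from one to three random seed words."""
    words = random.sample(SEED_WORDS, k=random.randint(1, 3))
    return urllib.parse.quote_plus(" ".join(words))

def send_decoys(n: int = 5) -> None:
    """Issue n decoy searches at irregular, human-looking intervals."""
    for _ in range(n):
        url = random.choice(SEARCH_ENGINES).format(decoy_query())
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        try:
            urllib.request.urlopen(req, timeout=10).read()
        except OSError:
            pass  # a failed decoy is harmless; just move on
        time.sleep(random.uniform(2, 30))  # irregular timing looks less robotic

if __name__ == "__main__":
    send_decoys()
```

Real search engines each use their own query-URL formats and may throttle automated traffic; the actual extension also seeds its vocabulary from RSS feeds and evolves its queries over time to stay believable.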

TrackMeNot was designed in response to the U.S. Department of Justice’s request for Google’s search logs, and in response to the surprising discovery by a New York Times reporter that identities and profiles could be inferred even from anonymized search logs published by AOL Inc. (Brunton & Nissenbaum, 2016, p. 13).

The hope was to protect people under criminal investigation from having their search histories seized by government or state entities and used against them. Under the Patriot Act, the government can demand library records via a secret court order, without probable cause that the information is related to a suspected terrorist plot, and it can block the librarian from revealing that request to anyone. Moreover, “records” covers not only the books you check out but also search histories and hard drives from library computers. Introducing noise into your search history makes snooping in these logs very unreliable.

AdNauseam

AdNauseam is a tool that thwarts online advertisers’ attempts to collect meaningful information about you. Many content providers make money by the click, so by clicking on everything, you generate a flood of noise without leaving a meaningful footprint.

AdNauseam quietly clicks on every blocked ad, registering a visit in the ad networks’ databases. As the collected data shows an omnivorous click-stream, user tracking, targeting, and surveillance become futile.
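In essence, the extension issues the HTTP request a click would have generated, for every ad it blocks. Below is a toy sketch of that idea, with hypothetical placeholder URLs standing in for the ad links the real extension harvests from the page as they are blocked:

```python
# Toy sketch of AdNauseam's core idea: for every ad the blocker hides,
# quietly issue the click request anyway, polluting the ad network's
# logs with indiscriminate clicks. URLs below are placeholders.
import urllib.request

def click_silently(ad_url: str) -> None:
    """Register a 'click' with the ad network without showing the user anything."""
    req = urllib.request.Request(ad_url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        urllib.request.urlopen(req, timeout=10).read()
    except OSError:
        pass  # failures don't matter; the request itself is the point

blocked_ad_urls = [
    "https://ads.example.com/click?id=123",
    "https://tracker.example.net/imp?cid=456",
]

for url in blocked_ad_urls:
    click_silently(url)
```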

FaceCloak

FaceCloak is a tool that lets users participate in social media sites such as Facebook while supplying the platform itself with only fake information.

Users of social networking sites, such as Facebook or MySpace, need to trust a site to properly handle their personal information. Unfortunately, this trust is not always justified. FaceCloak is a Firefox extension that replaces your personal information with fake information before sending it to a social networking site. Your actual personal information is encrypted and stored securely somewhere else. Only friends explicitly authorized by you have access to this information, and FaceCloak transparently swaps the real information back in when your friends view your profile.
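The protocol is easy to outline: publish fake values to the site, encrypt the real values under a key shared only with trusted friends, and keep the ciphertext on a separate store. Here is a minimal sketch of that flow using the third-party cryptography package; the field names and values are invented, and FaceCloak’s real storage and key-distribution machinery is more involved:

```python
# Sketch of the FaceCloak substitution scheme: the social network only
# ever sees fake data, while friends holding the shared key can decrypt
# the real profile from a third-party store. Illustrative values only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared out-of-band with trusted friends
cipher = Fernet(key)

real_profile = {"birthday": "1990-04-12", "hometown": "Ann Arbor"}
fake_profile = {"birthday": "1985-01-01", "hometown": "Springfield"}

# What the social networking site stores: only the fake values.
submitted_to_site = fake_profile

# What the separate third-party store holds: ciphertext of the real values.
encrypted_store = {k: cipher.encrypt(v.encode()) for k, v in real_profile.items()}

# A friend holding the key transparently recovers the real profile.
recovered = {k: cipher.decrypt(v).decode() for k, v in encrypted_store.items()}
assert recovered == real_profile
```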


[Image: FaceCloak]

Conclusion

Online privacy is becoming ever more important and ever more difficult to achieve. It has become increasingly clear that the government is either unable or unwilling to protect its citizens’ privacy in the digital age, and more often than not it is itself a major offender, using individuals’ personal information for dubious ends. Obfuscation tools are becoming ever more prevalent as companies quietly take our privacy with or without consent. Protecting your privacy online isn’t impossible, but it does take work; fortunately, the tools exist to take it back.

References

Brunton, Finn, and Helen Nissenbaum. Obfuscation: A User’s Guide for Privacy and Protest. MIT Press, 2016.


Shopping in the Digital Age – a Psychological Battle
by Krissy Gianforte | 14 October 2018

Imagine your last trip to Target. You entered the store with your shopping list in-hand, intending to buy “just a few things”. As you walked down the aisles, though, extra items caught your eye – a cute coffee mug, or a soft throw blanket that would definitely match your living room. And so began the battle of wits – you versus the store. Hopefully you stayed strong and focused, and ended the day with your budget at least partially intact.

Now instead of that coffee mug and blanket, imagine that Target had an aisle designed specifically for you, containing all of the items you’ve *almost* purchased but decided against. As you walk through the store, you are bombarded by that pair of shoes you always wanted, sitting on a shelf right next to a box set of your favorite movies. How can you resist?

As stores collect more and more personal data on their customers, they are able to create exactly that shopping experience. But is that really a fair strategy?


[Image: Just a few things…]

Classic economics

Stores have always used psychological tricks to get you to spend more money. Shopping malls omit clocks so you are never reminded that it is time to leave; fast-food restaurants offer “Extra Value Meals” that actually aren’t any less expensive than buying the items individually; and movie theatres use price framing to entice you into purchasing the medium popcorn even though it is more than you want to eat.


[Image: Really a value?]

All of these tactics are fairly well known – and shoppers often consciously recognize that they are being manipulated. There is still a sense that the shopper can ‘win’, though, and overcome the marketing ploys. After all, these offers are generic, designed to trick the “average person”. More careful, astute shoppers can surely avoid the traps. But that is changing in the age of big data…

An unfair advantage

In today’s digital world, stores collect an incredible amount of personal information about each and every customer: age, gender, purchase history, even how many pets you have at home. That deep knowledge allows stores to completely personalize their offerings for the individual – what an ex-CEO of Toys-R-Us called “marketing to the segment of one…the holy grail of consumer marketing” (Reuters, 2014). Suddenly, the usual psychological tricks seem a lot more sinister.

For example, consider the Amazon shopping site. As you check out, Amazon offers you a quick look at a list of suggested items, 100% personalized based on your purchase history and demographics. This is similar to the “impulse buy” racks of gum and sweets by the grocery store register, but much more powerful because it contains *exactly the items most likely to tempt you*.

Even familiar chains like Domino’s Pizza have begun using personal data to increase sales: the restaurant now offers a rewards program in which customers earn free pizza by logging in with each purchase. Each time the customer visits the Domino’s site, he is shown a progress bar toward his next free reward. This type of goal-setting is a well-recognized gamification technique designed to increase the frequency of purchases. Even further, the Domino’s site uses the customer’s purchase history to create an “Easy Meal”, which can be ordered with a single button-click. Ordering pizza is already tempting – even more so when it is made so effortless!
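To see why that progress bar is such an effective nudge, consider a toy sketch of the mechanic. The point values and reward threshold below are invented for illustration and are not Domino’s actual program terms:

```python
# Toy sketch of a goal-gradient loyalty mechanic: every order earns
# points, and the site surfaces progress toward the next free pizza.
# Threshold and point values are hypothetical.
REWARD_THRESHOLD = 60   # points needed for a free pizza (invented)
POINTS_PER_ORDER = 10   # points earned per qualifying order (invented)

def progress_bar(points: int, width: int = 20) -> str:
    """Render the kind of progress bar that nudges the next purchase."""
    filled = min(width, points * width // REWARD_THRESHOLD)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {points}/{REWARD_THRESHOLD}"

print(progress_bar(4 * POINTS_PER_ORDER))  # [#############-------] 40/60
```

The closer the bar gets to full, the stronger the pull to place “just one more” order – exactly the goal-setting effect described above.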


[Image: Personal Pizza]

But has it crossed a line?

Retailers have enough personal information to tempt you with *exactly* the product you find irresistible. The catch is, they also know how hard you have tried to resist it. As clearly as Amazon can see the products you’ve repeatedly viewed, they can see that you have *never actually purchased them*. Domino’s knows that you typically purchase pizza only on Fridays, with your weekend budget.

And yet, the personalized messages continue to come, pushing you to act on that extra indulgent urge and make that extra purchase. The offers are no longer as easy to resist as a generic popcorn deal or value meal. They pull at exactly the thing you’ve been resisting…and eventually overrule the conscious decision you’d previously made against the purchase.

Shoppers may begin to question whether personal data-based tactics are actually fair, with the balance of power shifting so heavily in favor of the seller. Can these psychological manipulations really be acceptable when they are *so effective*? To help determine whether some ethical line has truly been crossed, we can apply personal privacy analysis frameworks to this use of customers’ personal data.

Daniel Solove’s Taxonomy of Privacy provides a perfect starting point to help identify what harms may be occurring. In particular, these data-based marketing strategies may represent invasions – Intrusion on your personal, quiet shopping time, or Decisional Interference as you are coerced into buying products you don’t want or need. However, this case is not as clear as typical examples of decisional interference, where *clearly uninvolved* parties interfere (such as the government stepping into personal reproductive decisions). Here, the seller is already an integral part of the transaction environment – so perhaps they have a right to be involved in customers’ decisions.

Deirdre Mulligan’s analytic for mapping privacy helps define seller and customers’ involvement and rights even more concretely. Using the analytic’s terminology, the issue can be understood along multiple dimensions:

  • Dimensions of Protection: privacy protects the subject’s (customers’) target (decision-making space and peaceful shopping time)
  • Dimensions of Harm: the action (using personal data for manipulating purchases) is conducted by the offender (merchants)

Those definitions are straightforward; however, defining the Dimension of Theory becomes more difficult. Where we would hope to assign a clear and noble object that privacy provides – something as universal as dignity or personal freedom – in this case we find simply a desire not to be watched or prodded by ‘big brother’. Though an understandable sentiment, it does not provide sufficient justification – a motivation and basis for providing privacy. Any attempt to assign a from-whom – an actor against whom privacy is a protection – comes up somewhat empty: privacy here would protect consumers from the very merchants who collected the data, data those merchants rightfully own and may use for their own purposes, as agreed when the customer acknowledged a privacy policy.

Business as usual

As we are unable to pinpoint any actually unethical behavior from sellers, perhaps we must recognize that personalized marketing is simply the way of the digital age. Ads and offers will become more tempting, and spending will be made excessively easy. It is a dangerous environment, and as consumers we must be aware of these tactics and train ourselves to combat them. But in the end, shopping remains a contentious psychological battle between seller and shopper. May the strongest side win.


Hate Speech – How far should social media censorship go, and can there be unintended consequences?

By Anonymous, 10/14/2018

Today, there is a lot of harmful or false information being spread around the internet, among the most egregious of which is hate speech. Stopping hate speech from spreading across the vast expanse of the internet may seem a difficult task; however, some countries are taking matters into their own hands. In 2017, Germany passed a law banning hate speech, as well as defamatory “fake news”, on social media sites. Some social media sites have already begun censoring hate speech; for example, both Twitter and Facebook disallow it.

This raises the question: should there be social media censorship of hate speech, and could there be disparate impact on certain groups, particularly on protected groups? Some may argue that there shouldn’t be censorship at all; in the US, there is very little regulation of hate speech, since courts have repeatedly ruled that hate speech regulation violates the First Amendment. However, others may take the view that we should do all we can to stop language that could incite violence against others, particularly minorities. Furthermore, since social media companies are private platforms, they ultimately control the content allowed on their sites, and it makes sense for them to enhance their reputation by removing bad actors. Thus some social media companies have decided that, while it may not appease everyone, censoring hate speech is positive for their platform and will be enforced.

Could social media censoring of hate speech lead to unintended consequences that harm some groups or individuals more than others? Firstly, since hate speech is not always well defined, could the list of phrases treated as hate speech disproportionately affect certain groups of people? Certain phrases may not be offensive in some cultures or countries, but may be in others. How should social media sites determine what constitutes hate speech, and is there a risk that certain groups have their speech censored more than others? In addition, could the way hate speech is monitored be subject to bias from the reviewer, or to algorithmic bias?

Some social media sites do seem to recognize the complexity of censoring hate speech. Facebook has a detailed blog post discussing its approach; it explains that deciding whether a comment is hate speech includes consideration of both context and intent. The post even provides nuanced examples of words that may appear offensive but are not, because the phrase was made in sarcasm or because the word was used to reclaim it in a non-offensive manner. This is an important recognition in the censorship process; as with many issues in ethics, there is often no absolute right or wrong answer, with context being the major determinant, and this is no exception.

Facebook also notes that they “are a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.” Algorithmic bias is thus not an issue as of yet; more importantly, it is good that there is no rush to use algorithms here, since something as context-based as hate speech would be extremely difficult to flag correctly, and doing so would certainly lead to many false positives.

It does, however, mean that the identification of hate speech comes mainly from user reports and from the employees who review the content. This could introduce two additional forms of bias. Firstly, there could be bias in the types of posts that are reported, or in the people whose posts get reported. There is a particular concern that certain posts are not being reported because the people they target are too afraid to report them, or never see them. This is a difficult problem to address, although anonymous reporting should somewhat resolve the fear of reporting. The second form of bias is that the reviewers themselves may be biased in some way. While it is difficult to remove all bias, it is important to understand its potential sources and then address them; Facebook has pledged that its teams will continue learning about local context and changing language in an effort to do so. It is a difficult battle, and we must hope that the social media companies get it right; in the meantime, we should continue to verify that hate speech censorship censors just hate speech, and no more.

References:

BBC News, “Germany starts enforcing hate speech law”, retrieved October 13, 2018 from https://www.bbc.com/news/technology-42510868

Twitter, “Hateful Conduct Policy”, retrieved October 13, 2018 from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Facebook, “Community Standards: Hate Speech”, retrieved October 13, 2018 from https://www.facebook.com/communitystandards/hate_speech

Facebook Newsroom, “Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community?”, retrieved October 13, 2018 from https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/