Archive for December 4th, 2018

Potential Negative Consequences IoT Devices Could Have on Consumers
By Anonymous | December 4, 2018

IoT, or the Internet of Things, refers to devices that can collect and transmit data across the internet or to other devices. The number of internet-connected devices owned by consumers has grown rapidly. In the past, a typical person owned only a few IoT devices, such as desktops, laptops, routers and smartphones. Now, due to technological advances, many people also own televisions, video game consoles, smart watches (e.g. Fitbit, Apple Watch), digital assistants (e.g. Amazon Alexa, Google Home), cars, security systems, appliances, thermostats, locks and lights that all connect and transmit information over the internet.

While companies are constantly trying to find new ways to implement IoT capabilities into the lives of consumers, security seems to be taking a back seat. Therefore, with all of these new devices, it is important for consumers to remain aware of the personal information that is being collected, and to be informed of the potential negative consequences that could result from owning such devices. Here are four things you may want to be aware of:

1. Hackers could spy on you


I am sure you have heard stories of people who were spied on after the webcams on their laptops were hacked. Other devices have proven vulnerable as well: the Owlet, a wearable baby monitor, was found to be hackable, along with SecurView smart cameras. What if someone were able to access your Alexa? They could learn a lot about your personal life through recordings of your conversations. If someone were to hack your smart car, they would know where you are most of the time. Recently, researchers uncovered vulnerabilities in Dongguan Diqee vacuum cleaners that could allow attackers to listen in or conduct video surveillance.

2. Hackers could sell or use your personal information

It may not seem like a big deal if a device such as your Fitbit is hacked. However, many companies would be interested in obtaining this information and could profit from it. What if an insurance company could improve its models with this data and, as a result, raise rates for customers with poor vital signs? Earlier this year, hackers stole sensitive information from a casino after gaining access to a smart thermometer in a fish tank. If hackers can steal data from companies that prioritize security, they will probably have a much easier time doing the same to an average person. The data you generate is valuable, and hackers can find a way to monetize it.

3. Invasion of privacy by device makers

Our personal information is not only obtainable through hacks. We may be willingly giving it away to the makers of the devices we use. Each device and application has its own policies regarding the data it chooses to collect and store. A GPS app may store your travel history so it can make recommendations in the future. However, it may also use this information to make money on marketing offers for local businesses. Device makers are financially motivated to use your information to improve their products and target their marketing efforts.

4. Invasion of privacy by government agencies

Government agencies are another group that may have access to our personal information. Some agencies, like the FBI, have the power to request data from device makers in order to gather intelligence related to possible threats. Law enforcement may be able to access certain information for purposes of investigations. Last year, police used data from a murdered woman's Fitbit to charge her husband with the crime. Lawyers may also be able to subpoena data in criminal and civil litigation.

IoT devices will continue to play an important role in everyone's lives, creating an integrated system that leads to increased efficiency for all. However, consumers should remain informed, and if given a choice between brands of devices, such as Amazon Alexa or Google Home, consider choosing the company that better addresses the security and privacy issues discussed above. This will send a message that consumers care, and encourage positive change.

The View from The Middle
By Anonymous | December 4, 2018

If you are like me, you probably spend quite a bit of time online.

We read news articles online, watch videos, plan vacations, shop and much more. At the same time, we are generating data that is used to tailor advertising to our personal preferences. Profiles constructed from our personal information are used to suggest movies and music we might like. Data-driven recommendations make it easier for us to find relevant content. Advertising also provides revenue for content providers, which allows us to access those videos and articles at reduced cost.

But is the cost really reduced? How valuable is your data and how important is your privacy? Suppose you were sharing a computer with other members of your household. Would you want all your activities reflected in targeted advertising? Most of the time we are unaware that we are under surveillance and have no insight into the profiles created using our personal information. If we don’t want our personal information shared, how do we turn it off?

To answer that question, let’s first see what is being collected. We’ll put a proxy server between the web browser and the internet to act as a ‘Man-in-the-Middle’. All web communication goes through the proxy server which can record and display the content. We can now see what is being shared and where it is going.
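
To make this concrete, here is a minimal sketch of such a logging proxy written as an add-on for mitmproxy, an open-source intercepting proxy. The tracker domains listed are the ones discussed below (the Clearbrain domain is an assumption), and a real setup would capture far more.

```python
# trackers.py - a minimal mitmproxy add-on that logs requests sent to a few
# ad/analytics hosts, together with the headers that interest us.
# Run with:  mitmproxy -s trackers.py   (the browser must be pointed at the
# proxy and must trust its certificate so HTTPS traffic can be inspected).
from mitmproxy import http

# Domains mentioned in this post; the Clearbrain domain is a guess.
TRACKER_HOSTS = ("adnxs.com", "doubleclick.net", "atdmt.com", "clearbrain.com")

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if any(host == t or host.endswith("." + t) for t in TRACKER_HOSTS):
        # The Referer header shows which page triggered the call; DNT and
        # X-Proxy-Origin show what the browser and proxy are disclosing.
        print(f"[tracker] {host}{flow.request.path[:60]}")
        for header in ("Referer", "DNT", "X-Proxy-Origin"):
            if header in flow.request.headers:
                print(f"    {header}: {flow.request.headers[header]}")
```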

The Privacy Settings of our Chrome browser allow us to turn off web services that share data. We also enable ‘Do Not Track’ to request that sites not track our browsing habits across websites.

Let’s see what happens when we browse to the webpage of a popular travel site and perform a search for vacation accommodation. In our proxy server we observe that the travel website caused many requests to be sent from our machine to advertising and analytics sites.

We can see requests being made to AppNexus (secure.adnxs.com), a company that builds groups of users for targeted advertising. These requests use the X-Proxy-Origin HTTP header to transmit our IP address. Since IP addresses can be associated with a geographic location, this is personal data we may prefer to protect.

Both the Google Marketing Platform (doubleclick.net) and AppNexus receive details of the travel search in the Referer HTTP header. They learn the intended destination, the travel dates, and the number of adults and children travelling.
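
As a small, hypothetical illustration of how much a tracker can recover from that one header, the snippet below parses a made-up Referer value; the host and parameter names are invented, not the travel site's real ones.

```python
from urllib.parse import urlparse, parse_qs

# A made-up Referer value of the kind a travel search might leak; the host
# and parameter names are invented, not the real site's.
referer = ("https://travel.example.com/search"
           "?destination=Lisbon&checkin=2018-12-20&checkout=2018-12-27"
           "&adults=2&children=1")

query = parse_qs(urlparse(referer).query)
search = {key: values[0] for key, values in query.items()}
print(search)
# {'destination': 'Lisbon', 'checkin': '2018-12-20', 'checkout': '2018-12-27',
#  'adults': '2', 'children': '1'}
```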

ATDMT (ad.atdmt.com) is owned by Atlas Solutions, a Facebook subsidiary. It is using a one-pixel image as a tracking bug even though the Do Not Track header is set. Clearbrain, a predictive analytics company, is also using a tracking bug.
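
For readers who have not met a tracking bug before, the sketch below shows roughly how a one-pixel endpoint works, assuming a simple Flask server; the endpoint name and logged fields are illustrative, not any vendor's actual implementation. Note that the server can read the DNT header and is entirely free to ignore it.

```python
# pixel.py - a rough sketch of a tracking-bug endpoint. It returns a 1x1
# transparent GIF and logs whoever fetched it, whether or not DNT is set.
# Endpoint name and logged fields are illustrative, not any vendor's API.
from flask import Flask, request, Response

app = Flask(__name__)

# A standard 1x1 transparent GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

@app.route("/t.gif")
def track():
    # The server sees everything it needs to build a profile...
    print({
        "ip": request.remote_addr,
        "referer": request.headers.get("Referer"),
        "user_agent": request.headers.get("User-Agent"),
        "dnt": request.headers.get("DNT"),  # ...and may simply ignore DNT.
    })
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8000)
```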

Now we’ll have a look at the effectiveness of some popular privacy tools:

  1. The Electronic Frontier Foundation's 'Privacy Badger' combined with 'Adblock Plus' in Chrome. Privacy Badger is a browser add-on that stops advertisers and other third-party trackers from secretly tracking what pages you look at on the web. Adblock Plus is a free, open-source ad blocker that allows users to customize how much advertising they want to see.
  2. The Cliqz browser with Ghostery enabled. Ghostery is a privacy plugin giving control over ads and tracking technologies. Cliqz is an open source browser designed for privacy.

There are now far fewer calls to third-party websites. Privacy Badger has successfully identified and blocked the ATDMT tracking bug. Our IP address and travel search are no longer being collected. However, neither Privacy Badger nor Ghostery detected the Clearbrain tracker. Since Privacy Badger learns to spot trackers as we browse, it may just need more time to detect this bug.

While these privacy tools are quite effective at providing some individual control over personal information, they are by no means a perfect solution. This approach places the burden of protecting privacy on the individual who does not always understand the risks. While these tools are designed to be easy to install, many people are unfamiliar with browser plugins.

Furthermore, we are making a trade-off between our privacy and access to tailored advertising. Content websites we love to use may be sponsored by the advertising revenue we are now blocking.

For now, these tools at least offer the ability to make a choice.

The Customer Is Always Right: No Ethics in Algorithms Without Consumer Support
by Matt Swan | December 4, 2018

There is something missing in data science today: ethics. It seems like there is a new scandal every day; more personal data leaked to any number of bad actors in the greatest quantities possible. Big Data has quickly given way to Big Data Theft.

The Internet Society of France, for example, a public interest group advocating for online rights, is pushing Facebook to fix the problems that led to the recent string of violations. They're suing for €100 million (~$113 million USD) and threatening EU-based group action if appropriate remedies are not made. Facebook is also being pursued by a public interest group in Ireland and recently paid a fine of £500,000 (~$649,000 USD) for its role in the Cambridge Analytica breach. Is this the new normal?

Before we answer that question, it might be more prudent to ask why this happened in the first place. That answer is simple.

Dollars dictate ethics.

Facebook’s primary use of our data is to offer highly targeted (read: effective) advertising. Ads are the price of admission and it seems we’ve all come to terms with that. Amid all the scandals and breaches, Facebook made their money – far more money than they paid in fines. And they did it without any trace of ethical introspection. Move fast and break things, so long as they’re not your things.

Dollars dictate ethics.

Someone should be more concerned about this. In the recent hearings in the US Congress in early September, there was talk about regulating the tech industry to try to bring these problems under control. This feels like an encouraging move in the correct direction. It isn’t.

First, laws cannot enforce ethical behavior. Laws can put measures in place to reduce the likelihood of breaches, punish those who fail to safeguard personal data adequately, or penalize those who fail to correct algorithms with measurable bias, but they cannot require a company to have a Data Ethicist on the payroll. We've already noted that Facebook made more money than they paid in fines, so what motivation do they have to change their behavior?

Second, members of Congress are more likely to believe TensorFlow is a new setting on their Keurig than to know it's an open-source machine learning framework. Because of this reality, organizations such as 314 Action prioritize electing more STEM professionals to government, arguing that technology has progressed quickly and government is out of touch. We need individuals who have a thorough understanding of technological methods.

Meanwhile, higher education is making an effort to import ethics into computer and data science programs, but there are still limitations. Some programs, such as UC Berkeley’s MIDS program, have implemented an ethics course. However, at the time of this writing, no program includes a course in ethics as a graduation requirement.

Dollars dictate ethics.

Consider the time constraints; only so many courses can be taken. If one program requires an ethics course, the programs that do not will be at an advantage in recruiting because they will argue the ethics course is a lost opportunity to squeeze in one more technology course. This will resonate with prospective students since there are no Data Ethicist jobs waiting for them and they’d prefer to load up on technology-oriented courses.

Also, taking an ethics course does not make one ethical. Ultimately, while each budding data scientist should be forced to consider the effects of his or her actions, it is certainly no guarantee of future ethical behavior.

If companies aren’t motivated to pursue ethics themselves and the government can’t force them to be ethical and schools can’t force us to be ethical, how can we possibly ensure the inclusion of ethics in data science?

I’ve provided the answer three times. If it were “ruby slippers”, we’d be home by now.

Dollars dictate ethics.

All the dollars start with consumers. And it turns out that when consumers collectively flex their economic muscles, companies bend and things break. Literally.

In late 2017, Fox News anchor Sean Hannity made some questionable comments regarding a candidate for an Alabama Senate seat. Consumers contacted Keurig, whose commercials aired during Hannity's show, and complained. Keurig worked with Fox to ensure its ads would no longer be shown at those times, a decision that also resulted in the untimely death of a number of Keurig machines at the hands of Hannity's supporters.

The point is this: if we want to effect swift and enduring change within tech companies, the most effective way to do that is through consistent and persistent consumer influence. If we financially support companies that consider the ethical implications of their algorithms, or simply avoid those that don’t, we can create the necessary motivation for them to take it seriously.

But if we keep learning about the newest Facebook scandal from our Facebook feeds, we shouldn't expect anything more than the same "ask for forgiveness, not permission" attitude we've been getting all along.

Sources:
https://www.siliconrepublic.com/companies/data-science-ethicist-future 
https://www.siliconrepublic.com/enterprise/facebook-twitter-congress
https://www.siliconrepublic.com/careers/data-scientists-ethics
https://news.bloomberglaw.com/privacy-and-data-security/facebook-may-face-100m-euro-lawsuit-over-privacy-breach
https://www.nytimes.com/2017/11/13/business/media/keurig-hannity.html
http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/

Freudian Slips
by Anonymous | December 4, 2018

In the ongoing battleground in the use and abuse of personal data in the age of big data, we often see competing forces of companies wanting to create new innovation and regulators who want to enforce limits in consideration of privacy or sociological harms that could arise from unmitigated usage. Often we will see companies or organizations who want as much data as possible, unfettered by considerations of regulation or other restrictions.

An interesting way to think about the underlying dynamic is to superimpose psychological models of human behavior on the cultural forces at play. In her book Patterns of Culture, Ruth Fulton Benedict wrote that culture is "personality writ large" [1]. One model for understanding the forces underlying human behavior is the Freudian one of the Id, Superego and Ego. In this model, Freud identified the Id as the primal driving force of human gratification, whether the urge is to satiate hunger or is sexual in nature; it is an entirely selfish want or need for gratification. The Superego is the element of the mind that adheres to social norms and morality and is aware of the inherent social contract, the mutually agreed-upon rules that enable individuals to co-exist. The Ego, in his model, is the output of these two forces: the observed behavior presented to the world. An unhealthy balance of either Id or Superego would result in diseased behavior. Consider a person who feels hungry and simply takes food off other people's plates without hesitation; this Id imbalance, without sufficient Superego restraint, would be considered unhealthy.

Drawing an analogy to behavior in the world of big data, we can see the Id as the urge of companies toward innovation or profit. It is what drives people to search for new answers, create innovative products, make money, and dive into solving a problem regardless of the consequences. However, unchecked by any consideration of the social contract, the Superego, unhealthy behavior begins to leak out: privacy and data breaches, unethical use of people's data, and even abusive workplace environments. Consider Uber, an extraordinarily innovative company with rapid growth. Under Travis Kalanick, there was a strong Id component to its rapid, take-no-prisoners growth. In the process, people's privacy was often overlooked. The company often flouted city regulations and cease-and-desist orders. It created an application to evade law enforcement [2]. It also used data analytics to determine whether passengers were having one-night stands [3].

Of course, an inherent lack of trust results from some of these unchecked forces. But without that driving Id, that drive to create, innovate and make money, it is unlikely Uber would have grown so rapidly. It is also likely no coincidence that this unchecked Id leaked into the workplace in similar ways, resulting in rampant sexual harassment and misconduct allegations and the eventual resignation of the CEO. Google, which has quickly grown into one of the biggest companies in the world, has also recently faced similar allegations of rampant sexual misconduct.

Similarly, on the flip side, a heavily Superego organization, one that is overly protective and regulatory and always imposing stringent rules, might also be considered unhealthy; consider how little innovation comes out of governmental organizations and institutions. This Freudian perspective, superimposed on the battles between big data organizations and government regulators, is one way to interpret the different roles these groups play. Neither could exist without the other, and the balance between the two is what creates healthy growth. A necessary amount of regulation, reflecting social consequences, combined with the corresponding primal urges for recognition or power, can create the type of growth that actually produces healthy organizations.

References
[1] http://companyculture.com/113-culture-is-personality-writ-large/
[2] https://thehill.com/policy/technology/368560-uber-built-secret-program-to-evade-law-enforcement-report
[3] https://boingboing.net/2014/11/19/uber-can-track-your-one-night.html

The Next Phase of Smart Home Tech: Ethical Implications of Google’s New Patent
By Jennifer Podracky | December 4, 2018

On October 30, 2018, Google filed a new patent for an extension of their Google smart home technology, titled “SMART-AUTOMATION SYSTEM THAT SUGGESTS OR AUTOMATICALLY (sp) IMPLEMENTS SELECTED HOUSEHOLD POLICIES BASED ON SENSED OBSERVATIONS.” In summary, this patent proposes a system by which an interconnected smart home system can detect the status and activities of persons in the household via audio or visual cues, and then implement a home-wide policy across the rest of the system based on rules that Google users have set up.

In English, what that means is that Google will use either microphones or cameras built into its smart devices to see who is home and what they are doing, and then make decisions on how to activate or deactivate other smart devices as a result. For example, it may hear the sound of a child playing in his or her room, and then determine via cameras and microphones in other devices that there is no one else in the home. Based on earlier observation of the home, Google already knows that there are two adults and one child who live in the home full-time; by adding this information to the information it has on the home's current state, Google can infer that the child is home alone unsupervised. From there, Google can do a multitude of things: notify the owner(s) of the Google smart home account that the child is unsupervised, lock any unsecured smart locks, turn off smart lights in the front rooms of the home, disable the smart television in the child's room, and so on. The action(s) that Google will take will depend on the policies and rules that the smart home's users have configured. Google can also suggest new policies to the smart home users, based on the home status that it's inferred; if it determines that a child is home alone and no policies have been configured for this situation, it can suggest the above actions.
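
To ground the idea, here is a toy sketch of what such a user-configured policy could look like in code; the class, fields, and action names are invented for illustration and are not taken from the patent or any Google API.

```python
# A toy version of a user-configured household policy: infer a state from
# (already-processed) sensor observations and return configured actions.
# All names, fields and action strings here are invented for illustration.
from dataclasses import dataclass

@dataclass
class HomeState:
    occupants: set        # identities the sensors believe are present
    adults_present: bool  # derived from audio/visual recognition

def child_home_alone_policy(state: HomeState) -> list:
    """Actions a household might configure for an unsupervised child."""
    if state.occupants and not state.adults_present:
        return [
            "notify_account_owner",
            "lock_unsecured_smart_locks",
            "turn_off_front_room_lights",
            "disable_child_bedroom_tv",
        ]
    return []

print(child_home_alone_policy(HomeState(occupants={"child"}, adults_present=False)))
# ['notify_account_owner', 'lock_unsecured_smart_locks', ...]
```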

 

Ethical Implications for the Smart Home Consumer

There are a couple of key components of this patent that could be cause for alarm with respect to privacy.

What Google Can See

Thus far, commercially released Google smart home devices (specifically the Google Home product line) have not included cameras. Today's products include microphones, and are constantly listening to all voices and sounds in the home, awaiting the "wake word" that prompts them to take some action. Google can use the data that it collects from these microphones, even data not associated with device commands, to learn more about the individuals living in the home. Google devices can determine from regular household noises when different individuals usually arrive home, when they usually eat dinner, whether they cook or order out, how often they clean, and so on. By adding a camera to this product line, Google will always be both listening and watching. This means that Google won't just be able to know when you cook, but also see what you cook. It will also be able to see your kitchen, including what brands of kitchenware you use, how high-end your appliances are, and how often you buy fresh produce. Perhaps most alarmingly, Google will also be able to see what you look like while you're in the kitchen. Google can then use this information to draw conclusions about your health, income, and more.


What Google Can Learn

Additionally, Google can learn more about individuals in the home based on the policies that they choose to implement. By setting a policy that detects noise in a child's room after 8pm, Google can infer that this child's bedtime is 8pm and then suggest other policies related to that (e.g. restricting TV usage). By setting policies restricting channel availability to specific household members, Google can infer which TV shows and channels specific individuals are (or aren't) allowed to watch.
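
As a small, hypothetical sketch of this point: even with no audio or video at all, the mere existence of configured policies supports inferences like the ones below. The policy format and field names are invented for illustration.

```python
# Illustrative only: inferences available from the mere existence of
# user-configured policies, with no audio or video at all.
policies = [
    {"type": "quiet_hours", "room": "child_bedroom", "start": "20:00"},
    {"type": "channel_block", "member": "teen", "channels": ["HBO", "MTV"]},
]

inferences = []
for p in policies:
    if p["type"] == "quiet_hours" and p["room"] == "child_bedroom":
        inferences.append(f"child's bedtime is around {p['start']}")
    elif p["type"] == "channel_block":
        blocked = ", ".join(p["channels"])
        inferences.append(f"{p['member']} is not allowed to watch {blocked}")

print(inferences)
```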

Why this matters

By watching and listening to the home, Google is amassing an incredible amount of data on both the individual(s) who purchased the smart devices and anyone else who is in the home at any time (including minors and non-consenting visitors).

What can Google do with all this data? Well, in a 2016 patent titled “PRIVACY-AWARE PERSONALIZED CONTENT FOR THE SMART HOME”, Google discusses how it could use visual clues like the contents of your closet to determine what kinds of clothes and brands you like, to then market related content to you. Specifically: “a sensing device 138 and/or client device may recognize a tee-shirt on a floor of the user’s closet and recognize the face on the tee-shirt to be that of Will Smith. In addition, the client device may determine from browser search history that the user has searched for Will Smith recently. Accordingly, the client device may use the object data and the search history in combination to provide a movie recommendation that displays, “You seem to like Will Smith. His new movie is playing in a theater near you.”” Google will use the audio and visual data that it collects to determine your likes and dislikes and market content to you accordingly.

Google can also provide the data of non-consenting individuals back to the owner of the smart home system. Suppose you've hired a babysitter for the evening to watch your child; Google can report back to you at the end of the night saying how much time she spent with the child, what she watched on TV, and what she looked at on the internet. Google can hear if your child is whispering past their bedtime, "infer mischief" (which is a direct quote), and then tattle to you. Google can see and hear if your teenager is crying in their room, and then report back on its findings to you without their knowledge. For the record, these are all real examples listed in the patent, so Google is aware of these uses too.

As of today, these patents have not been implemented (as far as we know) as part of the commercially available Google smart home product line. However, as the product line advances, it is important that we keep the privacy and ethical concerns in mind before bringing the latest-and-greatest device into a home that is shared with others.

Hello Berkeley!

December 4th, 2018

My name is Taylor and I'm an incoming student in the Master of Information and Data Science program. My current research interests are natural language understanding and generation. Reach me at ude.yelekreb.loohcsinull@rolyat!