Freudian Slips

by Anonymous | December 4, 2018

In the ongoing battle over the use and abuse of personal data in the age of big data, we often see two competing forces: companies that want to innovate, and regulators who want to enforce limits against the privacy or sociological harms that can arise from unchecked usage. Companies and organizations often want as much data as possible, unfettered by regulation or other restrictions.

An interesting way to think about the underlying dynamic is to superimpose psychological models of human behavior onto the cultural forces at play. As Ruth Fulton Benedict wrote in Patterns of Culture, culture is “personality writ large” [1]. One model for understanding the forces underlying human behavior is the Freudian one of the Id, the Superego, and the Ego. In this model, Freud identified the Id as the primal driving force of human gratification, whether the urge is to satiate hunger or sexual in nature; it is an entirely selfish want or need for gratification. The Superego is the element of the mind that adheres to social norms and morality and is aware of the inherent social contract, the mutually agreed-upon rules that enable individuals to co-exist. The Ego is the output of these two forces: the observed behavior presented to the world. An imbalance toward either Id or Superego, in this model, results in disordered behavior. Consider a person who feels hungry and simply takes food off other people's plates without hesitation: an Id insufficiently restrained by the Superego, and thus unhealthy behavior.

Drawing an analogy to behavior in the world of big data, we can see the Id as a company's urge toward innovation or profit. It is what drives people to search for new answers, create innovative products, make money, and dive into solving a problem regardless of the consequences. Unchecked by any adherence to the social contract (the Superego), unhealthy behavior begins to leak out: privacy and data breaches, unethical use of people's data, and even abusive workplace environments. Consider Uber, an extraordinarily innovative company with rapid growth. Led by Travis Kalanick, there was a strong Id component to that growth, a take-no-prisoners approach. In the process, people's privacy was often overlooked. Uber repeatedly flouted city regulations and cease-and-desist orders, built an application to evade law enforcement [2], and even used data analytics to determine whether passengers were having one-night stands [3].

Of course, an inherent lack of trust results from some of these unchecked forces. But without that driving Id, that drive to create, innovate, and make money, it is unlikely Uber would have grown so rapidly. It is also likely no coincidence that this unchecked Id leaked into the workplace itself, producing rampant sexual harassment and misconduct allegations and, eventually, the resignation of the company's CEO. Google, which quickly grew into one of the biggest companies in the world, has also recently faced similarly widespread sexual misconduct allegations.

On the flip side, this is also why a heavily Superego organization, one that is overly protective and regulatory and always weighing stringent rules, might be considered unhealthy. Consider how little innovation comes out of governmental organizations and institutions. This Freudian perspective, superimposed on the battles between big data organizations and government regulators, is one way to interpret the roles the different groups are playing. Neither could exist without the other, and the balance between the two is what creates healthy growth. A necessary amount of regulation, reflecting social consequences, combined with the primal urges for recognition or power, can create the type of growth that actually produces healthy organizations.

References
[1] (n.d.). Retrieved from companyculture.com/113-culture-is-personality-writ-large/
[2] (n.d.). Retrieved from thehill.com/policy/technology/368560-uber-built-secret-program-to-evade-law-enforcement-report
[3] (n.d.). Retrieved from boingboing.net/2014/11/19/uber-can-track-your-one-night.html

The Next Phase of Smart Home Tech: Ethical Implications of Google’s New Patent

By Jennifer Podracky | December 4, 2018

On October 30, 2018, Google filed a new patent for an extension of its Google smart home technology, titled “SMART-AUTOMATION SYSTEM THAT SUGGESTS OR AUTOMATICALLY IMPLEMENTS SELECTED HOUSEHOLD POLICIES BASED ON SENSED OBSERVATIONS.” In summary, this patent proposes an interconnected smart home system that can detect the status and activities of people in the household via audio or visual cues, and then implement a home-wide policy across the rest of the system based on rules that Google users have set up.

In English, what that means is that Google will use either microphones or cameras built into its smart devices to see who is home and what they are doing, and then decide how to activate or deactivate other smart devices as a result. For example, it may hear the sound of a child playing in his or her room, and then determine via cameras and microphones in other devices that no one else is home. Based on earlier observation of the home, Google already knows that two adults and one child live in the home full-time; by adding this information to what it knows about the home's current state, Google can infer that the child is home alone and unsupervised. From there, Google can do a multitude of things: notify the owner(s) of the Google smart home account that the child is unsupervised, lock any unsecured smart locks, turn off smart lights in the front rooms of the home, disable the smart television in the child's room, and so on. The action(s) that Google takes will depend on the policies and rules that the smart home's users have configured. Google can also suggest new policies to the smart home users, based on the home status it has inferred; if it determines that a child is home alone and no policies have been configured for this situation, it can suggest the above actions.
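The sense-infer-act loop described above is, at its core, a rules engine: sensed observations are reduced to a household state, and user-configured policies fire actions when their conditions match. A minimal sketch of that idea follows; all names, policy shapes, and actions here are invented for illustration and are not from the patent itself.

```python
# Hypothetical sketch of the patent's policy mechanism: a household state
# derived from sensors, plus user-configured policies that trigger actions.
# Everything here (names, fields, actions) is invented for illustration.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HouseholdState:
    occupants_home: set = field(default_factory=set)  # who sensors believe is home
    adults_home: int = 0                              # inferred count of adults present

@dataclass
class Policy:
    name: str
    condition: Callable[[HouseholdState], bool]  # when should this policy fire?
    actions: list                                # what the system does when it fires

def evaluate(state: HouseholdState, policies: list) -> list:
    """Return every action whose policy condition matches the current state."""
    triggered = []
    for p in policies:
        if p.condition(state):
            triggered.extend(p.actions)
    return triggered

# A policy mirroring the article's example: a child detected home alone.
home_alone = Policy(
    name="child-home-alone",
    condition=lambda s: "child" in s.occupants_home and s.adults_home == 0,
    actions=["notify_account_owner", "lock_smart_locks", "disable_child_tv"],
)

state = HouseholdState(occupants_home={"child"}, adults_home=0)
print(evaluate(state, [home_alone]))
# → ['notify_account_owner', 'lock_smart_locks', 'disable_child_tv']
```

The point of the sketch is structural: once occupancy is inferred from sensors, triggering house-wide actions is trivial bookkeeping; the privacy weight sits entirely in the sensing and inference steps.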


Ethical Implications for the Smart Home Consumer

There are a couple of key components of this patent that could be cause for alarm with respect to privacy.

What Google Can See

Thus far, commercially released Google smart home devices (specifically the Google Home product line) have not included cameras. Today's products include microphones, and are constantly listening to all voices and sounds in the home, waiting for the “wake word” that prompts them to take some action. Google can use the data it collects from these microphones, even data not associated with device commands, to learn more about the individuals living in the home. Google devices can determine from regular household noises when different individuals usually arrive home, when they usually eat dinner, whether they cook or order out, how often they clean, and so on. By adding a camera to this product line, Google will always be both listening and watching. This means that Google won't just know when you cook, but will also see what you cook. It will also be able to see your kitchen, including what brands of kitchenware you use, how high-end your appliances are, and how often you buy fresh produce. Perhaps most alarmingly, Google will also be able to see what you look like while you're in the kitchen. Google can then use this information to draw conclusions about your health, income, and more.

[Image: Smart Home in the Kitchen]

What Google Can Learn

Additionally, Google can learn more about individuals in the home based on the policies that they choose to implement. By setting a policy that detects noise in a child’s room after 8pm, Google can infer that this child’s bedtime is 8pm and then suggest other policies related to that (e.g. restricting TV usage). By setting policies restricting channel availability to specific household members, Google can infer which TV shows and channels that specific individuals are (or aren’t) allowed to watch.
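This second channel of leakage is worth making concrete: the policies themselves are data, and each rule a user configures implies a fact about the household before any sensor fires. A small sketch of that inference, with invented policy shapes and wording (none of this is from the patent):

```python
# Hypothetical sketch: configured policies themselves reveal household facts,
# even before any sensing occurs. Policy formats here are invented.
def infer_from_policies(policies: list) -> list:
    """Derive plausible household facts implied by user-configured rules."""
    inferences = []
    for p in policies:
        if p["type"] == "quiet_hours" and p["room"] == "child_bedroom":
            # A noise-after-8pm rule implies a bedtime.
            inferences.append("child's bedtime is around " + p["after"])
        elif p["type"] == "channel_block":
            # A channel restriction implies what a member may not watch.
            inferences.append(p["member"] + " is not allowed to watch " + p["channel"])
    return inferences

configured = [
    {"type": "quiet_hours", "room": "child_bedroom", "after": "8pm"},
    {"type": "channel_block", "member": "teen", "channel": "HBO"},
]
print(infer_from_policies(configured))
# → ["child's bedtime is around 8pm", 'teen is not allowed to watch HBO']
```

Each inferred fact can then seed the patent's "suggested policies" loop, so the more rules a household configures, the richer the profile becomes.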

Why This Matters

By watching and listening to the home, Google is amassing an incredible amount of data on both the individual(s) who purchased the smart devices and anyone else who is in the home at any time (including minors and non-consenting visitors).

What can Google do with all this data? Well, in a 2016 patent titled “PRIVACY-AWARE PERSONALIZED CONTENT FOR THE SMART HOME,” Google discusses how it could use visual cues, such as the contents of your closet, to determine what kinds of clothes and brands you like, and then market related content to you. Specifically: “a sensing device 138 and/or client device may recognize a tee-shirt on a floor of the user’s closet and recognize the face on the tee-shirt to be that of Will Smith. In addition, the client device may determine from browser search history that the user has searched for Will Smith recently. Accordingly, the client device may use the object data and the search history in combination to provide a movie recommendation that displays, “You seem to like Will Smith. His new movie is playing in a theater near you.”” Google will use the audio and visual data it collects to determine your likes and dislikes and market content to you accordingly.
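The patent's example boils down to a join between two data sources: objects recognized in the home and recent search history. A toy sketch of that combination, using the patent's Will Smith scenario as sample data (the function and matching logic are invented, not Google's):

```python
# Hypothetical sketch of the 2016 patent's example: combine an object
# recognized in the home with recent search history to personalize an ad.
# The matching logic and message template are invented for illustration.
def recommend(recognized_objects: set, search_history: list):
    """Return a personalized message when home imagery and searches agree."""
    for name in recognized_objects:
        # Naive overlap check: the recognized name appears in a recent search.
        if any(name.lower() in query.lower() for query in search_history):
            return ("You seem to like " + name +
                    ". His new movie is playing in a theater near you.")
    return None  # no overlap, no recommendation

print(recommend({"Will Smith"}, ["Will Smith new movie", "weather tomorrow"]))
# → You seem to like Will Smith. His new movie is playing in a theater near you.
```

Neither source alone is especially revealing; it is the cross-referencing of in-home imagery with browsing behavior that makes the profile valuable, which is exactly what the patent describes.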

Google can also provide the data of non-consenting individuals back to the owner of the smart home system. Suppose you've hired a babysitter for the evening to watch your child; Google can report back to you at the end of the night on how much time she spent with the child, what she watched on TV, and what she looked at on the internet. Google can hear if your child is whispering past their bedtime, “infer mischief” (a direct quote), and then tattle to you. Google can see and hear if your teenager is crying in their room, and then report its findings back to you without their knowledge. For the record, these are all real examples listed in the patent, so Google is aware of these uses too.

As of today, these patents have not been implemented (as far as we know) as part of the commercially available Google smart home product line. However, as the product line advances, it is important that we keep the privacy and ethical concerns in mind before bringing the latest-and-greatest device into a home that is shared with others.