The TikTok Hubbub: What’s Different This Time Around?
By Anonymous | September 25, 2020

Barely three years after its creation, TikTok is the latest juggernaut to emerge in the social media landscape. With over two billion downloads (more than 600 million of them this year alone), the short-form video app that lets users lip sync and share viral dances now sits alongside Facebook, Twitter, and Instagram in both the size of its user base and its ubiquity in popular culture. With that popularity has come a firestorm of criticism over privacy concerns, as well as powerful players in the U.S. government labeling the app a national security threat.


Image from: https://analyticsindiamag.com/is-tiktok-really-security-risk-or-america-being-paranoid/

Censorship
The largest reason TikTok garners such scrutiny is that its parent company, ByteDance, is a Chinese company and as such is governed by Chinese law. Early criticisms of the company noted possible examples of censorship, including the removal of the account of a teen who was critical of human rights abuses by the Chinese government, and a German study that found TikTok hid posts made by LGBTQ users and users with disabilities. Excluding these viewpoints from the platform certainly raises censorship concerns. It is worth noting that TikTok is not actually available in China, and the company maintains that they “do not remove content based on sensitivities related to China”.

Data Collection
Like many of its counterparts, TikTok collects a vast amount of data from its users, including location, IP addresses, and browsing history. In the context of social media apps, this is the norm; it is the question of where this data might ultimately flow that draws the most criticism. The Wall Street Journal notes that “concerns grow that Beijing could tap the social-media platform’s information to gather data on Americans.” The idea that this personal information could be shared with a foreign government is indeed alarming, but it might leave one wondering why regulators have been fairly easy on U.S.-based companies like Facebook, whose role in 2016’s election interference is still up for debate, or why citizens do not find it more problematic that the U.S. government frequently requests user information from Facebook and Google. In contrast, the European Union has been at the forefront of protecting user privacy, taking preemptive steps by implementing the GDPR so that foreign companies, such as Facebook, could not misuse user data without consequence. Control of personal data is evidently a global concern, but one the U.S. seems to take seriously only when it involves a foreign company.


Image from: https://www.usnews.com/news/technology/articles/2020-03-04/us-senator-wants-to-ban-federal-workers-from-using-chinese-video-app-tik-tok

The Backlash
In November 2019, with bipartisan support, a U.S. national security probe of TikTok was initiated over concerns about user data collection, content censorship, and the possibility of foreign influence campaigns. In September 2020, President Trump went so far as to order a ban on TikTok in the U.S. Currently, it appears that Oracle has become TikTok’s “trusted tech partner” in the United States, possibly allaying some fears about where the application’s data is stored and processed, and under whose authority, and providing a path for TikTok to keep operating within the U.S.

For its part, TikTok is attempting to navigate very tricky geopolitical demands (the app has also been banned in India, and Japan and others may follow), even establishing a Transparency Center to “evaluate [their] moderation systems, processes and policies in a holistic manner”. Whether its actions will actually assuage the misgivings of both the public and the government is anyone’s guess, and it can also be argued that where the collected data is purportedly stored and who owns the company are largely irrelevant to the issues raised.

As the saga over TikTok’s platform and policies continues to play out, hopefully the public and lawmakers will not miss the broader issues raised over privacy practices and user data. It is somewhat convenient to scrutinize a company from a nation with which the U.S. has substantive human rights, political, and trade disagreements. While TikTok’s policies should indeed raise concern, we would do well to ask many of the same questions of the applications we use, regardless of where they were founded.

Steps to Protect Your Online Data Privacy
By Andrew Dively | September 25, 2020

Some individuals, when asked why they don’t take more steps to protect their privacy, respond with something along the lines of, “I don’t have anything to hide.” But if I were to ask those same individuals to send me the usernames and passwords to their email accounts, very few would actually do it. When there is a lot of personal information about us on the internet, it can harm us in ways we never intended: future employers scouring social media for red flags, past connections searching for our physical addresses on Google, or potential litigators looking up our employer and job title on LinkedIn to determine whether we’re worth suing. This guide covers the various ways our data and lives are exposed on the web and how we can protect ourselves.

Social media is by far the worst offender when it comes to data privacy, not only because of the companies’ practices but also because of the information people willingly give up, which can be purchased by virtually any third party. I’d encourage you to Google yourself and see what comes up. If you see your page from networking sites like LinkedIn or Facebook, there are settings to remove these from public search engines; you then have to file a request with Google to remove the links once they no longer work. Within the same Google results page, go to Images and see what comes up there. These can usually be removed as well. I would recommend removing as much Personally Identifiable Information (PII) as possible from these pages, such as current city, employers, spouse, birth date, age, gender, pet names, or anything else that could potentially compromise your identity. Then go through your contacts and remove individuals you don’t know: I recommend using the highest privacy settings on these apps, but they can be circumvented if someone makes a fake account and sends you a friend request. Each of these social media sites has an option under its privacy settings to view your page from the perspective of an outsider; nothing should be visible other than your name and profile picture. Next, we will move on to protecting your physical privacy.

If I walked up to most individuals, they wouldn’t give me their physical address either, yet it takes only five seconds to find it on Google. If you scroll down further on the page where you searched your name, you will see other sites like BeenVerified.com, Whitepages.com, and MyLife.com. All it takes for someone to find where you live on these sites is your full name, age range, and the state you live in. These sites aggregate personal information from public records and other sources and sell it to companies and individuals who may be interested. You will find your current address and the places you’ve lived for the past ten years, all of your relatives and their information, net worth, birth date, age, credit score, criminal history, and more. The good news is that you can wipe your information from most of these sites by searching for their “opt out” form, which they are required by law to honor. If you want to take it a step further, you can set up a third-party mail service or P.O. Box with a physical mailing address for less than $10 per month, so you never have to give out your home address. Most people aren’t aware that even entities such as the Department of Motor Vehicles sell individuals’ address information, which gets aggregated by these companies. Protecting your physical address and other vital details can go a long way toward protecting your privacy.

As we wrap up, the key takeaway is to think about how your data can be compromised and to take steps to protect it before something happens. There are many more potential harms out there beyond identity theft. Rather than relying on the government to regulate data privacy in the U.S., we as individuals can take steps to reclaim our personal privacy and freedom.

Private Surveillance versus Public Monitoring
By Anonymous | September 18, 2020

In an era where digital privacy is regarded highly, we put ourselves in a contradictory position when we embed digital devices into every aspect of our lives.

One such device with a large fan club is the Ring doorbell, a product sold by Ring, an Amazon company. It serves the purpose of a traditional doorbell, but combined with its phone application it can record audio and video, monitoring motion detected between five and thirty feet from the fixture. Neighbors can then share their footage with each other for alerting and safety purposes.


Ring Video Doorbell from shop.ring.com

However, as most users of smart devices can anticipate, the data our devices generate rarely remains solely ours. Our data can enter the free market for alternate uses, analysis, and even sale. One of the main concerns that has surfaced for these nifty devices is behind-the-scenes access to the device’s data. Ring has partnered with law enforcement agencies across the United States. While the intention of this partnership is broadcast as a way to solve crime more efficiently, it raises a larger question. The Washington Post’s Drew Harwell points out that Ring users consent to the company giving recorded video to “law enforcement authorities, government officials and/or third parties” if the company thinks it’s necessary to comply with “legal process or reasonable government request,” according to its terms of service, and that the company says it can also store footage deleted by the user to comply with legal obligations. This raises a larger ethical question about whether such policies infringe on an individual consumer’s autonomy under the Belmont principle of Respect for Persons. If we can’t control what our devices record and store, and what that data is used for, who should have that power?

What began as a product to protect personal property has gained the power to become a tool for nationwide monitoring, voluntary or not. A product intended for private property surveillance can become a tool for public surveillance, given the access law enforcement has to device data. While the discussion of the power given to law enforcement agencies is larger in scope, in the context of the Ring device it leaves us wondering whether a single product has acquired a beastly capability to become a tool for mass surveillance. This creates a direct link to the Fair Information Practice Principles. Per the Collection Limitation principle, the collection of personal information should be limited and obtained by consent. Ring devices blur the definition of personal information here. Is a record of when you leave and enter your home your personal information? If your neighbor captures your movements on their device, does their consent to police requests for access compromise your personal autonomy, given that it is your activity being shared?

In contrast, an ethical dilemma also arises. If neighbors sharing their device data with law enforcement can help catch a dangerous individual (as the Ring terms and conditions contemplate), is there a moral obligation to share that data even without the consent of the person recorded? This is the blurry line between informed consent and public protection.

Lastly, as law enforcement comes to rely more easily on devices like Ring, a larger question of protection equity arises. With a base cost of approximately $200 and a monthly subscription of approximately $15 to maintain the device’s monitoring, there is the possibility of a protection disparity. Will areas where people can afford these devices inherently receive better protection from local law enforcement because those crimes are faster and easier to solve? Per the Belmont principle regarding Justice, the burden and benefit of civilian protection should be evenly distributed across all local law enforcement agencies. Would it be equitable if police relied on devices like this as a precursor to offering aid in resolving crimes? On the other hand, these devices can also hinder law enforcement by giving a potential suspect early warning of police searches. Is that a fair advantage?


Police officer facing Ring doorbell

These foggy implications leave once crime-cautious citizens wondering whether these devices are blurring the lines of data privacy and ethics, and even contributing to a larger digital dystopia.

Data collection – Is it ethical?
By Sarah Wang | September 18, 2020

Companies’ data collection is growing rapidly and is projected to continue growing. Data collection refers to companies using a cornucopia of methods and sources to capture customer data across a wide range of metrics. The data collected can range from personal data, such as Social Security numbers and other identifiable information, to attitudinal data, such as consumer satisfaction, product desirability, and more.

Consumer data is collected for business purposes. For example, companies often analyze customer service records to understand, at scale, which interaction methods worked well, which did not, and how customers responded. It is also common for companies to sell customer data to third parties for profit or business collaboration, yet this practice is almost never clearly disclosed to customers.

Why are companies collecting customer data?
Targeted advertising is the main driver behind customer data collection. Targeted advertising is directed at audiences with certain traits, based on the product or the person the advertiser is promoting. Contextualized data can help companies understand customers’ behavior and personalize marketing campaigns. As a result, the increased likelihood of a purchase transaction raises companies’ return on investment.
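To make the economics of that last claim concrete, here is a minimal Python sketch; the impression counts, conversion rates, and revenue figures are entirely hypothetical, chosen only to show how a lift in conversion rate from targeting feeds into return on investment.

# Hypothetical numbers only: how a higher conversion rate from targeted
# advertising translates into a higher return on ad spend.

def roi(impressions, conversion_rate, revenue_per_sale, ad_spend):
    """Return on investment: (revenue - cost) / cost."""
    revenue = impressions * conversion_rate * revenue_per_sale
    return (revenue - ad_spend) / ad_spend

untargeted = roi(impressions=100_000, conversion_rate=0.005,
                 revenue_per_sale=40.0, ad_spend=10_000)
targeted = roi(impressions=100_000, conversion_rate=0.012,
               revenue_per_sale=40.0, ad_spend=10_000)

print(f"Untargeted ROI: {untargeted:.0%}")  # 100%
print(f"Targeted ROI: {targeted:.0%}")      # 380%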

Concerns about data collection
Data privacy and breaches are the major concerns about data collection. Last year, major corporations such as Facebook, Google, and Uber experienced data breaches that put tens of millions of personal records into the hands of criminals. These breaches are only the “tip of the iceberg” when it comes to hacked accounts and stolen data, and consumers are beginning to take notice. In research conducted by PwC, 69% of consumers said they believe companies are vulnerable to hacks and cyberattacks.

Over time, this has caused consumers to lose trust in the companies that hold their personal information. Only 10% of consumers feel they have complete control over their personal information. If customers don’t trust a business to protect their sensitive data and use it responsibly, the company will get nowhere in harnessing the value of that data to offer customers a better experience.

Last but not least, another downside of data-driven business is that it is subject to model bias. One example is Amazon’s recruiting algorithm, which favored male candidates and penalized resumes that included the word “women,” because the algorithm was trained on a sample heavily biased toward male employees.

How to reassure customers that their data is being protected?
First and foremost, companies need to show respect for their customers by providing full transparency about what data is collected, how the data will be used, and when, if ever, the data will be purged.

Secondly, companies need to give customers the option of not having their data collected. Each individual should be treated as an autonomous person capable of making decisions for themselves. Behind this idea is the principle that data should be owned by customers: individuals may consent to their data being used by companies, but only within certain boundaries and conditions.
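As one illustration of what such transparency and opt-out control could look like in practice, here is a minimal Python sketch; the DataUseRecord structure, its field names, and the example categories are hypothetical, not any real company’s disclosure format.

# Hypothetical sketch of a per-customer data-use disclosure.
# The structure, categories, and retention periods are illustrative only.
from dataclasses import dataclass

@dataclass
class DataUseRecord:
    category: str        # e.g. "purchase history", "precise location"
    purpose: str         # why the data is collected
    shared_with: list    # third parties that receive it, if any
    retention_days: int  # days until the data is purged
    consented: bool      # customer has explicitly opted in

disclosures = [
    DataUseRecord("purchase history", "product recommendations",
                  shared_with=[], retention_days=365, consented=True),
    DataUseRecord("precise location", "targeted advertising",
                  shared_with=["ad network"], retention_days=90, consented=False),
]

# Honoring the opt-out: only data the customer consented to may be used.
for record in disclosures:
    if record.consented:
        print(f"{record.category}: used for {record.purpose}, "
              f"purged after {record.retention_days} days")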

BIBLIOGRAPHY
1. Consumer Intelligence Series: Protect.me, Article by PWC, September 2017
https://www.pwc.com/us/en/advisory-services/publications/consumer-intelligence-series/protect-me/cis-protect-me-findings.pdf

2. Targeted Advertising, Wikipedia, January 2017
https://en.wikipedia.org/wiki/Targeted_advertising

The Social Credit System
By Anonymous | September 18, 2020

Many Westerners are familiar with China’s Great Firewall, a tool used by the Chinese Communist Party to monitor, control, and restrict information on the internet inside China’s borders. However, a less well-known surveillance practice has been quietly spreading like wildfire in China: an omnipresent surveillance system incorporated into all levels of government, society, and family life. This is the start of what is called the Social Credit System.

The Social Credit System is a point system used to evaluate the behavior and trustworthiness of people and businesses in China. The picture below shows some of the ways to gain or lose points. For instance, citizens who donate blood or money, or engage in charity work, gain points. However, failing to visit aging parents, playing too many video games, publicly opposing the government, or spreading rumors on the internet can result in a loss of points. Everything about a person can be used in calculating the social credit score, including daily habits, payment of bills and utilities, locations visited, online activity, and education.
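Mechanically, the system described above amounts to a running point tally. The Python sketch below is purely illustrative: the behaviors mirror the examples in the figure, but the point values and starting score are invented, since the real weights are not public.

# Toy illustration of a point-tally scoring system.
# Behaviors follow the examples above; the point values are invented.

POINT_ADJUSTMENTS = {
    "donates_blood_or_money": +5,
    "does_charity_work": +10,
    "pays_bills_on_time": +2,
    "does_not_visit_aging_parents": -10,
    "plays_excessive_video_games": -10,
    "spreads_rumors_online": -50,
    "publicly_opposes_government": -100,
}

def update_score(score, observed_behaviors):
    """Apply each observed behavior's adjustment to the running score."""
    for behavior in observed_behaviors:
        score += POINT_ADJUSTMENTS.get(behavior, 0)
    return score

score = update_score(1000, ["donates_blood_or_money", "spreads_rumors_online"])
print(score)  # 955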


Methods in which citizens can gain or lose social credit score and possible rewards and punishments.
https://www.visualcapitalist.com/the-game-of-life-visualizing-chinas-social-credit-system/

There are serious ethical considerations with this system. Much like having a high credit score in the U.S., there are perks to having a high social credit score in China, including easier access to financial services, forgoing deposits on certain rental items, better visibility on dating websites, and better travel deals. There are also serious ramifications, such as restrictions on trains and airplanes, family members being blocked from prestigious schools, denial of social messaging apps, or being unable to take out a mortgage on a house. And that is not to mention the social ostracization that comes from the disapproval of people around you. An individual’s livelihood is inextricably linked to their social credit score. Which restrictions should be considered acceptable, and which should not?

The Chinese government is constantly collecting data about individuals, including financial records, social media activity, credit history, health records, online transactions, tax payments, social networks, and location information. There is no option to opt out of the collection process, alter or amend records, or voice concerns over abuse or misuse of the data. There are no laws or regulations that protect individuals’ privacy from the government’s collection and use of its citizens’ data. Where should the line for privacy and legal controls be drawn?

This system is made possible by recent advances in technology, big data, and artificial intelligence, such as facial recognition. In addition, the surveillance state created by millions of security cameras, internet monitoring, interconnected IoT devices, and neighborhood watchdogs ensures that nothing is missed. The government aggregates all of this information into databases and logs that record an individual’s whereabouts and actions; when something new happens, the program automatically sends a notification to the system to ensure that no action goes unrewarded or unpunished.
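The pipeline that paragraph describes, with many sources feeding one central log and each new entry triggering a notification, can be sketched in a few lines of Python. This is purely illustrative: the actual system’s architecture is not publicly documented, and every name here is invented.

# Purely illustrative: aggregating observations from many sources into one
# log and firing an automatic notification on each new entry.
from datetime import datetime

event_log = []  # central log of observations

def notify(event):
    # Stand-in for the automatic notification described in the text.
    print(f"New event for {event['person']} from {event['source']}: {event['what']}")

def record_event(person_id, source, description):
    """Append an observation to the central log and notify the scoring system."""
    event = {
        "person": person_id,
        "source": source,  # e.g. "camera_17", "social_media_monitor"
        "what": description,
        "when": datetime.now(),
    }
    event_log.append(event)
    notify(event)

record_event("citizen_42", "camera_17", "entered subway station at 8:03 am")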


Artificial intelligence vision recognizing people and objects in the frame.
https://www.pri.org/stories/2018-08-14/laboratory-far-west-chinas-surveillance-state-spreads-quietly

Although this Social Credit System seems to infringe on many rights Westerners are accustomed to, Eastern societies tend to focus more on the benefit of the group than of the individual and are more likely to give up certain rights in order to maintain society’s well-being. Several studies have examined why a proportion of Chinese people support the Social Credit System (https://doi.org/10.1177/1461444819826402). This shows the importance of cultural norms when considering privacy laws, since people of different backgrounds and attitudes may weigh the balance of security and privacy differently.

The main concern this system poses is its use to silence societal grievances or opposition to the government. There have been many instances of public outcry against government action and policy, such as political corruption and bribery, poorly built buildings and infrastructure, oppression of ethnic minority groups, and limits on speech critical of the government. The most notable might be the government’s initial (and possibly continuing) cover-up of the coronavirus and its persecution of the initial whistleblower, Li Wenliang, for “making false comments” and “spreading rumours” that had “severely disturbed the social order”. When Chinese social media sites flooded with anger at how the government had failed to respond to the news appropriately, the comments were quickly censored. Others may have been deterred from speaking out for fear of losing social credit points.

Indeed, many political figures, social advocates, and ordinary citizens who have not been in line with the government’s political agenda have been threatened, denied basic services, falsely imprisoned and tortured, coerced into giving false testimony, or disappeared without a trace. Combined with the omnipresent surveillance state and the unforgiving Social Credit System, there isn’t a single individual who isn’t at the mercy of the government’s new way of controlling society.

Sources:
https://www.washingtonpost.com/news/theworldpost/wp/2018/11/29/social-credit/
https://www.wired.com/story/china-social-credit-score-system/
https://www.youtube.com/watch?v=0cGB8dCDf3c
https://www.bbc.com/news/world-asia-china-53077072