When is faking it alright?
by Randy Moran | March 5, 2021


Photo by Markus Winkler on Unsplash

The AI news has been littered with deepfake articles over the last couple of years. Some articles are about using the technology for fun (CNet), some use it to demonstrate technical capability, as with the recent Tom Cruise fakes (Piper), and some use it maliciously to harm or to sway opinions and rally opposition (“Malicious use of deepfakes is a threat to democracy everywhere”). All of this points to the fact that AI is just technology, a tool to be used for either good or bad purposes.

The recent announcement of WE-FORGE (Dartmouth College) takes faking in a whole different direction. WE-FORGE can generate fake but realistic documents, not for fun and not quite for malicious reasons, but to obfuscate actual content for the purposes of counter-espionage. This AI approach can be used to hide corporate or national-security documents within the noise of numerous other fake documents. As the announcement points out, this noise tactic was used successfully in WWII to thwart efforts to discover upcoming military maneuvers in Sicily.
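To make the decoy idea concrete, here is a minimal sketch in Python. It is emphatically not the actual WE-FORGE algorithm (which selects and varies concepts far more cleverly); it simply swaps sensitive terms in an invented document for plausible same-category alternatives, so that the genuine version hides among fakes:

```python
import random

# Toy illustration of the decoy-document idea, NOT the WE-FORGE algorithm.
# Swap each sensitive term for a plausible same-category alternative so the
# fakes read sensibly but mislead anyone who steals them.

# Hypothetical term categories; a real system would infer them from the text.
ALTERNATIVES = {
    "titanium": ["aluminum", "carbon fiber", "stainless steel"],
    "500": ["350", "750", "900"],
    "March": ["January", "June", "October"],
}

REAL_DOC = "The casing is titanium, rated to 500 bar, shipping in March."

def forge(doc: str, rng: random.Random) -> str:
    """Produce one decoy by replacing every sensitive term."""
    for term, options in ALTERNATIVES.items():
        doc = doc.replace(term, rng.choice(options))
    return doc

rng = random.Random(42)
for fake in {forge(REAL_DOC, rng) for _ in range(8)}:
    print(fake)
# The real document can now sit among many internally consistent variants,
# and an adversary has no cheap way to tell which one is genuine.
```

The point of the exercise is the asymmetry it creates: generating decoys is cheap for the defender, while verifying which document is real is expensive for the thief.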

The above announcement leads to thinking about its application for obfuscating (hiding) individuals’ activity from tracking. As we have reviewed in the W231 coursework, individuals have limited to no control over their data in today’s web, social, and application landscape. We have seen that privacy policies, for the most part, serve and cover a firm more than the individual. True protective legislation is years away; it will still be guided by procedure over individual rights and is likely to be fought hard by highly profitable tech firms. The procedures to control your own information are laborious and don’t completely provide the controls one would want. The only choices are to live with it so you can utilize the service, or to stop using the service altogether, limiting one’s ability to connect and participate in the good aspects of the technology.

Helen Nissenbaum, whose privacy framework we read in week 5 in “A Contextual Approach to Privacy Online” (Nissenbaum 32-48), co-authored, with Finn Brunton, a book called “Obfuscation: A User’s Guide for Privacy and Protest” (Brunton and Nissenbaum). In that book, they define obfuscation as “the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.” They outline numerous variations for obfuscating your identity: chaffing, location spoofing, disinformation, and so on. As they state in chapter three, “privacy is a multi-faceted concept, and a wide range of structures, mechanisms, rules, and practices are available to produce it and defend it.” There are legal mechanisms, there are technical solutions, and there are application options. Beyond these, their goal in that chapter is to make the case for individual obfuscation: producing noise that looks like normal activity so as to hide the actual activity. Obfuscation gives an individual a way to camouflage activity when other protections fail. It is similar to the WE-FORGE process above, but for individuals.

To enable that strategy, Nissenbaum partnered with Daniel Howe and Vincent Toubiana to develop a tool called “TrackMeNot” (Howe et al.) that puts these ideas into practice. It is a browser plugin for Google Chrome and Mozilla Firefox that obfuscates your search activity: it generates random search queries through several search services (Google, Bing, Yahoo, etc.) to hide an individual’s actual search history. One could spend the time to do this manually, but the systematic approach is much more efficient.
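The mechanism is simple enough to sketch. The Python below is a minimal illustration of the idea, not TrackMeNot’s actual implementation (the real extension reportedly seeds its query terms from RSS feeds and simulates browsing behavior); the seed terms, pacing, and URL pattern here are merely plausible stand-ins:

```python
import random
import time
from urllib.parse import urlencode

# Minimal sketch of search-noise generation, NOT TrackMeNot's real code.
SEED_TERMS = ["weather", "banana bread recipe", "python tutorial",
              "flight prices", "local news", "how to tie a tie"]

def fake_query(rng: random.Random) -> str:
    # Combine one or two seed terms into a plausible-looking query.
    return " ".join(rng.sample(SEED_TERMS, rng.randint(1, 2)))

rng = random.Random()
for _ in range(5):
    url = "https://www.google.com/search?" + urlencode({"q": fake_query(rng)})
    print("would send:", url)          # a browser extension would fetch this
    time.sleep(rng.uniform(0.5, 2.0))  # randomized pacing resists filtering
```

Real searches then hide inside this synthetic stream: a profiler who logs every query can no longer tell which ones reflect the user’s actual interests.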

While legislation may come in time, and individuals may gain control of their data eventually, they can hide their activity now. This is not necessarily going to be sought out by everyone. It will likely only be used by those aware of the lengths organizations and companies have gone to in order to identify and categorize users. As the authors put it, “it’s a small revolution” for those interested in mitigating and defeating surveillance. To a common individual, it’s an effort they don’t care to spend. To the few aware individuals, it’s one small step toward regaining control of one’s own privacy. The browser plugins provide obfuscation for at least this one specific aspect of user activity.

Still, I can see additional applications being developed in the future, in social networking apps and other service areas, utilizing AI to generate the very noise that the applications’ AI mechanisms are trying to capture and identify. Just as hackers use AI to infiltrate networks (F5), AI is now being used by software (IBM) to identify and counter those attacks. Most folks know AI/ML is being used to catalog and categorize individuals and their activity; the next obvious step is to use the technology to thwart that activity for those who are concerned. To some, including the companies capturing the data, it may seem wrong to pollute the data. Still, it is justified and warranted to those individuals who care about their privacy, since laws have not caught up to stop the proverbial data peeping-toms. Those firms will simply have to look at more information, which is no different from what WE-FORGE is trying to accomplish with its counter-espionage tactics.

 

Bibliography

  • Brunton, Finn, and Helen Nissenbaum. Obfuscation: A User’s Guide for Privacy and Protest. The MIT Press, 2015.
  • CNet. “26 Deep fakes that will freak you out.” CNet Pictures, 15 Jan 2020,
    https://www.cnet.com/pictures/26-deepfakes-that-will-freak-you-out/.
  • Dartmouth College. “Cybersecurity researchers build a better ‘canary trap.’” EurekAlert, American Association for the Advancement of Science, 1 Mar 2021, https://www.eurekalert.org/pub_releases/2021-03/dc-crb022621.php.
  • F5. “AI-powered Cyber Attacks.” 2020, https://www.f5.com/labs/articles/cisotociso/ai-powered-cyber-attacks.
  • Howe, Daniel, et al. “TrackMeNot.” Trackmenot.io, 2016, http://trackmenot.io/.
  • IBM. “IBM Security.” Artificial intelligence for a smarter kind of cybersecurity, 2021, https://www.ibm.com/security/artificial-intelligence.
  • “Malicious use of deepfakes is a threat to democracy everywhere.” The Startup, 2019, https://medium.com/swlh/malicious-use-of-deepfakes-is-a-threat-to-democracy-everywhere-51a020bd81e.
  • Nissenbaum, Helen. “A Contextual Approach to Privacy Online.” Daedalus, vol. 140, no. 4, Fall 2011, pp. 32-48, https://www.amacad.org/sites/default/files/daedalus/downloads/Fa2011_Protecting-the-Internet-as-Public-Commons.pdf.
  • Piper, Daniel. Creative Bloq, 1 Mar 2021,
    https://www.creativebloq.com/news/tom-cruise-deepfakes

Concerns with Privacy in Virtual Reality
by Simran Bhatia | February 26, 2021

Jamie was only 13 when she started playing her first game in Virtual Reality (VR). She loved it because she was able to create her own avatar and do virtual fist bumps with people from across the world, all while just wearing a headset and some haptic clothes. However, Jamie didn’t know that her “just 20 minute” VR game session was recording 2 million data points about her. She also didn’t know that the owners of the game she was playing were selling her data to health insurance companies. Years later, when Jamie applied for health insurance, she was turned down because her VR body-movement data classified her as having a high likelihood of chronic regional pain syndrome. While this is a made-up situation, it illustrates the power of data collected through VR. As of 2021, there are no regulations or standards for data collected through VR, which is alarming given that the VR market is expected to hit $108 billion this year.

With VR expanding into a range of different fields, from healthcare to entertainment, there is high concern about the privacy of the data being collected. More applications for VR mean a more diverse portfolio and a greater volume of data being collected on each user, currently with no regulation.

What is VR?

Virtual Reality (VR) is a technology that creates simulations of environments and enables users to interact in these environments through different devices. In the past, industries such as aerospace and defense used VR for training and flight-simulation purposes, but more recently it has become a popular gaming tool, especially in the post-pandemic world. It has been pitched as the next great communications platform and user interface.

What does privacy in AR/VR mean?

As the saying goes, with great power comes great responsibility: the fascinating technology of VR brings with it an unprecedented ability to track body motions and consequently collect data on its users. Research on the identifiability of users in VR has shown that, for specific tasks, a system was able to identify 95% of users correctly when trained on less than 5 minutes of tracking data per person (Miller et al., 2020). Other research shows that with combined data on eye gaze, hand position, height, head direction, and biological and behavioral characteristics, users can be identified with 8 to 12 times better accuracy than chance.

With each individual’s unique patterns of movement, anonymizing VR tracking data is nearly impossible, at least so far. This is because no person has the same hand movement as another. Like an IP address, zip code, or voice print, VR tracking data should be considered “personally identifiable information” because it can be used to trace an individual’s identity. This type of data is similar to data in health and medical research, such as a DNA sequence, which even when stripped of names and other identifying information can be traced back to individuals through simple compilation with other public data sources. The reason for concern is that unlike medical data, VR tracking data is currently unregulated in how it is collected, used, and shared, as it is not monitored by any external entity.
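A toy experiment shows why such identification works. The sketch below uses entirely synthetic “motion summaries” (invented numbers, not the features or data from the studies cited above): because each simulated user has stable personal statistics, even an off-the-shelf classifier separates them almost perfectly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic illustration of motion-based identification; the features and
# noise levels are invented, not taken from the cited VR studies.
rng = np.random.default_rng(0)
n_users, samples_per_user = 20, 50

X, y = [], []
for user in range(n_users):
    height = rng.normal(1.7, 0.1)   # headset height, meters
    reach = rng.normal(0.7, 0.05)   # arm reach, meters
    tempo = rng.normal(1.0, 0.2)    # characteristic movement tempo
    for _ in range(samples_per_user):
        # each sample is one short session's noisy view of stable traits
        X.append([height + rng.normal(0, 0.01),
                  reach + rng.normal(0, 0.01),
                  tempo + rng.normal(0, 0.05)])
        y.append(user)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.0%} "
      f"vs {1 / n_users:.0%} by chance")
```

Stripping the user IDs from such data accomplishes nothing, because the body itself is the identifier.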

With Oculus currently dominating the hardware space in the VR industry, another area of concern is Oculus’ announcement that it will require a Facebook account for all users. This means that users are forced to accept Facebook’s Community Standards, that they can no longer remain completely anonymous on their device, and that Facebook will own all their VR tracking data along with their social media data from Facebook, Instagram, and WhatsApp. This gives Facebook, as a company, a monopoly on most parts of a user’s data.



Another privacy threat is posed by the setup of VR devices, with densely packed cameras, microphones, and sensors that collect data about a user’s environment. That environment, which can be a user’s home, office, or community space, gets exposed as well.

What can be done in the future?

Privacy in VR will depend on concrete action now, not just from one person or organization, but from the community. VR enthusiasts and technology reviewers need to prioritize privacy-conscious practices and encourage the community to push for regulation of VR tracking data. VR developers need to take steps to make their work transparent, yet secure. Most importantly, industry leaders need to introduce principles for monitoring and creating transparency around each part of the VR data process (collection, aggregation, processing, analysis, and storage), treating it with the utmost importance and security. As an industry practice, only data necessary for the core functionality of the VR device or its software should be collected; moreover, each data point collected should be purposeful, and companies should be transparent about the sensitive functionality of the data they collect.

Responsibility for the next step lies on the shoulders of VR users. Users need to be more aware of what they are consenting to when they sign up for VR games or other applications. Novice users, like Jamie, need to read the current Terms and Conditions for each part of the VR process and raise their voices to the industry if they are not comfortable with the data being collected. Users need to become aware of their rights over VR tracking data now, or it might be too late.

References

Bailenson, J. (2018). Protecting Nonverbal Data Tracked in Virtual Reality. JAMA Pediatrics, 172(10), 905. https://doi.org/10.1001/jamapediatrics.2018.1909

Erlich, Y., Shor, T., Pe’er, I., & Carmi, S. (2018). Identity inference of genomic data using long-range familial searches. Science, 362(6415), 690–694. https://doi.org/10.1126/science.aau4832

Oculus. (2020, August 27). Facebook Horizon Invite-Only Beta Is Ready For Virtual Explorers | Oculus. Oculus Blog. https://www.oculus.com/blog/facebook-horizon-invite-only-beta-is-ready-for-virtual-explorers/

Pfeuffer, K., Geiger, M. J., Prange, S., Mecke, L., Buschek, D., & Alt, F. (2019). Behavioural Biometrics in VR. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300340

The Yale Tribune. (2019, April 12). VR/AR Privacy Concerns Emerging with the Field’s Development. https://campuspress.yale.edu/tribune/vrar-privacy-concerns-emerging-with-the-fields-development/

Miller, M.R., Herrera, F., Jun, H. et al. Personal identifiability of user tracking data during observation of 360-degree VR video. Sci Rep 10, 17404 (2020). https://doi.org/10.1038/s41598-020-74486-y

Diez, M. (2021, January 29). Virtual Reality Will Be A Part Of The Post-Pandemic Built World. Forbes. https://www.forbes.com/sites/forbesrealestatecouncil/2021/02/01/virtual-reality-will-be-a-part-of-the-post-pandemic-built-world/?sh=a08553348ded

Should TikTok be Banned?
by Mikayla Pugel | February 26, 2021

In the last couple of years, there have been many concerns about data collection and processing by large technology companies; however, the issues were always kept in-house. With the creation and rise of TikTok, the issue has been taken to another level, since the data collected is leaving America. Throughout this article, I will discuss concerns about TikTok and reasons why some people want it banned, as well as walk through some reasons why the concerns may be misplaced and why the ban has not happened yet.

First, many American entities have already banned TikTok from their workers’ devices. Some of these groups include the Democratic National Committee, the Republican National Committee, the Coast Guard, the Marine Corps, and the TSA (Meisenzahl). Leaders from these groups are worried about the app gaining sensitive information from the devices it is downloaded to. These worries are not without warrant, as Apple’s iOS 14 caught TikTok secretly accessing users’ clipboards (Doffman, July 09). Other tech companies were caught doing the same thing, but TikTok was the only second-time offender. The concern about TikTok gathering sensitive information stems not just from its being a vast tech company, but mainly from its being a Chinese-based company, which raises questions about where the data may end up.

TikTok collects an abundance of data from all over the world, and many foreign leaders are concerned that the data may fall into the hands of Communist China. The company has made many claims that it would never give up user data to its government; however, under the Chinese National Intelligence Law of 2017, “any Chinese company can be drafted into espionage, and a company could be forced to hand over the data” (Ghaffary). These concerns seem validated, and the government of India has already taken the step of banning the Chinese company (Meisenzahl). However, the ban increased conflict between the two countries, and there would be similar fallout if America were to take similar steps.

The US and China already have their issues, and there are concerns that if the US were to ban TikTok, the countries’ relationship would continue to decline. There is the fear of retaliation from China, as well as of other countries imposing similar bans on all large tech companies, most of which are American (Ghaffary). The Chinese government already bans major US tech companies and has worked to create copies of companies like Google, Facebook, and Uber. Americans are concerned that if countries become paranoid about foreign companies owning their data, the American economy will be hit hard.

There are many data collection and storage concerns, as there are with most technology companies; however, TikTok has been at the center of one main data issue: the collection and storage of data from children. The US has many laws on what data can be collected from children under a certain age, and since TikTok’s main user base is children, the company has been at the head of a lot of controversies. TikTok recently agreed to pay $5.7 million in a settlement with the Federal Trade Commission over allegations of illegally collecting personal data from children (Doffman, August 11). The FTC has also accused TikTok of exposing the locations of young children and of not complying when instructed to delete certain information collected from minors (Doffman, August 11).

Altogether, while there are many concerns about data collection and processing by foreign companies, the largest concern may be the fear of censorship and manipulation of public opinion within the app (Matsakis). As we have seen with the power Facebook holds over public opinion, TikTok could someday hold as much power, and it would be in the hands of the Chinese government. Many leaders are concerned about this power; however, banning TikTok would not necessarily free the country from concerns about social media manipulation.

In conclusion, there are valid reasons to be concerned about TikTok, but there are also a vast number of reasons not to ban it. Many of the concerns brought up could be applied to most American technology companies, and because of this, I do not believe the US government is ever going to do anything to remove TikTok’s place in America. Our government should instead take a step further and look at policies that apply to all data collected by any company, or at how to decrease internet manipulation through education of our citizens, as it seems hypocritical to bash TikTok when we have Facebook to claim as ours.

References:
Doffman, Z. (2020, August 11). TikTok users - here’s why you should be worried. Retrieved February 22, 2021, from https://www.forbes.com/sites/zakdoffman/2020/08/11/tiktok-apple-iphone-google-android-data-security-update-warning-investigation-trump-ban/?sh=3b04029f3436
Doffman, Z. (2020, July 09). Yes, TikTok has a serious China problem - here’s why you should be concerned. Retrieved February 22, 2021, from https://www.forbes.com/sites/zakdoffman/2020/07/09/tiktok-serious-china-problem-ban-security-warning/?sh=2445db3e1f22
Ghaffary, S. (2020, August 11). Do you really need to worry about your security on TikTok? Here’s what we know. Retrieved February 22, 2021, from https://www.vox.com/recode/2020/8/11/21363092/why-is-tiktok-national-security-threat-wechat-trump-ban
Matsakis, L. (n.d.). Does TikTok really pose a risk to US national security? Retrieved February 22, 2021, from https://www.wired.com/story/tiktok-ban-us-national-security-risk/
Meisenzahl, M. (2020, July 13). Trump is considering banning Chinese social media app TikTok. See the full list of countries, companies, and organizations that have already banned it. Retrieved February 22, 2021, from https://www.businessinsider.com/tiktok-banned-by-countries-organizations-companies-list-2020-7

What’s your data worth?…
by Anonymous | February 26, 2021

…asks Alexander McCaig, CEO of Tartle, at the end of an introductory video on the company’s website. According to McCaig, commercial enterprises around the globe make billions of dollars every year by selling their customers’ data (McCaig, 2020). Revenues generated by sales to third parties likely pale in comparison to the enterprise value created through primary use of data to generate customer insights, with the potential to increase revenues and lower costs. Despite providing consent, some customers may not be fully aware of how and what type of data about them is being collected and whether or not it is being sold.

Data privacy laws passed in recent years (e.g. GDPR, CCPA) have provided consumers with better information and greater control over their data. The laws have forced private enterprises and public institutions to offer greater transparency into their data collection, processing, usage, and selling practices. Regulators hope that these new laws will increase the general population’s awareness of how individuals’ data is being used. Furthermore, to the extent that the policies are effective, customers are likely to attribute greater, but still unknown, value to their own data.

Tartle, along with a handful of other private companies, believes that data is a precious asset whose value can be determined in the open market. Tartle’s success in helping individuals monetize their data ‘asset’ through a secure and far-reaching marketplace connecting eager buyers and motivated sellers, at scale, may give society a big hand in equalizing the data privacy playing field.

Ignorance is Bliss, Seduction is Powerful

In an earlier blog post, Robert Hosbach discusses the “privacy paradox,” a phrase used to describe the significant discrepancy between stated concerns about privacy and actions taken to protect it (R. Hosbach, 2021). Lack of action is attributable to a number of factors, with individual ignorance being a meaningful contributor. According to one paper, up to 73% of American adults believe that the mere presence of a privacy policy implies that their data will not be misused (J. Turow et al., 2018). What further exaggerates complacency are deliberate efforts by commercial enterprises to lead consumers into a sense of resignation by relying on four tactics of seduction: placation, diversion, misnaming, and jargon (N. A. Draper et al., 2019). Consumers need more help, and society needs to do more.

Evolving Policy Landscape, the “Carrot” or the “Stick”

“The new privacy law is a big win for data privacy,” says Joseph Turow, a privacy scholar and professor of communication at the Annenberg School for Communication at the University of Pennsylvania (Knowledge@Wharton, 2019). While 2020 was viewed as a big year for privacy professionals, 2021 may be even bigger. In addition to California passing “CCPA 2.0” late last year, a large number of other states have proposed new legislation. Moreover, with the new administration taking office in January, some privacy advocates hope that 2021 will be the year in which the U.S. passes GDPR-like federal privacy legislation (Husch Blackwell LLP, 2021). Stricter privacy laws may serve as an effective “stick,” but where is the “carrot”?

“Change Brings Opportunity”

This famous quote from Nido Qubein is used frequently by business leaders facing uncertainty. While evolving regulatory frameworks are likely to disrupt businesses for the benefit of consumers, they are unlikely to slow the exponential growth of data. One McKinsey & Co study points to 300% growth in IoT, to 43 billion data-producing devices by 2023, and a 7-fold increase in the number of digital interactions by 2025 (McKinsey & Co, 2019). Evolving privacy laws and greater customer awareness, combined with our ever-increasing reliance on data, have given birth to companies like Tartle. While motivated by financial gain, these companies are also purpose-driven, with the potential to reduce income inequality across the globe and put monetary value on an individual’s data privacy. So, ask yourself: what is your data worth to you, and would you be willing to sell it?

Sources

McCaig, Alexander. Tartle.co (2020, January 8). https://www.youtube.com/watch?v=rslKr3W-Ex8&feature=youtu.be

Maintaining Privacy in Smart Home. Hosbach, Robert (2021, February 19). Retrieved from https://blogs.ischool.berkeley.edu/w231/blog/

Turow, Joseph & Hennessy, Michael & Draper, Nora. (2018). Persistent Misperceptions: Americans’ Misplaced Confidence in Privacy Policies, 2003–2015. Journal of Broadcasting & Electronic Media. 62. 461-478. 10.1080/08838151.2018.1451867.

Draper NA, Turow J. The corporate cultivation of digital resignation. New Media & Society. 2019;21(8):1824-1839. doi:10.1177/1461444819833331

Your Data Is Shared and Sold…What’s Being Done About It?. Knowledge@Wharton (2019, October 28). Retrieved from https://knowledge.wharton.upenn.edu/article/data-shared-sold-whats-done/

The Year To Come In U.S. Privacy & Cybersecurity Law, Husch Blackwell LLP (2021, January 28). Retrieved from https://www.jdsupra.com/legalnews/the-year-to-come-in-u-s-privacy-9238400/

Growing opportunities in the Internet of Things, McKinsey & Co (2019, July 29). Retrieved from
https://www.mckinsey.com/industries/private-equity-and-principal-investors/our-insights/growing-opportunities-in-the-internet-of-things?cid=eml-web

The DNA Race
Graham Schweer | February 5, 2021

In the news
The January 31, 2021 episode of 60 Minutes sounded alarm bells about a foreign country’s interest in collecting Americans’ health data through offers to help the United States’ response to the COVID-19 pandemic.

On November 30, 2020, Newsweek published an article stating that a certain foreign country would begin requiring digital, genetic test-driven health certificates from airline passengers before boarding flights into that country.

A December 24, 2020 article published by Roll Call described a section of the omnibus spending package passed by the U.S. Congress earlier that week which required the Government Accountability Office to investigate consumer genetic tests from companies such as 23andMe and Ancestry.com and their connections to a foreign country.

What do all of these recent stories have in common?

China is the foreign country that is the subject of concern, and the stories highlight China’s ambition to build a global DNA database.

China’s DNA database is estimated to have samples from 140 million people. The United States has the next largest database with 14 million profiles.

DNA Databases are not new

Based on information from Interpol published in 2014 by the DNA Policy Initiative, 70 countries have created DNA databases. The United Kingdom built the first DNA database in 1995. The United States followed shortly thereafter, and China’s DNA data gathering began in the early 2000s. In all cases, DNA databases were created for criminal investigation purposes.

DNA database proponents argue their merit with respect to solving crimes. Many high-profile cold cases have been solved using DNA databases. One recent example is the capture of the Golden State Killer, whose identity was uncovered via a match of DNA evidence with information stored in a private company’s DNA database. In fact, China expanded its domestic DNA collection program after using DNA data from a 2016 arrest to solve a murder from 2005.
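Mechanically, a database hit is just profile comparison. The sketch below is a schematic illustration, with fabricated profiles and far fewer markers than the roughly 20 STR core loci real forensic systems compare, of matching a crime-scene DNA profile against stored entries:

```python
# Schematic DNA-profile matching; all profiles are invented and simplified.
# Real forensic systems compare ~20 STR loci and weigh matches statistically.
database = {
    "profile_0001": {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)},
    "profile_0002": {"D3S1358": (16, 17), "vWA": (17, 18), "FGA": (20, 22)},
}
crime_scene = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}

def matching_loci(a: dict, b: dict) -> int:
    """Count loci where both alleles agree."""
    return sum(1 for locus in a if locus in b and set(a[locus]) == set(b[locus]))

for pid, profile in database.items():
    hits = matching_loci(crime_scene, profile)
    print(pid, f"{hits}/{len(crime_scene)} loci match")
```

The same mechanics explain the privacy stakes: whoever holds the database can run this comparison against anyone in it, and partial matches can implicate relatives who never submitted a sample.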

Made in China 2025

China’s current efforts to build a DNA database appear to extend beyond criminal investigation use cases, as well as beyond China’s borders. Under the “Made in China 2025” strategic plan announced in 2015, China stated its intention to become a major player in the global biotechnology sector. One example of the strategy at work is the acquisition of a US genetic sequencing company by a Chinese government-funded company, Beijing Genomics Institute (BGI), which provided BGI with a database of Americans’ DNA.

U.S. officials interviewed in the 60 Minutes episode referenced at the beginning of this post believe that China’s appetite to grow their DNA database is related to their aspiration to become the world’s leader in genetic innovation, disease treatments and vaccines. The U.S. officials contend that China sees the expansion of their DNA database as directly related to their increased statistical chances at discovering genetic breakthroughs.

Ethical considerations
Although viewed with skepticism from a vantage point inside the United States, China’s intentions in building a global DNA database may not be malevolent. However, its current approach is opaque, and the scales are tipped significantly in China’s favor.

China should take the following steps to increase transparency into its DNA database:
  • Clarify what data has already been collected and is stored in the DNA database;
  • Inform people that their DNA data has been (will be) added to the DNA database;
  • Give people the option to remove (not contribute) their DNA;
  • Do not seek consent from people who are under duress and seeking testing or treatment of a health care issue;
  • Establish safeguards to ensure DNA data is used only for health improvement-related purposes and not used to harm them in other ways; and
  • Eliminate restrictions that only permit a one-way flow of DNA data and share DNA database records with other countries and healthcare institutions.

Unless China takes the steps recommended above, skepticism of its intentions with respect to its global DNA database program will only intensify, forcing other countries, including the United States, to join the DNA race.

References:
1. Wertheim, J. (2021, January 31). China’s push to control Americans’ health care future. Retrieved February 05, 2021, from https://www.cbsnews.com/news/biodata-dna-china-collection-60-minutes-2021-01-31/
2. Chang, G. (2020, November 30). China wants your DNA - and it’s up to no good: Opinion. Retrieved February 05, 2021, from https://www.newsweek.com/china-wants-your-dna-its-no-good-opinion-1550998
3. Ratnam, G. (2020, December 24). Hey, soldiers and spies – think twice about that home genetic ancestry test. Retrieved February 05, 2021, from https://www.rollcall.com/2020/12/24/hey-soldiers-and-spies-think-twice-about-that-home-genetic-ancestry-test/
4. Benson, T. (2020, June 30). DNA databases in the U.S. and China are tools of racial oppression. Retrieved February 05, 2021, from https://spectrum.ieee.org/tech-talk/biomedical/ethics/dna-databases-in-china-and-the-us-are-tools-of-racial-oppression
5. Global summary. (n.d.). Retrieved February 05, 2021, from http://dnapolicyinitiative.org/wiki/index.php?title=Global_summary#:~:text=According%20to%20Interpol%2C%20seventy%20countries,9%25%20of%20the%20population
6. St. John, P. (2020, December 08). The untold story of how the Golden State Killer was found: A covert operation and private DNA. Retrieved February 05, 2021, from https://www.latimes.com/california/story/2020-12-08/man-in-the-window
7. Wee, S. (2020, June 17). China is collecting DNA from tens of millions of men and boys, using U.S. equipment. Retrieved February 05, 2021, from https://www.nytimes.com/2020/06/17/world/asia/China-DNA-surveillance.html
8. Atkinson, R. (2019, August 12). China’s biopharmaceutical strategy: Challenge or complement to U.S. industry competitiveness? Retrieved February 05, 2021, from https://itif.org/publications/2019/08/12/chinas-biopharmaceutical-strategy-challenge-or-complement-us-industry
9. PRI Staff. (2019, March 15). Does China have your DNA? Retrieved February 05, 2021, from https://www.pop.org/does-china-have-your-dna-2/

Freedom of Speech vs Sedition
Gajarajan Nagarajan | January 29, 2021


2021 storming of the United States Capitol

Ideas that offend are getting more prominent due to divisive and hateful rhetoric fostered by major political parties, their associated news channels, and ever-growing, unmonitored social media platforms. As the US reels from the recent storming of the US Capitol, passionate debates have commenced across the country on who the enforcers can be. Freedom of speech does have its limits when it comes to threats, racism, hostility, and violence, including acts of sedition. Hate crime laws are constitutional so long as they punish violence or vandalism.

The US First Amendment protects nearly all types of speech, and hence hate speech gets amplified in the new digital era, where millions of followers can be induced or swayed by propaganda. Under the First Amendment, there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas.

Weaponization of Social Media

The Jan 6th event at the US Capitol did trigger an important change across all major social media companies and their primary cloud infrastructure providers. Twitter, Facebook, YouTube, Amazon, Apple, and Google banned President Trump and scores of his supporters from their platforms for inciting violence. How big will this challenge remain going forward? Aren’t these companies the original enablers and accelerators, with no effective control for violence prevention? Should large media companies take the law into their own hands (or onto their platforms) while state and federal governments take a pause on moderation? Or is this something that needs action by society, as we the people are the creators of the pervasive and polarizing conspiracy-theory content in American society?

Private companies have shown themselves able to act far more nimbly than our government, imposing consequences on a would-be tyrant who had until now enjoyed a corrosive degree of impunity. But in doing so, these companies have also shown a power that goes beyond that of many nation-states, and without democratic accountability. Technology companies have employed AI/ML and NLP tools to generate more visitors and longer user engagement on their platforms, which has been a breeding ground for hate groups. The negative aspects of this unilateral power exercised by technology companies can set a precedent, only to be exploited by the enemies of freedom of speech around the world. Dictators, authoritarian regimes, and those in power can do extreme harm to democracy by colluding with or forcing technology companies to bend the rules for their political gain.

In a democratic government, public opinion impacts everything. It is all-important that truth should be the basis of public information. If public opinion is ill-formed, poisoned by lies, deception, misrepresentations, or mistakes, the consequences could be dire. Government, which is the preservative of the general happiness and safety, cannot be secure if falsehood and malice are allowed to rob it of the confidence and trust of the people.

Looking back into history, combined with data science, may provide some options to protect the future of our democracy.

  • The Sedition Act of 1918 covered a broad range of offenses, notably speech and expression of opinion that cast the government or the war effort in a negative light. In 2007, a bill named the “Violent Radicalization and Homegrown Terrorism Prevention Act” was sponsored by Representative Jane Harman (Democrat from California). The bill would have amended the Homeland Security Act to add provisions to prevent and control homegrown terrorism and to establish a grant program to prevent radicalization. Congress could revisit the above bill with bipartisan support.
  • Section 3 of the 14th Amendment provides guidelines, including the prohibition of current or former military officers, along with current and former federal and state public officials, from serving in a variety of government offices if they have engaged in insurrection or rebellion against the United States Constitution.
  • Social media bans are a key defense mechanism and need to be nurtured, enhanced, and implemented across all democratic nations and beyond. Strong enforcement of social media bans effectively neutralizes the ability to drive conversation, to reach wider audiences for recruitment, and, perhaps most importantly, to monetize anger and distrust, as conflict entrepreneurs do.
  • Consumer influence on large companies has a major role in regulating nefarious online media houses. For example, de-platforming pressure to turn off cloud and app store access to Parler (a competitor to Twitter), pressure on publishing houses to block book proposals, and FCC regulation of podcasts may provide manageable checks on both extreme left-wing and right-wing fanaticism and fear mongering.

Photo credits:

https://www.latimes.com/world-nation/story/2021-01-15/capitol-riot-police-veterans-extremists
https://www.amazon.com/LikeWar-Weaponization-P-W-Singer/dp/1328695743

Ethical Implications with Autonomous Vehicles
Surya Gutta | January 29, 2021

Introduction
Autonomous vehicles are poised to revolutionize the transportation industry, as they could dramatically reduce automotive accidents. Apart from saving human lives, they could save billions of dollars in accident damages in the U.S.[1] They could also give people ample free time and increase productivity by removing time wasted driving. The cost of ride-sharing would also decrease, as labor accounts for roughly 60%[2] of a taxi business’s total cost.

Autonomous vehicles use Radar or LiDAR sensor data to detect obstacles, such as human beings, supporting Advanced Driver Assistance Systems (ADAS). ADAS allows a vehicle to operate autonomously in an environment containing other vehicles, bicyclists, pedestrians, traffic signals, and obstacles. Autonomous vehicles process large amounts of data generated by these sensors, along with real-time traffic data and personal data that includes locations and start and stop times.
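For intuition, here is a deliberately simplified sketch of the kind of decision loop ADAS software runs over range-sensor data. Production systems fuse many sensors with learned perception models; nothing below reflects any particular vendor’s logic, and the thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class RangeReading:
    distance_m: float   # distance to the nearest object ahead
    closing_mps: float  # closing speed toward it, meters/second

def should_brake(r: RangeReading, margin_s: float = 2.0) -> bool:
    """Brake when the object would be reached within the safety margin."""
    if r.closing_mps <= 0:      # object is not getting closer
        return False
    return r.distance_m / r.closing_mps < margin_s  # time-to-collision test

# Simulated frames: a distant slow-closing car, then a pedestrian 12 m ahead.
for frame in [RangeReading(30.0, 2.0), RangeReading(12.0, 9.0)]:
    print(frame, "-> BRAKE" if should_brake(frame) else "-> continue")
```

Even this toy version makes the ethics questions concrete: every number in it (the safety margin, the sensor accuracy behind each reading) embeds a trade-off between cost and risk to human beings.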



Ethical challenges

Data collection and analysis: Autonomous vehicles collect large amounts of data. The sensors capture images of human beings (e.g., a pedestrian in front of the car) without those people’s consent. There is no regulation on how much data can be collected, and once the data is collected, there are no regulations on who can access it or how it is distributed and stored. Moreover, a data breach would have many implications. The collected data can be used for other purposes, without the users’ consent, leading to unintended consequences. Variation in human body size and shape might also influence the autonomous software’s decisions.



Quality of vehicle sensors: Sensors are among the most expensive components in autonomous vehicles, and high-end sensors increase the cost drastically. If the vehicle purchase price rises beyond a specific limit in certain countries, vehicle owners will not receive incentives from the local government. To minimize cost, vehicle manufacturers might not use all the required sensors[3], at the expense of increased risk to human beings.



Jobs: While autonomous vehicles will create jobs in engineering and customer service [4,5], many driving jobs could be lost, as there won’t be any need for drivers. More than 3 million taxi, truck, and bus drivers may lose their livelihoods and professions in the U.S.[6] As accidents decrease due to autonomous vehicles (95% of recent accidents are due to human error[7]), the importance of vehicle insurance might decrease. Also, people working in collision repair centers and chiropractic care centers might lose jobs. People might opt for autonomous ride-shares over public transit services[8] because of cheaper prices, which will impact jobs in public transit. What happens to the people dependent on the construction and maintenance of the public transit system? Also, ample parking spaces might no longer be required, and people directly or indirectly dependent on them will lose their livelihoods. Even though there is a lot of time before autonomous vehicles take over, giving impacted people time to change careers, doing so is hard for some due to age, family circumstances, etc.

Regulations and Guidelines
Most current regulations[9] on the safety of motor vehicles are based on the assumption of humans driving the vehicles. New regulations [10,11] should be adopted in which ethics is given the utmost importance, from the vehicle’s design through its adoption in society. There should also be transparency about the algorithms being used and the data being collected by autonomous vehicles.

There should be a uniform policy on what data can be collected and how it can be used. The federal government should regulate data privacy [12]: a vehicle manufacturer can promise to de-identify personal information [13] (what time a user left home and where the user went), but because different manufacturers maintain different standards, there is a risk that some will allow re-identification. Since autonomous vehicles are in their early stages, there are many unanswered questions: What is the expected behavior if the sensors fail? When an accident occurs, who is at fault, the owner or the manufacturer of the autonomous vehicle? All of this needs to be considered when coming up with regulations and guidelines.
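A small, hypothetical example shows how fragile de-identification is: joining an anonymous trip’s quasi-identifiers (origin block, departure time) against public records recovers a name. Every record below is invented.

```python
# Toy re-identification by linkage; all data is fabricated for illustration.
deidentified_trips = [
    {"trip_id": 1, "origin_block": "Oak St 400", "depart": "07:42"},
    {"trip_id": 2, "origin_block": "Elm St 120", "depart": "09:15"},
]
# Auxiliary data an adversary might hold (property records, posts, etc.).
public_records = [
    {"name": "A. Jones", "home_block": "Oak St 400", "leaves_home": "07:40"},
    {"name": "B. Smith", "home_block": "Pine St 88", "leaves_home": "08:05"},
]

def minutes(hhmm: str) -> int:
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

for trip in deidentified_trips:
    for person in public_records:
        if (trip["origin_block"] == person["home_block"]
                and abs(minutes(trip["depart"]) - minutes(person["leaves_home"])) <= 5):
            print(f"trip {trip['trip_id']} likely belongs to {person['name']}")
```

No name or VIN ever appears in the trip log, yet the linkage succeeds, which is exactly why uniform federal standards matter more than per-manufacturer promises.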

Policymakers should act now to prepare for and minimize disruptions to the millions of jobs that may be affected by autonomous vehicles in the future. There should be a timeline for coming up with new regulations and guidelines protecting humans and their privacy.

References
[1] Ramsey, M. (2015, March 5). Self-Driving Cars Could Cut 90% of Accidents. WSJ; Wall Street Journal. https://www.wsj.com/articles/self-driving-cars-could-cut-down-on-accidents-study-says-1425567905
[2] Noonan, K. (2019, September 30). What Does the Future Hold for Self-Driving Cars? The Motley Fool; The Motley Fool. https://www.fool.com/investing/what-does-the-future-hold-for-self-driving-cars.aspx
[3] Insider Q&A: Velodyne advocates for safer self-driving cars. (2019, May 19). AP NEWS. https://apnews.com/article/714640aa989846c5bd32cfd12b0e3b9d
[4] Alison DeNisco Rayome. (2019, January 11). Self-driving cars will create 30,000 engineering jobs that the US can’t fill. TechRepublic; TechRepublic. https://www.techrepublic.com/article/self-driving-cars-will-create-30000-engineering-jobs-that-the-us-cant-fill/
[5] Gray, R. (n.d.). Driving your career towards a booming sector. Www.bbc.com. https://www.bbc.com/worklife/article/20181029-driving-your-career-towards-a-boom-sector
[6] Balakrishnan, A. (2017, May 22). Self-driving cars could cost America’s professional drivers up to 25,000 jobs a month, Goldman Sachs says; CNBC. https://www.cnbc.com/2017/05/22/goldman-sachs-analysis-of-autonomous-vehicle-job-loss.html
[7] Crash Stats: Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. (2015). https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115
[8] Will autonomous cars change the role and value of public transportation? (2015, June 23). The Transport Politic. https://www.thetransportpolitic.com/2015/06/23/will-autonomous-cars-change-the-role-and-value-of-public-transportation/
[9] Laws and Regulations- As a Federal agency, NHTSA regulates the safety of motor vehicles and related equipment. (2016, August 16). NHTSA. https://www.nhtsa.gov/laws-regulations
[10] Dot/NHTSA Policy Statement Concerning Automated Vehicles 2016 Update to ‘preliminary statement of policy concerning automated vehicles’.(2016). Nhtsa.gov. http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Autonomous-Vehicles-Policy-Update-2016.pdf
[11] NHTSA Federal Automated Vehicles Policy. (2016). https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf
[12] Office, U. S. G. A. (2014). In-Car Location-Based Services: Companies Are Taking Steps to Protect Privacy, but Some Risks May Not Be Clear to Consumers. Www.Gao.Gov, GAO-14-81. https://www.gao.gov/products/GAO-14-81
[13] Goodman, E. P. (2017, July 14). Self-driving cars: overlooking data privacy is a car crash waiting to happen. The Guardian; The Guardian. https://www.theguardian.com/technology/2016/jun/08/self-driving-car-legislation-drones-data-security

Never Let Them See You Sweat
Steve Dille | February 2, 2021

The global pandemic hasn’t been bad for one company. Peloton, the maker of internet- and social-media-connected exercise bikes, has seen an explosion of demand from exercising shut-ins. Peloton bikes let you stream live classes, communicate with other riders, and integrate with social media. President Biden rides a Peloton, which has raised some security eyebrows at the NSA. So, just how secure and private is your information on Peloton? Here are answers to some common questions.

How Visible am I?
The Peloton bike has a camera and microphone. But can Peloton instructors watch me work out and hear me? According to the Peloton Privacy Policy, the camera and microphone can only be activated by you, to accept a video chat from another user. The instructors cannot see you.

What Data does Peloton Collect?
When you set up your profile, Peloton asks you to provide information such as a username, email address, weight, height, age, location, birthday, phone number, and an image. Only the email address and username are required. Payment information is collected for the monthly subscription but is stored only with secure third-party processors.

Peloton also collects information about your exercise participation – date, class, time, total output, and heart rate monitor information. Peloton user profiles are set to public by default, allowing other registered Peloton users to view your fitness performance history, leaderboard name, location and age (if provided). Those users can also contact or follow you through the Peloton service. You have the option to set your profile to “Private,” so only members you approve as followers can see your profile and fitness history.

As you navigate the service, certain passive information is collected through cookies. Peloton uses personal information and other information about you to create anonymized, aggregated demographic, location and device information. This information is used to measure rider interest and usage of various features of the Peloton services.

Does Peloton Sell My Information to Advertisers?
Peloton’s privacy policy states, “We currently do not ‘sell’ your information as we understand this term.” However, Peloton does seem to “share” your information. The privacy policy contains a section on “Marketing – Interest-Based Advertising and Third-Party Marketing.” Peloton does make your data available for interest-based advertising and may use it to offer you services that would seem of interest. Peloton does enable you to minimize sharing of your information with third parties for marketing purposes via an opt-out form.

What About Peloton and Social Media?
This is an area where your privacy can be violated in ways that are hard to envision if you choose to participate. Peloton offers publicly accessible blogs, social media pages, private messages, video chat, community forums, and the ability to connect to Facebook and other fitness gadgets like Fitbit. When you disclose information about yourself in any of these areas, Peloton collects and stores the information. Further, if you choose to submit content to any public area of the Peloton Service, or to any other public sites, such content will be considered “public” and will not be subject to Peloton’s privacy protections. This can be problematic for riders posting their new personal record to an instructor’s Facebook page: whether or not they realize it, they have just made some previously private profile information public.

Once you start connecting your Peloton information to social networks, it becomes very possible for others to piece together information about you. For example, Amazon has a leaderboard group called “Pelozonians.” When you join that group, anyone on Peloton or the free app can now tell that you work at Amazon.

What Can I Do to Protect My Privacy?
Configuring your settings incorrectly can allow others to look into your personal information. Remember, your profile is public by default, so make sure you don’t include private information you don’t want shared, like your city or age. Better yet, set your profile to private. Make sure your username isn’t easily associated with you offline or on social media, so others can’t piece together information about you. Do you really need to post your rides on Facebook? That just opens another complex layer of connection between your personal life and your information on Peloton. Remember to use the forms from Peloton to opt out of interest-based advertising.

The Peloton is a wonderful bike requiring a “privacy” update to an old, humorous politeness adage. Today, when you meet someone new, it’s now impolite to ask their age, weight or Peloton leaderboard name.

Peloton Privacy Policy
https://www.onepeloton.com/privacy-policy

Peloton Terms of Service
https://www.onepeloton.com/terms-of-service

Section 230: Congress Seeks Testimony, Ignores It
By EJ Haselden, October 30, 2020

It’s a timeless trope from the era of afterschool specials: misbehaving children stand before Mom and Dad’s kitchen-table duumvirate to answer for their schoolyard shenanigans, but the pretense of discipline soon wears through and the scene devolves into a nasty argument between the grownups. The kids’ real punishment is that they are made pawns and captive audience to a painful display of parental dysfunction. So unfolded this week’s Senate hearing on social media regulation, rhetorically titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?”

Section 230 (47 U.S.C. § 230) is a part of the 1996 Communications Decency Act, and it is perhaps best known for shielding social media companies (among others) from liability for content that their users post:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The titular “bad behavior” and “sweeping immunity” that prompted this hearing, however, relate to another, lesser-known protection granted by Section 230, which shields platforms when they choose to filter, fact-check, or otherwise annotate content that they consider harmful and/or inaccurate:

“No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”

The nominal debate here surrounds the “otherwise objectionable” material in that description. Social media companies have chosen to interpret it as any content of questionable origin or veracity that could result in public harm (most recently regarding health advisories, voter suppression, and influence campaigns orchestrated by foreign intelligence services). Their caution stems from lessons learned in the rapid spread of disinformation leading up to the 2016 election, as well as a once-in-a-century pandemic that has seen deadly irresponsible claims espoused by supposed authority figures. Republican lawmakers claim that this content moderation has disproportionately muted conservative voices on social media. Democratic lawmakers, meanwhile, argue that these companies not only have the right, but the responsibility, to assess content based on its potential consequences and without regard for its ideological bent. It should be noted that multiple independent studies and a Facebook internal audit failed to find the alleged anti-conservative bias, but the fact that right-leaning engagement actually dwarfs that of center and left-leaning sources means that flagging only a small fraction of it still provides ample anecdotal evidence of prejudice (which is obviously enough to prompt Congressional hearings).

The administration has called for an outright repeal of Section 230, despite the fact that this would almost certainly lead to more content restrictions as companies adapt to the increased threat of liability. The consensus on Capitol Hill and in Silicon Valley therefore appears to be some amount of targeted Section 230 reform, while keeping the basic framework intact.

Which brings us back to this week’s hearing (or spectacle, or charade, or sham, depending on whom you ask). The Senate Committee on Commerce, Science, and Transportation subpoenaed the CEOs of Google, Twitter, and Facebook to testify on behalf of Social Media. Most commentators agree that the face time with Tech Actual was not spent productively. As with those quarreling parents, it was never really about the kids.

Republicans’ line of soi-disant questioning focused almost entirely on what they consider censorship of conservatives (69 of 81 questions, per the New York Times), as they demanded examples of the same (loosely defined) censorship directed at liberal outlets. Senator Ron Johnson asked the witnesses about the ideological makeup of their respective workforces—rhetorically, because it would be illegal for them to maintain that sort of record—in an effort to prove anti-conservative bias by virtue of microcultural majority (which almost sounded like an argument for some variant of affirmative action).

Democrats, for their part, focused most of their attention on the legitimacy and impact of the hearing itself, expressing concern that it could serve to intimidate social media companies into relaxing moderation policies at a time when the nation is perhaps most vulnerable to manipulative media. The bulk of their more on-topic questioning concerned dis- and misinformation and what actions the companies were taking to combat it ahead of the election. Still, not that much about Section 230 reform.

In keeping with the scripted, postured non-discussion, the most meaningful witness testimony came in the form of prepared opening statements. In those, Pichai reasserted Google’s anti-bias philosophy and cautioned against reactionary changes to Section 230, Dorsey promoted increased transparency and user inclusion in Twitter’s decision-making processes, and Zuckerberg praised Section 230 while inviting a stricter and more explicit rewrite of its provisions (for which Facebook would gladly provide input). Their full statements are available on the committee’s hearing website.

The timing and tenor of this eleventh-hour pre-election partisan screed exchange never inspired much hope for substantive debate, but even so, there was a jarring lack of effort to better understand the pressing and complex problems that Section 230 is still, at this moment, expected to resolve. The reason this matters, the reason it’s so alarming that neither side was terribly interested in the companies’ offers of greater transparency—something we’d consider a win for democracy in saner times—is that our government has abdicated its responsibility of oversight on this topic except in cases where the threat of enforcement can be used as a political weapon.

In the end, it’s probably fitting that Congress used a social media hearing as a platform to amplify and disseminate entrenched views that they had no intention of rethinking.

 


Can there truly be ethics in autonomous machine intelligence?
By Matt White, October 30, 2020

Some would say that we are in the infancy of the fourth industrial revolution, where artificial intelligence and the autonomy it is ushering in are positioned to become life-altering technologies. Most people understand the impacts of autonomous technologies as they relate to jobs; they are concerned that autonomous vehicles and robotic assembly lines will relegate them to the unemployment line. But very little thought, and consequently very little research, has gone into the ethical implications of the autonomous decision-making these systems are confronted with. Although there are far-reaching ethical implications of AI and automation, there are opposing views on who is truly responsible for the ethical decisions made by an autonomous system. Is it the designer? The programmer? The supplier of the training data? The operator? Or should the system itself be responsible for any moral or ethical dilemmas and their outcomes?

Take for instance the incident with Uber’s self-driving car a few years ago, in which one of its cars killed a pedestrian crossing the road in the middle of the night. The vehicle’s sensors collected data which revealed it was aware of a person crossing in front of its path, but the vehicle took no action and struck and killed the pedestrian. Who is ultimately responsible when an autonomous vehicle kills a person? In this case it was the supervising driver, but what happens when there is no driver in the driver’s seat? What if the vehicle had to make a choice, as in the trolley problem, between hitting a child and hitting a grown man? How would it make such a challenging moral decision?



Image Source: Singularity Hub

The Moral Machine, a project from MIT’s Media Lab, is tackling just this, developing a dataset on how people would react to particular moral and ethical decisions involving driverless cars. Should you run over 1 disabled person and 1 child or 3 obese people, or should you crash into a barrier and kill your 3 adult passengers to save two men and two women of a healthy weight pushing a baby? However, the thought that autonomous vehicles will base their moral decisions on crowd-sourced datasets of varying moral perspectives seems absurd. Only those who participate in the process will have their opinions included, anyone can go online and contribute to the dataset without any form of validation, and, notwithstanding all of the opinions that are not included, there are various moral philosophy theories that could be applied to autonomous ethical decision-making and would overrule rules derived from datasets. Does the system follow utilitarianism, Kantianism, virtue ethics, or so forth? Although the Moral Machine is considered a study in its current incarnation, it uses a very primitive set of parameters (number of people, binary gender, weight, age, visible disability) to allow users to indicate the value they place on human life. In real life, real people have more dimensions than these, such as race, socio-economic status, and non-binary gender. Could adding these real-life dimensions create a bias that would further de-value people who meet certain criteria and are in the way of an autonomous vehicle? Might the value placed on a homeless person be less than that placed on a Wall Street stockbroker?



Image Source: Moral Machine
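To see why this is so uncomfortable, consider what “deciding from crowd preferences” would literally mean in code. The weights below are invented, not Moral Machine outputs; the point is that any such weights, however sourced, reduce the vehicle’s “ethics” to a minimization over whose lives respondents valued less:

```python
# Hypothetical crowd-derived weights; NOT real Moral Machine data.
CROWD_WEIGHTS = {"child": 1.4, "adult": 1.0, "elderly": 0.9, "pet": 0.3}

def outcome_cost(casualties: list) -> float:
    """Total 'value lost' for one outcome under the crowd weights."""
    return sum(CROWD_WEIGHTS[c] for c in casualties)

def choose(outcomes: dict) -> str:
    # Pick the outcome with the lowest crowd-weighted cost: a purely
    # utilitarian rule. A Kantian or virtue-ethics rule would not even
    # accept the premise that lives can be summed this way.
    return min(outcomes, key=lambda name: outcome_cost(outcomes[name]))

scenario = {
    "swerve left": ["adult", "adult"],   # cost 2.0
    "swerve right": ["child"],           # cost 1.4
    "stay course": ["elderly", "pet"],   # cost 1.2
}
print(choose(scenario))  # 'stay course' under these invented weights
```

Add more dimensions (race, wealth, housing status) and the same one-line minimization quietly becomes a ranking of human worth, which is precisely the bias worry raised above.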

There is certainly a lot to unpack here, especially if we change contexts and look at armed unmanned autonomous vehicles (AUAVs), which are used in warfare to varying degrees. As we transition from remote pilots to fully autonomous war machines, who makes the decision whether to drop a bomb on a school containing 100 terrorists and 20 children? Does the operator absolve himself of any responsibility when the AUAV makes the decision to drop a bomb and kill innocent people? Does the programmer or the trainer of the system bear any responsibility?

As you can see, the idea of ethical decision-making by autonomous systems is highly problematic and presents some very serious challenges that require further research and exploration. Systems that are designed to have a moral compass will not be sufficient, as they will adopt the moral standpoint of their creators. Training data is likely to be short-sighted, shallow in dimensions, and biased by the ethical standpoints of its contributors. It is obvious that the issue of ethical decision-making in autonomous systems needs further discourse and research to ensure that the future systems we come to rely on can make ethical decisions in a manner that demonstrates no bias; or perhaps we may have to accept that autonomous machines will in fact not be able to make ethical decisions in an unbiased manner.

References: