Time to Embrace Home Automation or Worry About Privacy?
By Anonymous | September 19, 2021

“Good night, Alexa,” you say as you head to bed. “Good night,” Alexa whispers back. When users first discovered this whisper feature, many were both thrilled and unsettled: the home assistant seemed to understand and behave more like a human than expected. A growing number of smart home assistants have entered the market in the past few years, and their user bases have grown exponentially, fueling a prosperous home automation industry. As more and more people rely on the convenience that home assistants such as Amazon Echo and Google Home bring to daily life, more concerns are being raised about users’ personal data and privacy.

Do home assistants always listen to us?

When home automation first came into public view, only simple commands were expected and executed: answering questions by retrieving information online, turning lights on and off, setting alarms, playing a specific song or a genre based on a description, and so on. Now far more can be connected, including mobile devices, cameras, TVs, air conditioners, and electric cars, as long as they are manufactured to “work with” smart home assistants.

The fascinating way smart home automation works

After years of development, these home assistants, fueled by artificial intelligence, can now observe and collect data in real time for a specific user and make sense of a command in its given context. Some users report that the devices are not smart enough in certain scenarios, causing false alarms and silly responses; others are surprised by what they are capable of and worry about these smart home assistants taking control over their lives.

“They listen to our talking all the time,” people protested the year Alexa was introduced to the public. It is rational and reasonable to have such concerns, and understandable for the public to worry about the privacy and security of their personal information.

Smart home assistants respond to voice commands either by recording audio continuously or by waiting for a signature wake word before they activate. Either way, when using these devices and features, people expose some amount of personal data to the home assistants, and how much depends on the ethics of the companies behind them. Beyond linked accounts and credentials, consider voices themselves as personally identifiable information: the devices can recognize the host’s voice and decide whether or not to respond to or execute a request. It is already an open question whether anonymity still exists on the Internet.
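To make the wake-word model concrete, here is a minimal sketch in Python of how such gating might work on a device; the microphone stream and the detect_wake_word and stream_to_cloud functions are hypothetical placeholders, not any vendor’s actual API.

```python
import collections

BUFFER_CHUNKS = 20  # short rolling buffer kept only in device memory


def detect_wake_word(audio_chunk) -> bool:
    """Hypothetical on-device keyword spotter (placeholder)."""
    return False  # a real device would run a small local model here


def stream_to_cloud(audio_chunks) -> None:
    """Hypothetical upload of audio for full speech recognition (placeholder)."""
    pass


def listen(microphone):
    # Audio is captured continuously, but only a few seconds are kept in a
    # local rolling buffer; nothing leaves the device until the wake word
    # is detected on-device.
    buffer = collections.deque(maxlen=BUFFER_CHUNKS)
    for chunk in microphone:
        buffer.append(chunk)
        if detect_wake_word(chunk):
            stream_to_cloud(list(buffer))  # only post-activation audio is sent
```

The privacy distinction lies in that final check: in a continuously recording design, audio would be uploaded without waiting for local wake-word detection.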

Neither the companies nor their privacy policies and terms promise that the data collected is limited to authorized information or gathered only with consent. It is also vague how they may use this personal data: the consent form users sign says it will “help improve our performance and functionality,” without a word on what specifically will be done with the data collected.

Artificial intelligence brings undeniable convenience to human lives, and the public, as users, are crucial in providing the data fed to algorithms to drive improvement and progress. However, gaining solid support from the public is fundamental to making further progress with the smart home assistant devices on the market. To achieve the best outcomes for both users and companies, more effort should go into transparency about data collection and into addressing privacy concerns. And it is always your choice whether to embrace the home automation revolution or to say no.

References:
https://www.businessinsider.com/amazon-echo-alexa-whisper-mode-how-to-turn-on-2018-10
https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/
Amazon Privacy Notice: https://www.amazon.com/gp/help/customer/display.html?nodeId=468496&tag=thewire06-20&tag=thewire06-20
https://www.forbes.com/sites/enriquedans/2020/12/19/isnt-about-time-you-started-teaching-your-digital-home-assistant-some-newtricks/?sh=4ffb9517169c
https://www.techsafety.org/homeautomation

South Korea’s PIPC Fines Tech Giants
By Anonymous | September 19, 2021

It is a no-brainer that South Korea is one of the world’s most technologically advanced and digitally connected countries, with the highest average internet connection speed worldwide. With internet infrastructure treated as a high priority in numerous governmental regulations, there comes a need for strong data privacy, yet only recently has a central administrative agency been established to govern data-related policies. Under the newly amended Personal Information Protection Act (PIPA), the Personal Information Protection Commission (PIPC) has served as South Korea’s data protection authority since August 5, 2020. Since then, the PIPC has issued various fines and corrective actions against tech giants such as Facebook, Google, and Netflix following a major privacy audit conducted in 2020.

Of these, Facebook received the largest penalty, for privacy violations related to the collection of facial recognition data without users’ consent. Personal images and social security numbers of over 200,000 platform users were collected to build facial recognition templates, while Facebook failed to obtain consent, notify users, and submit required information when requested by the PIPC. In the context of Respect for Persons, a basic ethical principle outlined in The Belmont Report, Facebook appears to have neglected to give users an opportunity to opt out of the facial recognition data collection process through proper consent, and further failed to provide sufficient information to users when migrating personal data to third parties or overseas. Before personal data is collected or transferred, the consent process should satisfy the three domains the report describes: information, comprehension, and voluntariness. Facebook has since been ordered to destroy any personal data associated with its facial recognition initiatives.

As for Google and Netflix, smaller penalties were issued: Netflix failed to gain proper consent for collecting and transferring users’ personal data, while Google was found not to have violated the PIPA but still received legal recommendations to clarify and improve its personal data collection processes. Microsoft, meanwhile, was fined a comparatively small amount for privacy violations related to leaked e-mail addresses, 144 of which belonged to South Korean citizens; the company took 11 days to publish a notification in Korean, which under the PIPA should have been done within 24 hours. As in Facebook’s case, another Belmont Report principle, Justice, seems to have been violated, as the benefits and risks to these organizations’ users were not appropriately assessed. It is also worth mentioning that users who are already marginalized in their everyday lives outside of these platforms are further burdened by the unprecedented market concentration built on the personal data collected from them. Moreover, the obligations of Beneficence affect both an organization’s investigators and society at large, because they extend both to particular research projects and to the entire enterprise of research. When using personal user data, organizations must recognize the longer-term benefits and risks that may result from their data collection processes and results.

Whether tech giants can keep pace with the PIPA moving forward remains an open question, as the PIPC has stated that its investigations into privacy violations will continue. Even though certain companies only faced small fines, there is also the matter of credibility for companies whose millions to billions of users provide personal data on a continual basis. With stronger data regulations coming into play, I hope to see many organizations take a stronger stance in ensuring the autonomy, well-being, and equitable treatment of their platform users.

The Unequal American Dream: Hidden Bias in Mortgage Lending AI/ML Algorithms
By Autumn Rains | September 17, 2021

Owning a home in the United States is a cornerstone of the American Dream. Despite the economic downturn from the Covid-19 pandemic, the U.S. housing market saw double-digit growth rates in home pricing and equity appreciation in 2021. According to the Federal Housing Finance Agency, U.S. house prices grew 17.4 percent in the second quarter of 2021 versus 2020 and increased 4.9 percent from the first quarter of 2021 (U.S. House Price Index Report, 2021). Given these figures, obtaining a mortgage loan has further become vital to the home buying process for potential homeowners. With advancements in Machine Learning within financial markets, mortgage lenders have opted to introduce digital products to speed up the mortgage lending process and serve a broader, growing customer base.

Unfortunately, the ability to obtain a mortgage from lenders is not equal for all potential homeowners due to bias within the algorithms of these digital products. According to the Consumer Financial Protection Bureau (Mortgage refinance loans, 2021):

“Initial observations about the nation’s mortgage market in 2020 are welcome news, with improvements in the overall volume of home-purchase and refinance loans compared to 2019,” said CFPB Acting Director Dave Uejio. “Unfortunately, Black and Hispanic borrowers continued to have fewer loans, be more likely to be denied than non-Hispanic White and Asian borrowers, and pay higher median interest rates and total loan costs. It is clear from that data that our economic recovery from the COVID-19 pandemic won’t be robust if it remains uneven for mortgage borrowers of color.”

New Levels of Discrimination? Or Perpetuation of History?
Exploring the history of mortgage lending in the United States, discrimination based on race has been an undertone throughout. Housing programs under ‘The New Deal’ in 1933 were forms of segregation: people of color were not included in new suburban communities and instead placed into urban housing projects. The following year, the Federal Housing Administration (FHA) was established and created a policy known as ‘redlining.’ This policy furthered segregation for people of color by refusing to issue mortgages for properties in or near African-American neighborhoods. While this policy was in effect, the FHA also offered subsidies for builders who prioritized suburban development projects, requiring that none of these homes be sold to African-Americans (Gross, 2017).

Bias in the Algorithms
Researchers at UC Berkeley’s Haas School of Business found that Black and Latino borrowers were charged interest rates that were 7.9 basis points higher, both online and in person (Public Affairs & Affairs, 2018). Similarly, The Markup explored this bias in mortgage lending and found the following about national loan rates:

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. […] In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.
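To illustrate what “holding factors steady” means in practice, the sketch below fits a logistic regression of loan denial on race while controlling for underwriting variables. It is a simplified stand-in for The Markup’s methodology, and the file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical HMDA-style dataset; file and column names are illustrative only.
loans = pd.read_csv("mortgage_applications.csv")

# Model denial as a function of race/ethnicity while controlling for
# underwriting factors (income, loan size, loan-to-value, debt-to-income).
model = smf.logit(
    "denied ~ C(race, Treatment(reference='White'))"
    " + income + loan_amount + loan_to_value + debt_to_income",
    data=loans,
).fit()

# Exponentiated coefficients give odds ratios of denial for each group
# relative to otherwise-similar White applicants.
print(np.exp(model.params).round(2))
```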

Mortgage lenders approach the digital lending process much as traditional banks do with respect to risk evaluation criteria. These criteria include income, assets, credit score, current debt, and liabilities, among other factors in line with federal guidelines. The Consumer Financial Protection Bureau issued guidelines after the last recession to reduce the risk of predatory lending to consumers. (source) If a potential home buyer does not meet these criteria, they are classified as a risk. These criteria tend to put people of color at a disadvantage. For example, credit scores are typically calculated based on individual spending and payment habits. Rent is typically the most significant payment individuals make routinely, but landlords generally do not report these payments to credit bureaus. According to an article in the New York Times (Miller, 2020), more than half of Black Americans pay rent. Alanna McCargo, Vice President of housing finance policy at the Urban Institute, elaborates in the article:

“We know the wealth gap is incredibly large between white households and households of color,” said Alanna McCargo, the vice president of housing finance policy at the Urban Institute. “If you are looking at income, assets and credit — your three drivers — you are excluding millions of potential Black, Latino and, in some cases, Asian minorities and immigrants from getting access to credit through your system. You are perpetuating the wealth gap.” […] As of 2017, the median household income among Black Americans was just over $38,000, and only 20.6 percent of Black households had a credit score above 700.

Remedies for Bias
Potential solutions to reduce hidden bias in mortgage lending algorithms could include widening the data criteria used in risk evaluation decisions. However, some demographic factors about an individual cannot be considered under the law. The Fair Housing Act of 1968 states that in mortgage underwriting, lenders cannot consider sex, religion, race, or marital status as part of the evaluation. These attributes may still enter by proxy, though, through variables like the timeliness of bill payments, a part of the credit score evaluation discussed above. If Data Scientists have additional data points beyond the scope of the Consumer Financial Protection Bureau’s recommended guidelines, should these be considered? If so, do any of these extra data points encode bias directly or by proxy? These considerations pose quite a dilemma for Data Scientists, digital mortgage lenders, and companies involved in credit modeling.
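One way Data Scientists can probe for bias by proxy is to test how well each candidate feature predicts a protected attribute that the model is not allowed to use directly. The sketch below assumes a hypothetical applicant dataset with illustrative column names.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical applicant data; 'race' is the protected attribute that may not
# be used directly, and the other columns are candidate model features.
df = pd.read_csv("applicants.csv")
candidate_features = ["bill_payment_timeliness", "zip_code_income", "rent_to_income"]

# If a feature predicts the protected attribute far better than chance,
# it may be acting as a proxy for it.
for feature in candidate_features:
    X = df[[feature]].fillna(df[feature].median())
    y = (df["race"] == "Black").astype(int)
    auc = cross_val_score(LogisticRegression(), X, y, scoring="roc_auc", cv=5).mean()
    print(f"{feature}: AUC for predicting protected attribute = {auc:.2f}")
```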

Another potential solution in the digital mortgage lending process could be the inclusion of a diverse team of loan officers in the final step of the risk evaluation process. Until lenders can place higher confidence in the ability of AI/ML algorithms to reduce hidden bias, loan officers should be involved to ensure fair access for all consumers. Tangentially, alternative credit scoring models that include rental history payments should be considered by Data Scientists at mortgage lenders with digital offerings. By doing so, lenders can create a more holistic picture of potential homeowners’ total spending and payment history. This would allow all U.S. residents the equal opportunity to pursue the American dream of homeownership in a time when working from home is a new reality.

 

Works Cited

  • Gross, T. (2017, May 3). A ‘forgotten history’ of how the U.S. government segregated America. NPR. Retrieved September 17, 2021, from https://www.npr.org/2017/05/03/526655831/a-forgotten-history-of-how-the-u-s-government-segregated-america.
  • Miller, J. (2020, September 18). Is an algorithm less racist than a loan officer? The New York Times. Retrieved September 17, 2021, from https://www.nytimes.com/2020/09/18/business/digital-mortgages.html.
  • Mortgage refinance loans drove an increase in closed-end originations in 2020, new CFPB report finds. Consumer Financial Protection Bureau. (2021, August 19). Retrieved September 17, 2021, from https://www.consumerfinance.gov/about-us/newsroom/mortgage-refinance-loans-drove-an-increase-in-closed-end-originations-in-2020-new-cfpb-report-finds/.
  • Public Affairs, U. C. B. N. 13, & Affairs, P. (2018, November 13). Mortgage algorithms perpetuate racial bias in lending, study finds. Berkeley News. Retrieved September 17, 2021, from https://news.berkeley.edu/story_jump/mortgage-algorithms-perpetuate-racial-bias-in-lending-study-finds/.
  • U.S. House Price Index Report 2021 Q2. U.S. House Price Index Report 2021 Q2 | Federal Housing Finance Agency. (2021, August 31). Retrieved September 17, 2021, from https://www.fhfa.gov/AboutUs/Reports/Pages/US-House-Price-Index-Report-2021-Q2.aspx.

Image Sources

  • Picture 1: https://www.dcpolicycenter.org/wp-content/uploads/2018/10/Location_map_of_properties_and_projects-778×1024.jpg
  • Picture 2: https://static01.nyt.com/images/2019/12/08/business/06View-illo/06View-illo-superJumbo.jpg

The Battle Between Corporations and Data Privacy
By Anonymous | September 17, 2021

With each user’s growing digital footprint should come an increase in liability and responsibility for companies. Unfortunately, this isn’t always the case. It’s not surprising that data rights aren’t at the top of the to-do list, given that more data usually goes hand in hand with steeply increasing targeted ad revenue, conversion rates, and customer insights. Naturally, the question arises: where and how do we draw the line between justifiable company data usage and a data privacy breach?

Preliminary Legal Measures
Measures like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have set a great precedent for trying to answer this question systematically, but they’re far from widely adopted. The former was enacted in the EU and gives residents control over their data. Companies now have to disclose what information is collected and stored, and customers need to provide consent before data can be collected or used for marketing purposes. However, companies have found a loophole by simply creating two streams of data collection (one for the EU and one for countries outside of it) instead of changing data policies worldwide. The latter only applies to the state of California, and not many other states have followed suit. Seeing as state-by-state policies can greatly complicate compliance and data storage, companies have actually stepped in to influence and speed up these measures.

Big-Tech’s Influence on Legislation
Surprisingly, Big-Tech (Facebook, Amazon, Microsoft, Google, Apple, etc.) is actually on the front lines of pushing states to pass privacy laws, although it’s not so much a heroic act of benevolence as a crafty way to control the stringency of the privacy measures put into place. In fact, Virginia’s recently passed privacy law was reportedly co-authored by Amazon and Microsoft, and it’s now under consideration in 14 other states using the same or an even weaker legal framework. These bills are strongly backed by all of Big-Tech and are moving quickly through the process due to pressure from countless company lobbyists. The biggest impact of these bills is that consumers cannot sue companies for violations of the law. Another key point is that the default setting opts users into tracking unless they comb through the settings to opt out. The industry is counting on the idea that if the country is flooded with these weaker state laws, it will essentially be able to disregard harsher state laws like the CCPA. In the figure below, you can see just how much companies are spending on lobbying within one state:


Image: The amount of money spent on lobbying in Connecticut by Big-Tech

Good News on the Horizon
However, it’s important to note that this doesn’t mean that data privacy is a lost cause or that legislation is ineffective. Indeed, some corporations are taking privacy into their own hands and creating large-scale impact. The most sweeping example is Apple, which released a new data privacy measure that now requires every user to knowingly opt in or out of data tracking for every single app they use. While this was met with heavy backlash from ad-revenue and user-data dependent companies such as Facebook, Apple has remained firm in its decision to mandate user opt-in permission for data tracking. The decision has resulted in less than 33% of iOS users opting in to tracking, a massive hit to the ad-tech industry.

Furthermore, as iOS users have been opting out of tracking, advertisers can’t bid on them, and the shrinking pool of targetable iOS users has driven up advertising demand for Android users. As a result, Android ad prices are now about 30% higher than ad prices for iOS users, and companies are choosing to move their ads to Android-powered devices. For some context, digital-ad agency Tinuiti’s Facebook clients saw year-over-year spending growth for Android users jump from 46% in May to 64% in June, while growth in iOS spending slowed correspondingly, from 42% in May to 25% in June. Despite these drawbacks, this move alone is forcing companies everywhere to change their data tracking policies, because while they may be escaping state and federal privacy measures, they’re getting blocked by wide-reaching, software platform-based privacy policies.

References

  • https://themarkup.org/privacy/2021/04/15/big-tech-is-pushing-states-to-pass-privacy-laws-and-yes-you-should-be-suspicious
  • https://www.coreview.com/blog/alpin-gdpr-fines-list/
  • https://www.bloomberg.com/news/articles/2021-07-14/facebook-fb-advertisers-impacted-by-apple-aapl-privacy-ios-14-changes
  • https://www.facebook.com/business/help/331612538028890?id=428636648170202
  • https://www.theverge.com/2020/3/3/21153117/congress-tech-regulation-privacy-bill-coppa-ads-laws-legislators
  • https://www.macworld.com/article/344420/app-tracking-transparency-privacy-ad-tracking-iphone-ipad-how-to-change-settings.html

The intersection of public health and data privacy with vaccine passports
By Anonymous | September 17, 2021

Countries, states, and cities are implementing and enforcing vaccine passports. Vaccine passports are meant to provide individuals greater protection against the spread of COVID-19; however, that safety comes with concerns over data privacy and how the data itself will be protected. On one hand, vaccine passports supply a universal and standardized way to ensure individuals are vaccinated when entering high-exposure settings, such as travel and large indoor gatherings. On the other hand, with that standardization come data privacy risks and concerns with respect to the Fair Information Practice Principles.

Return to Normalcy
Since the beginning of the pandemic, travel and tourism have declined due to legal restrictions coupled with people’s fear of contracting the virus while traveling. Vaccine passports give individuals the relief of knowing that others around them are vaccinated, too, while businesses get an opportunity to attract more customers. The chart on the left illustrates the dip in tourism and flight travel during the pandemic, while the chart on the right shows the global recognition of multiple vaccines. Together they indicate that several vaccines are recognized around the world for potential vaccine passports and that the travel and tourism industries would benefit from such programs.

Businesses are not the only ones who would benefit from vaccine passports attracting customers back; unemployed workers would benefit as well. An increase in customer activity would, in turn, increase businesses’ need to hire. The image below visualizes the hardest-hit sectors by change in employment. The largest negative changes were in industries that rely on large gatherings and crowds. Thus, if vaccine passports can bring us back to normalcy faster, then businesses can recover faster and more people can be re-hired.

Transparency
The European Union is rolling out a vaccine passport with its data privacy grounded in the GDPR framework, which addresses transparency through its transparency, purpose limitation, and security principles: individuals should be given detailed information on how their data will be collected, used, and maintained. Under the GDPR, individuals have a transparent understanding of the purpose for which the data will be used, assurance that it will be used only for that purpose, and assurance that the data will be “processed in a manner that ensures appropriate security of the personal data” (ICO 2018). However, this only applies to individuals with the EU vaccine passport. Other countries, Malaysia and China for example, do not use the GDPR as the basis of data transparency for their vaccine passports, which raises concerns about how the data could be used for other purposes post-pandemic. A vaccine passport gives businesses and governments transparency into vaccination status; the individuals participating should receive the same level of transparency into how their data will be stored and for what direct purposes.

Individual Participation & Purpose Specifications
Individual participation comes into question when countries and governments require vaccine passports to attend indoor dining, large indoor events, and similar activities, effectively forcing participation. If individuals want to take part in such events, they must enroll in a vaccine passport system. This forced consent to provide data in order to enjoy these activities raises an ethical dilemma: what other activities could soon require individuals to use a vaccine passport and put their personal data privacy at risk? In addition, the term length of vaccine passports is unknown as the pandemic continues to fluctuate, which causes issues with the purpose specification principle – clearly stated uses of the collected data. Individuals who provide their personal information for entry into a vaccination program may not know how long their data will be kept, as the use case could continue to be extended if it is never retired.

Accountability and Auditing
With the United States rejecting a federal vaccine passport, states, cities, and private entities have developed and instituted their own vaccine approval programs. The lack of a single, standardized program within the U.S. draws attention to accountability and auditing problems in ensuring proper training for all the people involved in data collection, processing, and storage. States and cities may have training programs for data collection, but private entities looking to rebound from a tough 2020 economic dip may not have the resources or time to train their employees and contractors in proper data privacy practices. Therefore, with no nationwide program from the federal government, individuals who consent to provide their data for vaccine-proof certification risk having it handled by people with little training in collecting and using such data.

Summary
Vaccine passports have great potential to limit the spread of the virus by giving individuals and organizations visibility into, and assurance of, vaccination status for large groups. However, vaccine certification programs need to give individuals transparency into the clear and specific uses of their information, provide term limits for purpose specifications, and ensure that the people who will be collecting, using, and storing the data are properly trained in data privacy practices. If these concerns are addressed, then we could see more adoption of vaccine passports to combat the spread of the virus. If not, individuals’ mistrust around data privacy will persist and returning to normalcy may take longer than hoped.


References

  • Baquet, D. (2021, April 22). The controversy over vaccination passports. The New York Times. Retrieved September 13, 2021, from https://www.nytimes.com/2021/04/22/opinion/letters/covid-vaccination-passports.html.
  • BBC. (2021, July 26). Covid passports: How do they work around the world? BBC News. Retrieved September 14, 2021, from https://www.bbc.com/news/world-europe-56522408.
  • Martin, G. (2021, April 28). Vaccine passports: Are they legal-or even a good idea? UC Berkeley Public Health. Retrieved September 14, 2021, from https://publichealth.berkeley.edu/covid-19/vaccine-passports-are-they-legal-or-even-a-good-idea/.
  • Schumaker, E. (2021, April 10). What to know about COVID-19 vaccine ‘passports’ and why they’re controversial. ABC News. Retrieved September 14, 2021, from https://abcnews.go.com/Health/covid-19-vaccine-passports-controversial/story?id=76928275.
  • The principles. ICO. (2018, May 25). Retrieved September 14, 2021, from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/.
  • Turner-Lee, N., Lai, S., & Skahill, E. (2021, July 28). Vaccine passports underscore the necessity of U.S. privacy legislation. Brookings. Retrieved September 14, 2021, from https://www.brookings.edu/blog/techtank/2021/06/28/vaccine-passports-underscore-the-necessity-of-u-s-privacy-legislation/.

The battle between COVID contact tracing and privacy
By Anonymous | July 9, 2021

In an effort to curb COVID-19 case counts, many countries have been employing contact tracing apps as a way of tracking infections. Although implementations can differ, the main idea is that users would download an app onto their phone, which would notify them if they have possibly been exposed to COVID-19 from being in close proximity to someone who has tested positive. This sounds good in theory, until you realize the privacy implications – the developer of the app would have unhindered access to all of your movements, including where you go, who you meet, who you live with, where they go, who they meet, and so on. Classifying countries into three groups – authoritarian countries using authoritarian measures, free countries using authoritarian measures under emergency powers, and free countries using standard measures – we see that a perfect balance between contact tracing and privacy has been difficult to achieve. Let’s take a look at a few examples of each group.

Authoritarian and authoritarian
China has taken strong measures to contain the virus, including an app that labels each individual green (safe), yellow (potentially exposed), or red (high risk). The privacy issues are clear here: the invasive app tracks all movements, and the algorithm for coloring individuals is a black box without any transparency. Still, Chinese contact tracing has been successful. The city of Shenzhen managed to reduce the average time to identify and isolate potential patients from 4.6 days to 2.7 days, leading to a reproduction number of 0.4 (anything below 1 indicates that the outbreak will die out).
Russia (Moscow in particular) has also taken a strong approach, forcing residents to download a QR-code based system and monitoring citizens’ movements within the city. Even with this invasive approach, Moscow has seen mass hospitalizations, second and third waves, and successive record-shattering daily death counts.

Free but temporarily authoritarian
Israel implemented a system in which the Shin Bet (domestic security service) receives PII (personally identifiable information) of COVID-19-positive patients from the Health Ministry, which is then cross-referenced with a database that can identify people who came in close contact with the patient in the previous two weeks. During the scheme’s first rollout, cases successfully decreased to single digits, until the whole operation was shut down by a supreme court ban. By the time the system returned under a new law three months later, Israel was well into its second wave, and case counts doubled three times before starting to decrease again.
France initially tried to amend its emergency law to allow collection of health and location data using “any measure”. This was ultimately rejected as being too invasive, even under emergency powers. The French contact tracing app also faced major issues, sending only 14 notifications after 2 million downloads, ultimately leading a quarter of its users to uninstall the app.

Free and free
Taiwan has managed to implement a contact tracing system relying entirely on civil input and open-source software, enhancing privacy by decentralizing the data, not requiring registration, and using time-insensitive Bluetooth data. Radically different from other countries’ systems in its heavy emphasis on everything being open-source, the Taiwanese method has allowed efficient and effective contact tracing while minimizing privacy infringements.
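The privacy advantage of a decentralized, Bluetooth-based design can be sketched in a few lines of Python: phones exchange short-lived random tokens, and exposure matching happens entirely on the device, so no central server ever receives a movement history. This is a simplified illustration of the general approach, not Taiwan’s actual implementation.

```python
import secrets
from datetime import datetime, timedelta

TOKEN_LIFETIME = timedelta(minutes=15)


class Device:
    """Simplified decentralized exposure-notification client."""

    def __init__(self):
        self.my_tokens = []        # (token, created_at) pairs this device broadcast
        self.heard_tokens = set()  # tokens heard from nearby devices, stored locally

    def current_token(self):
        # Rotate a random, unlinkable token every TOKEN_LIFETIME.
        now = datetime.utcnow()
        if not self.my_tokens or now - self.my_tokens[-1][1] > TOKEN_LIFETIME:
            self.my_tokens.append((secrets.token_hex(16), now))
        return self.my_tokens[-1][0]

    def hear(self, token):
        # Tokens from nearby phones stay on the device; nothing is uploaded.
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        # A health authority publishes the tokens of consenting positive cases;
        # matching happens entirely on the device.
        return bool(self.heard_tokens & set(published_positive_tokens))


# Usage: two devices near each other
a, b = Device(), Device()
b.hear(a.current_token())
print(b.check_exposure([t for t, _ in a.my_tokens]))  # True if A reports positive
```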
Japan had originally used a manual form of contact-tracing, relying mainly on individually calling citizens. Once this became infeasible with large case counts and an unwillingness from respondents to fully disclose information, the government developed an app (COCOA) designed to notify users of potential exposures using Bluetooth technology, only to find out 4 months later that a bug had caused the app to fail to send notifications, drawing widespread condemnation.

Relations with privacy laws
It is important that contact tracing measures are compatible with relevant privacy laws, and that curtailments to civil liberties are kept to only what is necessary. Countries have been grappling with this issue ever since COVID-19 specific tracing apps have been available. One of the first countries to roll out a tracing app, Norway, ended up having its Data Protection Authority order the Norwegian Institute of Public Health to suspend the tracing app’s usage, as well as delete all data that was collected by it just two months after it first became available. Lithuania similarly suspended the usage of a tracing app after fears of violating EU privacy laws. Germany proposed amendments to laws that would allow broad collection of contact details and location data to fight the pandemic; both were rejected as being too invasive. Although the European General Data Protection Regulation (GDPR) creates strict limits for the collection and processing of data, it allows for some exceptions during public health emergencies, provided that the data is only used for its stated purpose – which brings us into the final important section.

Only using data for health purposes
Two principles offered by the Fair Information Practice Principles (FIPPs) provide a checklist to make sure that data collected through these systems is used appropriately – the principles of purpose specification (be transparent about how data is used) and minimization (only collect what is necessary). A privacy policy should be made public that clearly says what the tracing system can and cannot do. A lack of clear boundaries can quickly become a slippery slope of misuse, corruption, and distrust toward the government. Singapore’s contact tracing policy, for example, originally stated that data would “only be used solely for the purpose of contact tracing of persons possibly exposed to covid-19.” Months later, the government admitted that the data had been used for criminal investigations, forcing both the privacy policy and the relevant legislation to be amended.

Putting everything in context
It is important to remember that contact tracing apps are simply just one part of the equation. For both countries with success and countries without success using these apps, correlation does not mean causation. We need to evaluate these systems in the greater context of the whole pandemic – although it is understandable for countries to temporarily grant emergency powers and curtail some civil liberties, we need to holistically evaluate whether the benefits of such systems outweigh the potential risks or information that we give up, and that appropriate measures are put in place to minimize potential misuses and abuses of data.

References:
https://futurism.com/contact-tracing-apps-china-coronavirus
https://www.cidrap.umn.edu/news-perspective/2020/04/study-contact-tracing-slowed-covid-19-spread-china
https://www.dailymail.co.uk/news/article-9730113/Moscow-gripped-growing-Covid-catastrophe-Russian-capital-records-144-deaths-24-hours.html
https://www.cbsnews.com/news/coronavirus-pandemic-russia-digital-tracking-system-moscow/
How Israel’s COVID-19 mass surveillance operation works
https://www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads/
https://www.oecd.org/coronavirus/policy-responses/ensuring-data-privacy-as-we-battle-covid-19-36c2f31e/
https://covirus.cc/social-distancing-app-intro.html
https://asia.nikkei.com/Spotlight/Comment/Japan-s-flawed-COVID-19-tracing-app-is-digital-black-eye-for-Tokyo
https://www.bloomberg.com/news/videos/2020-07-22/contact-tracing-effective-without-invading-privacy-taiwan-digital-minister-explains-video
Contact Tracing COVID-19 Throws a Curveball to GDPR, Data Rights
https://www.technologyreview.com/2021/01/05/1015734/singapore-contact-tracing-police-data-covid/
https://www.csoonline.com/article/3606437/data-privacy-uproar-in-singapore-leads-to-limits-on-contact-tracing-usage.html

Photos:
How Israel’s COVID-19 mass surveillance operation works
https://buzzorange.com/techorange/en/2021/05/18/gdpr-compliant-app-fights-covid-19-with-privacy-in-mind/

Cross-Border Transfer of Data – Case Study of Didi
By Elizabeth Zhou | July 9, 2021

Didi, the Uber of China, filed its prospectus in the United States on June 10 and officially went public on June 30. July 1 was the Chinese Communist Party’s 100th anniversary celebration. On July 2, the Cyberspace Administration of China announced a cybersecurity review of Didi, and on July 4, Didi was removed from Chinese app stores. Because of this regulatory action, Didi lost roughly US$15 billion in market value on the US stock market, and it is being sued by American shareholders over the stock plunge caused by the regulatory changes. Didi’s failure was caused not only by China’s particular political environment but also by Didi’s negligence around the cross-border transfer of data.

What is cross-border transfer of data? “What data can be transferred out?” and “What data must be stored inside the country?” are the two major questions around this topic. Different countries have different policies. In Europe, for example, the GDPR stipulates that personal data can flow freely within the European Union or the European Economic Area, while transfers out of the European Economic Area to a third country must be based on an adequacy decision or another valid data transfer mechanism, such as Binding Corporate Rules, standard contractual clauses, or the EU-US Privacy Shield. The CCPA, by contrast, places no such restrictions on cross-border transfer of data. China has the strictest management of cross-border data: the Chinese Cyber Security Law (CCSL) stipulates that important data should be stored within the territory, and if a transfer overseas is truly necessary for business needs, a security assessment must be carried out in accordance with measures formulated by the relevant departments of the State Council. Compared with Europe and the United States, China subjects cross-border data to strict scrutiny to protect personal privacy and national security.

Why Did Didi Fail?

The United States introduced the Holding Foreign Companies Accountable Act (HFCAA) last year, which specifically requires that any company going public in the United States accept review by the US Public Company Accounting Oversight Board. This requirement is indeed strict: a Chinese company must either submit its entire set of audit working papers for review or be forced to delist. But such a review actually violates Chinese securities law. As mentioned above, the CCSL requires that no unit or individual provide relevant information and data overseas without authorization from the State Council. Facing this dilemma, Didi chose to bypass the domestic process and submit its data to the US directly. Because of this negligence toward the CCSL, the Chinese government imposed strict regulatory actions on Didi, leading to the punishments described above.

What can we learn from Didi?

Cross-border transfer of data involves two countries’ policies, which raises the barrier for companies that want to expand overseas. Especially when the countries involved are in a delicate political climate, companies should be more cautious and patient.

References

https://www.scmp.com/business/banking-finance/article/3140272/didi-chuxing-sued-american-shareholders-over-stock-plunge

https://www.futurelearn.com/info/courses/general-data-protection-regulation/0/steps/32449

https://www.clearygottlieb.com/-/media/files/alert-memos-2018/2018_07_13-californias-groundbreaking-privacy-law-pdf.pdf

http://www.casted.org.cn/channel/newsinfo/8127

https://www.google.com/url?q=https://www.sec.gov/news/press-release/2021-53&sa=D&source=editors&ust=1625904213435000&usg=AOvVaw03YVDnVY1fJPWB8oIwWwuk

Censorship on Instagram and the Ethics of Instagram’s Policy and Algorithm
By Anonymous | July 9, 2021

Facebook-owned Instagram censors some folks disproportionately more than others. For example, folks who are hairy, fat, dark skinned, disabled, or not straight-passing are more likely to be censored. Additionally, folks who express dissent against an institution are often suppressed.

Illustration by Gretchen Faust of an image posted by Petra Collins, which was initially removed by Instagram.

In this article, I want to address the ethics of Instagram’s policies around who is and who is not given space to be themselves on this platform.

Instagram’s community guidelines state that most forms of nudity are not allowed to be posted, except for “photos in the context of breastfeeding, birth giving and after-birth moments, health-related situations (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.” This raises the question of what counts as “an act of protest”? And who is permitted to protest on this platform?

Sheerah Ravindren, whose pronouns are they/she, is a creative assistant, activist, and model. You may have seen them in Beyonce’s “Brown Skin Girl” music video. In their bio, they write that they are a hairy, dark-skinned, Tamil, nonbinary, immigrant femme. Sheerah uses their platform to advocate for marginalized folks and raise awareness around issues that affect them and their communities. She speaks out about the genocide against Tamil people, aims to normalize melanin, body hair, marks, and rolls, and adds a dimension of digital representation for nonbinary folks of the diaspora.

Among the various types of content Sheerah posts, they have posted images of themselves with captions that convey their intent to protest eurocentric beauty standards and societal norms of femininity. Before posting images where they are not wearing a top, they edit the images to fully cover nipples, in order to meet Instagram’s community guidelines. However, when posting such content, Sheerah has been censored by Instagram — their posts have been taken down, and they have received the following message: “We removed your post because it goes against our Community Guidelines on nudity or sexual activity.” Sheerah’s post was not considered an act of protest from Instagram’s perspective and instead was sexualized unnecessarily. There are numerous other Instagram posts, depicting people who are lighter skinned, less hairy, and skinnier, wearing similar outfits, that did not get removed. For instance, there are many photography accounts that feature skinny hairless white women who are semi-clothed/semi-nude as well. What made Sheerah’s post less appropriate?

Instagram notification indicating that the user’s post has been taken down because it does not follow the community guidelines on nudity or sexual activity.

The policies around what is considered appropriate to post on Instagram seem to be inconsistently enforced, which could be due to algorithmic bias. The algorithm that determines whether a post complies with guidelines may perform better (with higher accuracy) on posts that depict lighter skinned, less hairy, and skinnier folks. This could be due to the model being trained on data that is not fully representative of the population (the training data may lack intersectional representation), among other potential factors. Moreover, the caption that accompanies an image may not be taken into account by the algorithm; but captions could be critical to contextualizing images and recognizing posts that are forms of protest.
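One concrete way to surface this kind of inconsistency is to disaggregate the moderation model’s error rates by attributes such as the skin tone of the people depicted. The sketch below assumes a small, human-labeled audit set with hypothetical file and column names.

```python
import pandas as pd

# Hypothetical audit set: each row is a post with a human "ground truth"
# label and the moderation model's decision.
audit = pd.read_csv("moderation_audit.csv")  # columns: skin_tone, truly_violating, removed
audit["truly_violating"] = audit["truly_violating"].astype(bool)
audit["removed"] = audit["removed"].astype(bool)

# False-removal rate per group: how often guideline-compliant posts were taken down.
for tone, group in audit.groupby("skin_tone"):
    compliant = group[~group["truly_violating"]]
    false_removal_rate = compliant["removed"].mean()
    print(f"{tone}: {false_removal_rate:.1%} of compliant posts wrongly removed")
```

Large gaps between groups would be evidence that the guidelines are being enforced inconsistently, whatever the overall accuracy looks like.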

In the context of justice, a basic ethical principle outlined in The Belmont Report, it seems that the benefits and risks of Instagram’s algorithm are not evenly distributed across users. Folks who are already marginalized in their everyday lives outside of Instagram are further burdened by the sociotechnical harm they experience on Instagram when their posts are taken down. The erasure of marginalized folks on this platform upholds existing systems of oppression that shame, silence, and devalue people who are dark skinned, fat, hairy, disabled, or trans, and those who do not conform to heternormative ideals.

While Instagram’s help center contains documentation on how a user can report content that they think violates the community guidelines, there is no documentation accessible to the user on how to submit an appeal. If a user posts something that follows the community guidelines but is misclassified by the algorithm or misreported by another user and thereby deemed inappropriate, does the user have a real opportunity to advocate for themselves? Is the evaluation process of users’ appeals fair and consistent?

When Sheerah’s post was taken down, they submitted an appeal, and their post was later put back up. But shortly afterwards, their post was taken down again, and they received the same message as before. This back and forth reveals that Instagram may not have updated their algorithm after reviewing the appeal. By not making that update, Instagram missed a crucial step towards taking accountability, serving the user, and preparing their service to not make the same mistakes when other users post similar content down the line. Presenting the option to appeal but not responding to the appeal in a serious manner is disrespectful to the user’s time.

Currently, Instagram’s community guidelines and the algorithm that enforces it do not protect all users equally, and the appeal process seems performative and ineffective in some situations. The algorithm behind Instagram’s censorship needs transparency, and so does the policy for how Instagram handles appeals. Moreover, the guidelines need to be interpreted more comprehensively regarding what is considered an act of protest. Instagram developers and policymakers must take action to improve the experience of users who bear the most consequences at this time. In the future, I hope to see dark skinned, hairy, queer women of color, like myself, take space on digital platforms without being censored.

References

What is fair machine learning? Depends on your definition of fair.
By Anonymous | July 9, 2021

Machine learning models are being used to make increasingly complex and impactful decisions about people’s lives, which means that the mistakes they make can be equally as complex and impactful. Even the best models will fail from time to time — after all, all models are wrong, but some are useful — but how and for whom they tend to fail is a topic that is gaining more attention.

One of the most popular and widely used metrics for evaluating model performance is accuracy. Optimizing for accuracy teaches machines to make as few errors as possible given the data they have access to and other constraints; however, chasing accuracy alone often fails to consider the context behind the errors. Existing social inequities are encoded in the data that we collect about the world, and when that data is fed to a model, it can learn to “accurately” perpetuate systems of discrimination that lead to unfair outcomes for certain groups of people. This is part of the reason behind a growing push for data scientists and machine learning practitioners to make sure that they include fairness alongside accuracy as part of their model evaluation toolkit.

Accuracy doesn’t guarantee fairness.

In 2018, Joy Buolamwini and Timnit Gebru published Gender Shades, which demonstrated how overall accuracy can paint a misleading picture of a model’s effectiveness across different demographics. In their analysis of three commercial gender classification systems, they found that all three models performed better on male faces than female faces and lighter faces than darker faces. Importantly, they noted that evaluating accuracy with intersectionality in mind revealed that even for the best classifier, “darker females were 32 times more likely to be misclassified than lighter males.” This discrepancy was the result of a lack of phenotypically diverse datasets as well as insufficient attention paid to creating facial analysis benchmarks that account for fairness.

Buolamwini and Gebru’s findings highlighted the importance of disaggregating model performance evaluations to examine accuracy not only within sensitive categories, such as race and gender, but also across their intersections. Without this kind of intentional analysis, we may continue to produce and deploy highly accurate models that nonetheless distribute this accuracy unfairly across different populations.
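A minimal sketch of such a disaggregated evaluation might look like the following, where the evaluation file and column names are hypothetical: rather than reporting one overall accuracy number, we compute accuracy within every intersection of the sensitive attributes.

```python
import pandas as pd

# Hypothetical evaluation results: one row per example with the model's
# prediction, the true label, and the sensitive attributes.
results = pd.read_csv("eval_results.csv")  # columns: y_true, y_pred, gender, skin_type

overall = (results["y_true"] == results["y_pred"]).mean()
print(f"Overall accuracy: {overall:.1%}")

# Accuracy within each intersection of gender and skin type.
by_group = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby(["gender", "skin_type"])["correct"]
           .agg(["mean", "count"])
)
print(by_group)  # large gaps between rows signal unfairly distributed accuracy
```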

What does fairness mean?

Recognizing the importance of evaluating fairness across various sensitive groups is the first step, but how do we measure fairness in order to optimize for it in our models?

Researchers have found several different definitions. One common measure is statistical or demographic parity. Suppose we had an algorithm that screened job applicants based on their resumes — we would achieve statistical parity across gender if the fraction of acceptances from each gender category was the same. In other words, if the model accepted 40% of the female applicants, it should accept roughly 40% of the applicants from each of the other gender categories as well.

Another definition known as predictive parity would ensure similar fractions of correct acceptances from each gender category (i.e. if 40% of the accepted female applicants were true positives, a similar percentage of true positives should be observed among accepted applicants in each gender category).

A third notion of fairness is error rate balance, which we would achieve in our scenario if the false positive and false negative rates were roughly the same across gender categories.
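To make these three definitions concrete, here is a small sketch that computes them for the hypothetical resume-screening example; the data file and column names are illustrative.

```python
import pandas as pd

# Hypothetical screening results: model decision, true qualification, gender.
df = pd.read_csv("screening_results.csv")  # columns: accepted, qualified, gender

for gender, g in df.groupby("gender"):
    accepted = g[g["accepted"] == 1]
    rejected = g[g["accepted"] == 0]

    # Statistical / demographic parity: acceptance rate per group.
    acceptance_rate = g["accepted"].mean()

    # Predictive parity: share of accepted applicants who are truly qualified.
    precision = accepted["qualified"].mean() if len(accepted) else float("nan")

    # Error rate balance: false positive and false negative rates per group.
    fpr = (accepted["qualified"] == 0).sum() / max((g["qualified"] == 0).sum(), 1)
    fnr = (rejected["qualified"] == 1).sum() / max((g["qualified"] == 1).sum(), 1)

    print(f"{gender}: accept={acceptance_rate:.0%}, precision={precision:.0%}, "
          f"FPR={fpr:.0%}, FNR={fnr:.0%}")
```

Comparing each row of this output against the others shows which, if any, of the three fairness criteria a model satisfies; as the COMPAS debate showed, it may be mathematically impossible to satisfy all of them at once.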

These are a few of many proposed mathematical definitions of fairness, each of which has its own advantages and drawbacks. Some definitions can even be contradictory, adding to the difficulty of evaluating fairness in real-world algorithms. A popular example of this was the debate surrounding COMPAS, a recidivism prediction tool that did not achieve error rate balance across Black and White defendants but did satisfy the requirements for predictive parity. In fact, because the base recidivism rate for both groups was not the same, researchers proved that it wasn’t possible for the tool to satisfy both definitions at once. This led to disagreement over whether or not the algorithm could be considered fair.

Fairness can depend on the context.

With multiple (and sometimes mutually exclusive) ways to measure fairness, choosing which one to apply requires consideration of the context and tradeoffs. Optimizing for fairness often comes at some cost to overall accuracy, which means that model developers might consider setting thresholds that balance the two.

In certain contexts, these thresholds are encoded in legal rules or norms. For example, the Equal Employment Opportunity Commission uses the four-fifths rule, which enforces statistical parity in employment decisions by setting 80% as the minimum ratio for the selection rate between groups based on race, sex, or ethnicity.
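As a worked example of the four-fifths rule under assumed numbers: if the highest selection rate among groups is 40%, every other group must be selected at a rate of at least 0.8 × 40% = 32%. The snippet below checks that ratio for hypothetical rates.

```python
# Hypothetical selection rates by group.
selection_rates = {"group_a": 0.40, "group_b": 0.30, "group_c": 0.35}

highest = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```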

In other contexts, the balance between fairness and accuracy is left to the discretion of the model makers. Tools such as Google’s What-If Tool, AI Fairness 360, and other automated bias detection platforms can aid in visualizing and understanding that balance, but it is ultimately up to model builders to evaluate their systems based on context-appropriate definitions of fairness in order to help mitigate the harms of unintentional bias.

Apple Takes a Stand For Privacy, Revolutionary? Or is there an underlying motive?
By Anonymous | July 9, 2021

On April 26th, 2021, Apple released its new software update, iOS 14.5, with a slew of features, including its highly discussed privacy feature. Tim Cook spoke at a virtual International Privacy Day panel, saying, “If a business is built on misleading users, on data exploitation, on choices that are no choices at all, then it does not deserve our praise. It deserves reform.” His speech takes a jab at Facebook’s stance on privacy, but it can also be read on a larger level as Cook placing Apple at the forefront of privacy advocacy. However, is Apple really making such a revolutionary change?

Before we can answer that question, what does Apple’s privacy feature actually do? Apple’s website highlights App Tracking Transparency, which requires apps to obtain user consent before tracking data for third-party companies. It still allows the original company to track user data and allows parent companies to track user data from their subsidiaries; for example, Facebook can utilize data it gathers from Instagram. However, it does not allow data to be shared with a data broker or other third party if the user does not explicitly consent to third-party tracking.

Apple’s App Tracking Transparency feature asks user consent for third-party app data tracking

So what does this mean for the ordinary user? Actually, it means a lot. People, for the most part, have notoriously breezed through the indigestible terms and conditions of many applications, and Apple has provided a concise, comprehensible pathway for human involvement in data collection. Human involvement, one of the algorithmic transparency standards Nicholas Diakopoulos advocates for in his article Accountability in Algorithmic Decision Making, is important because it gives users insight into the data collection and usage process, allowing them to make informed decisions. This new point of contestation in the data pipeline is absolutely revolutionary.

But what does this mean for companies that benefit from third-party tracking? Facebook criticizes Apple’s position on privacy, claiming that the new privacy feature stifles individualized advertising and is detrimental to small businesses. There is a constant tension between transparency and competitive advantage, and companies like Facebook are concerned about potential losses in profit. So when Facebook makes these claims and threatens an antitrust lawsuit, alleging that Apple is using its market power to force third-party companies to abide by rules that Apple-branded apps are not required to follow, it calls into question whether Apple is indeed taking a stand for user privacy or acting in its own self-interest.

CEOs of Facebook and Apple, Mark Zuckerberg and Tim Cook respectively, in contention over user privacy

Whether there is an underlying or motivating self-interest for Apple to feature its new privacy design, it stands to reason that adding a new point of contestation in the data pipeline is a landmark proposition.
