The battle between COVID contact tracing and privacy
By Anonymous | July 9, 2021

In an effort to curb COVID-19 case counts, many countries have employed contact tracing apps to track infections. Although implementations differ, the main idea is that users download an app onto their phone, which notifies them if they may have been exposed to COVID-19 by being in close proximity to someone who has tested positive. This sounds good in theory, until you consider the privacy implications – depending on the implementation, the app’s developer could gain sweeping access to your movements, including where you go, who you meet, who you live with, where they go, who they meet, and so on. Classifying countries into three groups – authoritarian countries using authoritarian measures, free countries using authoritarian measures under emergency powers, and free countries using standard measures – we see that a perfect balance between contact tracing and privacy has been difficult to achieve. Let’s take a look at a few examples of each group.

Authoritarian and authoritarian
China has taken strong measures to contain the virus, including an app that labels each individual as green (safe), yellow (potentially exposed), or red (high risk). The privacy issues are clear here: the invasive app tracks all movements, and the algorithm that assigns colors to individuals is a black box with no transparency. Although these issues remain, Chinese contact tracing has nevertheless been successful. The city of Shenzhen managed to reduce the average time to identify and isolate potential patients from 4.6 days to 2.7 days, leading to a reproduction number of 0.4 (anything below 1 indicates that the outbreak will die out).
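As a quick illustration of why a reproduction number below 1 means an outbreak dies out, consider a deliberately simplified model that assumes a constant reproduction number R across transmission generations (the numbers below are only an example):

```latex
% Expected cases after n transmission generations, starting from I_0 cases
% with a constant reproduction number R:
I_n = I_0 \cdot R^n
% Example with R = 0.4 and I_0 = 1000 current cases:
% I_1 = 400, \quad I_2 = 160, \quad I_3 = 64, \quad \dots \quad I_n \to 0
% With R > 1 the same expression grows each generation, so R = 1 is the threshold.
```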
Russia (Moscow in particular) has also taken a strong approach, forcing residents to use a QR-code-based system and monitoring citizens’ movements within the city. Even with this invasive approach, Moscow has seen mass hospitalizations, second and third waves, and successive record-shattering daily death counts.

Free but temporarily authoritarian
Israel has successfully implemented a system where the Shin Bet (domestic security service) receives PII (personally identifiable information) of COVID-19-positive patients from the Health Ministry, which is then cross-referenced with a database that can identify people who came into close contact with the patient in the last two weeks. During the scheme’s first rollout, cases successfully decreased to single digits, until the whole operation was shut down by a Supreme Court ban. By the time the system returned under a new law three months later, Israel was well into its second wave, and case counts doubled three times before starting to decrease again.
France initially tried to amend its emergency law to allow collection of health and location data using “any measure”. This was ultimately rejected as being too invasive, even under emergency powers. The French contact tracing app also faced major issues, sending only 14 notifications after 2 million downloads, ultimately leading a quarter of its users to uninstall the app.

Free and free
Taiwan has managed to implement a contact tracing system relying entirely on civil input and open-source software, enhancing privacy by decentralizing the data, not requiring registration, and using time-insensitive Bluetooth data. Radically different from other countries’ systems in its heavy emphasis on open-source software, the Taiwanese method has allowed efficient and effective contact tracing while minimizing privacy infringements.
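The post does not describe the Taiwanese app’s internals, so the sketch below is only a generic illustration of the decentralized Bluetooth approach it alludes to: phones broadcast short-lived random identifiers, and exposure matching happens on the user’s own device rather than on a central server. All function names and parameters here are assumptions for illustration, not the actual protocol.

```python
import os
import hashlib

def daily_key() -> bytes:
    """Generate a random per-day key that never leaves the phone."""
    return os.urandom(16)

def ephemeral_id(key: bytes, interval: int) -> bytes:
    """Derive a rotating identifier to broadcast over Bluetooth.

    Rotating the identifier every interval prevents long-term tracking
    of any single device.
    """
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts its ephemeral IDs throughout the day (e.g. 15-minute slots).
key_a = daily_key()
broadcast_by_a = {ephemeral_id(key_a, i) for i in range(96)}

# Phone B stores whatever identifiers it overheard nearby (no central server involved).
heard_by_b = {ephemeral_id(key_a, 10), ephemeral_id(key_a, 11)}

# If A later tests positive, A publishes only its daily key.
# B re-derives A's ephemeral IDs locally and checks for overlap on-device.
published_key = key_a
rederived = {ephemeral_id(published_key, i) for i in range(96)}
if heard_by_b & rederived:
    print("Possible exposure detected - decided entirely on the user's device.")
```

The key privacy property in this style of design is that only the daily key of a user who tests positive is ever published; raw contact logs never leave anyone’s phone.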
Japan originally used a manual form of contact tracing, relying mainly on calling citizens individually. Once this became infeasible due to large case counts and respondents’ unwillingness to fully disclose information, the government developed an app (COCOA) designed to notify users of potential exposures using Bluetooth technology, only to find out four months later that a bug had caused the app to fail to send notifications, drawing widespread condemnation.

Relations with privacy laws
It is important that contact tracing measures are compatible with relevant privacy laws and that curtailments of civil liberties are kept to only what is necessary. Countries have been grappling with this issue ever since COVID-19-specific tracing apps became available. Norway, one of the first countries to roll out a tracing app, saw its Data Protection Authority order the Norwegian Institute of Public Health to suspend the app’s usage and delete all data it had collected just two months after it first became available. Lithuania similarly suspended the use of a tracing app over fears of violating EU privacy laws. Germany proposed two legal amendments that would have allowed broad collection of contact details and location data to fight the pandemic; both were rejected as too invasive. Although the European General Data Protection Regulation (GDPR) creates strict limits on the collection and processing of data, it allows some exceptions during public health emergencies, provided that the data is used only for its stated purpose – which brings us to the final important section.

Only using data for health purposes
Two principles from the Fair Information Practice Principles (FIPPs) provide a checklist to make sure that data collected through these systems is used appropriately – purpose specification (be transparent about how data is used) and minimization (collect only what is necessary). A privacy policy should be made public that clearly states what the tracing system can and cannot do. A lack of clear boundaries can quickly become a slippery slope of misuse, corruption, and distrust toward the government. Singapore’s contact tracing program, for example, originally stated that data would “only be used solely for the purpose of contact tracing of persons possibly exposed to covid-19.” Months later, the government admitted that the data had been used for criminal investigations, forcing both the privacy policy and the relevant legislation to be amended.

Putting everything in context
It is important to remember that contact tracing apps are just one part of the equation. Whether countries succeeded or struggled while using these apps, correlation does not imply causation. We need to evaluate these systems in the greater context of the whole pandemic – although it is understandable for countries to temporarily grant emergency powers and curtail some civil liberties, we need to holistically evaluate whether the benefits of such systems outweigh the potential risks and the information we give up, and ensure that appropriate measures are put in place to minimize potential misuse and abuse of data.

References:
https://futurism.com/contact-tracing-apps-china-coronavirus
https://www.cidrap.umn.edu/news-perspective/2020/04/study-contact-tracing-slowed-covid-19-spread-china
https://www.dailymail.co.uk/news/article-9730113/Moscow-gripped-growing-Covid-catastrophe-Russian-capital-records-144-deaths-24-hours.html
https://www.cbsnews.com/news/coronavirus-pandemic-russia-digital-tracking-system-moscow/
https://www.brookings.edu/techstream/how-israels-covid-19-mass-surveillance-operation-works/
https://www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads/
https://www.oecd.org/coronavirus/policy-responses/ensuring-data-privacy-as-we-battle-covid-19-36c2f31e/
https://covirus.cc/social-distancing-app-intro.html
https://asia.nikkei.com/Spotlight/Comment/Japan-s-flawed-COVID-19-tracing-app-is-digital-black-eye-for-Tokyo
https://www.bloomberg.com/news/videos/2020-07-22/contact-tracing-effective-without-invading-privacy-taiwan-digital-minister-explains-video

Contact Tracing COVID-19 Throws a Curveball to GDPR, Data Rights
https://www.technologyreview.com/2021/01/05/1015734/singapore-contact-tracing-police-data-covid/
https://www.csoonline.com/article/3606437/data-privacy-uproar-in-singapore-leads-to-limits-on-contact-tracing-usage.html

Photos:
https://www.brookings.edu/techstream/how-israels-covid-19-mass-surveillance-operation-works/
https://buzzorange.com/techorange/en/2021/05/18/gdpr-compliant-app-fights-covid-19-with-privacy-in-mind/

Cross-Border Transfer of Data – Case Study of Didi
By Elizabeth Zhou | July 9, 2021

Didi, the Uber of China, submitted its prospectus in the United States on June 10 and officially went public on June 30. July 1 was the Chinese Communist Party’s 100th anniversary celebration. On July 2, the Cyberspace Administration of China announced a cybersecurity review of Didi, and on July 4 Didi was removed from Chinese app stores. Because of this regulatory change, Didi lost roughly US$15 billion in market value on the US stock market, and it is being sued by American shareholders over the stock plunge the regulatory action caused. Didi’s failure stems not only from China’s particular political environment, but also from Didi’s negligence around cross-border transfer of data.

What is cross-border transfer of data? “What data can be transferred out?” and “What data must be stored inside the country?” are the two major questions around this topic. Different countries have different policies. In Europe, the GDPR stipulates that personal data can flow freely within the European Union and the European Economic Area, while cross-border transfers of personal data out of the European Economic Area to a third country must be based on an adequacy decision or another valid data transfer mechanism, such as Binding Corporate Rules, standard contractual clauses, or the EU-US Privacy Shield. The CCPA, by contrast, places no such restrictions on cross-border transfer of data. China has the strictest management of cross-border data. The Chinese Cyber Security Law (CCSL) stipulates that important data must be stored within the country’s territory; if a transfer overseas is truly necessary for business needs, a safety assessment must be carried out in accordance with measures formulated by the relevant departments of the State Council. Compared with Europe and the United States, China subjects cross-border data to strict scrutiny in the name of personal privacy and national security.

Why Did Didi Fail?

The United States passed the Holding Foreign Companies Accountable Act (HFCAA) last year, which specifically requires that any company going public in the United States accept review by the US Public Company Accounting Oversight Board. This requirement is indeed strict: a Chinese company faces either having its entire accounting workpapers reviewed or being forced to delist. But such a review conflicts with Chinese securities law. As mentioned above, the CCSL requires that no unit or individual provide relevant information and data overseas without authorization from the State Council. Caught in this dilemma, Didi chose to bypass the domestic process and submit its data to the US directly. Because of Didi’s negligence regarding the CCSL, the Chinese government imposed strict regulatory action on Didi, leading to the punishments described above.

What Can We Learn from Didi?

Cross-border transfer of data involves two countries’ policies, which raises the barrier for companies that want to expand overseas. Especially when the countries involved have a volatile political climate, companies should be more cautious and patient.

References

https://www.scmp.com/business/banking-finance/article/3140272/didi-chuxing-sued-american-shareholders-over-stock-plunge

https://www.futurelearn.com/info/courses/general-data-protection-regulation/0/steps/32449

https://www.clearygottlieb.com/-/media/files/alert-memos-2018/2018_07_13-californias-groundbreaking-privacy-law-pdf.pdf

http://www.casted.org.cn/channel/newsinfo/8127

https://www.sec.gov/news/press-release/2021-53

Censorship on Instagram and the Ethics of Instagram’s Policy and Algorithm
By Anonymous | July 9, 2021

Facebook-owned Instagram censors some folks disproportionately. For example, folks who are hairy, fat, dark skinned, disabled, or not straight-passing are more likely to be censored than others. Additionally, folks who express dissent against an institution are often suppressed.

Illustration by Gretchen Faust of an image posted by Petra Collins, which was initially removed by Instagram.

In this article, I want to address the ethics of Instagram’s policies around who is and who is not given space to be themselves on this platform.

Instagram’s community guidelines state that most forms of nudity are not allowed to be posted, except for “photos in the context of breastfeeding, birth giving and after-birth moments, health-related situations (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.” This raises the question of what counts as “an act of protest”? And who is permitted to protest on this platform?

Sheerah Ravindren, whose pronouns are they/she, is a creative assistant, activist, and model. You may have seen them in Beyonce’s “Brown Skin Girl” music video. In their bio, they write that they are a hairy, dark-skinned, Tamil, nonbinary, immigrant femme. Sheerah uses their platform to advocate for marginalized folks and raise awareness around issues that affect them and their communities. She speaks out about the genocide against Tamil people, aims to normalize melanin, body hair, marks, and rolls, and adds a dimension of digital representation for nonbinary folks of the diaspora.

Among the various types of content Sheerah posts, they have posted images of themselves with captions that convey their intent to protest eurocentric beauty standards and societal norms of femininity. Before posting images where they are not wearing a top, they edit the images to fully cover nipples, in order to meet Instagram’s community guidelines. However, when posting such content, Sheerah has been censored by Instagram — their posts have been taken down, and they have received the following message: “We removed your post because it goes against our Community Guidelines on nudity or sexual activity.” Sheerah’s post was not considered an act of protest from Instagram’s perspective and instead was sexualized unnecessarily. There are numerous other Instagram posts, depicting people who are lighter skinned, less hairy, and skinnier, wearing similar outfits, that did not get removed. For instance, there are many photography accounts that feature skinny hairless white women who are semi-clothed/semi-nude as well. What made Sheerah’s post less appropriate?

Instagram notification indicating that the user’s post has been taken down because it does not follow the community guidelines on nudity or sexual activity.

The policies around what is considered appropriate to post on Instagram seem to be inconsistently enforced, which could be due to algorithmic bias. The algorithm that determines whether a post complies with guidelines may perform better (with higher accuracy) on posts that depict lighter skinned, less hairy, and skinnier folks. This could be due to the model being trained on data that is not fully representative of the population (the training data may lack intersectional representation), among other potential factors. Moreover, the caption that accompanies an image may not be taken into account by the algorithm; but captions could be critical to contextualizing images and recognizing posts that are forms of protest.

In the context of justice, a basic ethical principle outlined in The Belmont Report, it seems that the benefits and risks of Instagram’s algorithm are not evenly distributed across users. Folks who are already marginalized in their everyday lives outside of Instagram are further burdened by the sociotechnical harm they experience on Instagram when their posts are taken down. The erasure of marginalized folks on this platform upholds existing systems of oppression that shame, silence, and devalue people who are dark skinned, fat, hairy, disabled, or trans, and those who do not conform to heteronormative ideals.

While Instagram’s help center contains documentation on how a user can report content that they think violates the community guidelines, there is no documentation accessible to the user on how to submit an appeal. If a user posts something that follows the community guidelines but is misclassified by the algorithm or misreported by another user and thereby deemed inappropriate, does the user have a real opportunity to advocate for themselves? Is the evaluation process of users’ appeals fair and consistent?

When Sheerah’s post was taken down, they submitted an appeal, and their post was later put back up. But shortly afterwards, their post was taken down again, and they received the same message as before. This back and forth reveals that Instagram may not have updated their algorithm after reviewing the appeal. By not making that update, Instagram missed a crucial step towards taking accountability, serving the user, and preparing their service to not make the same mistakes when other users post similar content down the line. Presenting the option to appeal but not responding to the appeal in a serious manner is disrespectful to the user’s time.

Currently, Instagram’s community guidelines and the algorithm that enforces them do not protect all users equally, and the appeal process seems performative and ineffective in some situations. The algorithm behind Instagram’s censorship needs transparency, and so does the policy for how Instagram handles appeals. Moreover, the guidelines need to be interpreted more comprehensively regarding what is considered an act of protest. Instagram developers and policymakers must take action to improve the experience of users who bear the most consequences at this time. In the future, I hope to see dark skinned, hairy, queer women of color, like myself, take space on digital platforms without being censored.

What is fair machine learning? Depends on your definition of fair.
By Anonymous | July 9, 2021

Machine learning models are being used to make increasingly complex and impactful decisions about people’s lives, which means that the mistakes they make can be equally as complex and impactful. Even the best models will fail from time to time — after all, all models are wrong, but some are useful — but how and for whom they tend to fail is a topic that is gaining more attention.

One of the most popular and widely used metrics for evaluating model performance is accuracy. Optimizing for accuracy teaches machines to make as few errors as possible given the data they have access to and other constraints; however, chasing accuracy alone often fails to consider the context behind the errors. Existing social inequities are encoded in the data that we collect about the world, and when that data is fed to a model, it can learn to “accurately” perpetuate systems of discrimination that lead to unfair outcomes for certain groups of people. This is part of the reason behind a growing push for data scientists and machine learning practitioners to make sure that they include fairness alongside accuracy as part of their model evaluation toolkit.

Accuracy doesn’t guarantee fairness.

In 2018, Joy Buolamwini and Timnit Gebru published Gender Shades, which demonstrated how overall accuracy can paint a misleading picture of a model’s effectiveness across different demographics. In their analysis of three commercial gender classification systems, they found that all three models performed better on male faces than female faces and lighter faces than darker faces. Importantly, they noted that evaluating accuracy with intersectionality in mind revealed that even for the best classifier, “darker females were 32 times more likely to be misclassified than lighter males.” This discrepancy was the result of a lack of phenotypically diverse datasets as well as insufficient attention paid to creating facial analysis benchmarks that account for fairness.

Buolamwini and Gebru’s findings highlighted the importance of disaggregating model performance evaluations to examine accuracy not only within sensitive categories, such as race and gender, but also across their intersections. Without this kind of intentional analysis, we may continue to produce and deploy highly accurate models that nonetheless distribute this accuracy unfairly across different populations.
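As a small, hypothetical illustration of this kind of disaggregated evaluation, the sketch below computes accuracy within each intersection of two sensitive attributes. The records and group labels are invented for demonstration and are not from the Gender Shades study.

```python
from collections import defaultdict

# Hypothetical evaluation records: (skin_tone, gender, prediction_was_correct)
results = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for skin, gender, ok in results:
    group = (skin, gender)
    totals[group] += 1
    correct[group] += ok

# Overall accuracy can look acceptable while one intersection lags badly.
print("overall:", sum(correct.values()) / sum(totals.values()))
for group in totals:
    print(group, correct[group] / totals[group])
```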

What does fairness mean?

Recognizing the importance of evaluating fairness across various sensitive groups is the first step, but how do we measure fairness in order to optimize for it in our models?

Researchers have found several different definitions. One common measure is statistical or demographic parity. Suppose we had an algorithm that screened job applicants based on their resumes — we would achieve statistical parity across gender if the fraction of acceptances from each gender category was the same. In other words, if the model accepted 40% of the female applicants, it should accept roughly 40% of the applicants from each of the other gender categories as well.

Another definition known as predictive parity would ensure similar fractions of correct acceptances from each gender category (i.e. if 40% of the accepted female applicants were true positives, a similar percentage of true positives should be observed among accepted applicants in each gender category).

A third notion of fairness is error rate balance, which we would achieve in our scenario if the false positive and false negative rates were roughly the same across gender categories.
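To make these three definitions concrete, here is a minimal sketch that computes the acceptance rate (demographic parity), precision among accepted applicants (predictive parity), and false positive/negative rates (error rate balance) for each group in a toy screening example. The data and group labels are invented for illustration.

```python
import numpy as np

def group_metrics(y_true, y_pred, groups, group):
    """Return acceptance rate, precision, false positive rate, and false
    negative rate for one demographic group."""
    mask = groups == group
    yt, yp = y_true[mask], y_pred[mask]
    acceptance = yp.mean()                                          # demographic parity
    precision = yt[yp == 1].mean() if (yp == 1).any() else np.nan   # predictive parity
    fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan         # error rate balance
    fnr = 1 - yp[yt == 1].mean() if (yt == 1).any() else np.nan
    return acceptance, precision, fpr, fnr

# Toy data: 1 = "would succeed on the job", predictions from some screening model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

for g in ("F", "M"):
    print(g, group_metrics(y_true, y_pred, groups, g))
```

Comparing these numbers across groups makes the three definitions directly checkable: equal acceptance rates give demographic parity, equal precision gives predictive parity, and equal false positive and false negative rates give error rate balance.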

These are a few of many proposed mathematical definitions of fairness, each of which has its own advantages and drawbacks. Some definitions can even be contradictory, adding to the difficulty of evaluating fairness in real-world algorithms. A popular example of this was the debate surrounding COMPAS, a recidivism prediction tool that did not achieve error rate balance across Black and White defendants but did satisfy the requirements for predictive parity. In fact, because the base recidivism rate for both groups was not the same, researchers proved that it wasn’t possible for the tool to satisfy both definitions at once. This led to disagreement over whether or not the algorithm could be considered fair.

Fairness can depend on the context.

With multiple (and sometimes mutually exclusive) ways to measure fairness, choosing which one to apply requires consideration of the context and tradeoffs. Optimizing for fairness often comes at some cost to overall accuracy, which means that model developers might consider setting thresholds that balance the two.

In certain contexts, these thresholds are encoded in legal rules or norms. For example, the Equal Employment Opportunity Commission uses the four-fifths rule, which enforces a form of statistical parity in employment decisions by setting 80% as the minimum acceptable ratio between the selection rates of groups based on race, sex, or ethnicity.
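As a hedged sketch with invented selection rates, the four-fifths check itself is just a ratio test:

```python
def passes_four_fifths(selection_rates: dict) -> bool:
    """Adverse-impact check: the lowest group's selection rate must be at
    least 80% of the highest group's selection rate."""
    lowest, highest = min(selection_rates.values()), max(selection_rates.values())
    return lowest / highest >= 0.8

# Hypothetical hiring outcomes by group.
print(passes_four_fifths({"group_a": 0.30, "group_b": 0.27}))  # 0.27 / 0.30 = 0.90 -> passes
print(passes_four_fifths({"group_a": 0.30, "group_b": 0.20}))  # 0.20 / 0.30 = 0.67 -> fails
```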

In other contexts, the balance between fairness and accuracy is left to the discretion of the model makers. Platforms such as Google’s What-If Tool, AI Fairness 360, and other automated bias detection tools can aid in visualizing and understanding that balance, but it is ultimately up to model builders to evaluate their systems based on context-appropriate definitions of fairness in order to help mitigate the harms of unintentional bias.

Apple Takes a Stand For Privacy, Revolutionary? Or is there an underlying motive?
By Anonymous | July 9, 2021

On April 26, 2021, Apple released its new software update, iOS 14.5, with a slew of features, including its highly discussed privacy feature. Speaking on a virtual International Privacy Day panel, Tim Cook said, “If a business is built on misleading users, on data exploitation, on choices that are no choices at all, then it does not deserve our praise. It deserves reform.” His speech takes a jab at Facebook’s stance on privacy, but it can also be read more broadly as placing Apple at the forefront of privacy advocacy. However, is Apple really making such a revolutionary change?

Before we can answer that question, what does Apple’s privacy feature actually entail? Apple’s website highlights App Tracking Transparency, which requires apps to obtain user consent before tracking their data for third-party companies. It still allows the original company to track user data and allows parent companies to track user data from their subsidiaries; for example, Facebook can utilize data it gathers from Instagram. However, it does not allow data to be shared with a data broker or other third party if the user does not explicitly consent to third-party tracking.

Apple’s App Tracking Transparency feature asks user consent for third-party app data tracking

 

So what does this mean for the ordinary user? Actually, it means a lot. People, for the most part, breeze through the indigestible terms and conditions of many applications, and Apple has provided a concise, comprehensible pathway for human involvement in data collection. Human involvement, one of the algorithmic transparency standards Nicholas Diakopoulos advocates for in his article Accountability in Algorithmic Decision Making, is important because it gives users insight into the data collection and usage process, allowing them to make informed decisions. This new point of contestation in the data pipeline is absolutely revolutionary.

But what does this mean for companies that benefit from third-party tracking? Facebook criticizes Apple’s position on privacy, claiming that the new privacy feature stifles personalized advertising and is detrimental to small businesses. There exists a constant tension between transparency and competitive advantage, and companies like Facebook are concerned about potential losses in profit. So when Facebook makes these claims and threatens an antitrust lawsuit, arguing that Apple is using its market power to force third-party companies to abide by rules that Apple-branded apps are not required to follow, it calls into question whether Apple is indeed taking a stand for user privacy or acting in its own self-interest.

CEOs of Facebook and Apple, Mark Zuckerberg and Tim Cook respectively, in contention over user privacy

Whether there is an underlying or motivating self-interest for Apple to feature its new privacy design, it stands to reason that adding a new point of contestation in the data pipeline is a landmark proposition.

How young is too young for social media?
By Mariam Germanyan | July 9, 2021

Children’s Online Privacy Protection Act (COPPA)

Time traveling back to 1998: the Federal Trade Commission (FTC) surveyed about 200 websites with user profiles and discovered that 89 percent of them collected personal data and information from children, i.e., minors. Of those websites, 46 percent did not disclose that such personal data was being collected or what the collected data was being used for – failing, in the terms of Solove’s taxonomy, to explain their data collection, data usage, and data storage. The survey results helped Congress develop the Children’s Online Privacy Protection Act (COPPA) of 1998, 15 U.S.C. 6501–6505, which requires the FTC to issue and enforce regulations concerning children’s online privacy and restricts the online collection of data and information from children under the age of 13. The protection applies only to children under 13, recognizing that younger children are more vulnerable to overreaching by marketers and may not understand the safety and privacy issues created by the online collection of personal information.

Consider Your First Social Media Account
Taking that into consideration, I ask you to think and reflect on the following questions listed below:
* How old were you when you created your first social media account?
* Why did you feel it was necessary to make an account in the first place?
* Did you lie about your age in order to register for an account?
* If so, why did you do it?
* How easy was it to make the account?
* Were you aware of the Privacy Policy associated with the account?
* Did you read it?
* Looking back from an older age, was it worth it?

Advantages and Disadvantages of Social Media at an Early Age
Now let’s consider the advantages and disadvantages of starting social media at an early age. The advantages include, but are not limited to, giving young users a place to converse and connect with others in their community, which can help them feel better about themselves and grow more confident. A major advantage is that social media can spark activism at an early age, letting individuals find support and either join or start communities where they fit in. On the other hand, at age 13 the prefrontal cortex is barely halfway developed, putting young users in a risky, vulnerable phase in which they are not fully aware of the consequences as they experiment and encounter risky opportunities. In addition, most children are not aware of data privacy and can put their safety at risk: they can be exposed to strangers, cyberbullies, and content they should not see, and social media can take a toll on mental health by setting unrealistic expectations about appearance at an early age. As they advance through their teenage years, they become more aware of the risks and consequences of online behavior. On a final note, we should also consider the amount of productive time that children lose to social media.

After reading this blog post and knowing what you know now, I ask you to consider what age you consider old enough for social media.

References

Children Under 13 Should Not Be Allowed on Social Media, and Here is Why. (2020,
September 3). BrightSide – Inspiration. Creativity. Wonder.
https://brightside.me/inspiration-family-and-kids/children-under-13-should-not-be-allowed-on-social-media-and-heres-why-798768/

Complying with COPPA: Frequently Asked Questions. (2021, January 15). Federal Trade
Commission. https://www.ftc.gov/tips-advice/business-center/guidance/complying-coppa-frequently-asked-questions-0

When is the Right Age to Start Social Media? (2020, August 14). Common Sense Education.
https://www.commonsense.org/education/videos/when-is-the-right-age-to-start-social-media

Data Privacy for High Schoolers
By Katie Cason | July 9, 2021

While I thankfully didn’t spend time in the supervision office in high school, I often heard of the troublemakers floating in and out of ‘Mr. Seidel’s office’ for a wide range of things. If someone wasn’t called in for violence, cheating, or talking back in class, they were reprimanded for something they had posted on social media or an incriminating text they had sent.

Having graduated high school five years ago, I find that the rise of minors with smartphones and social media accounts raises the big issue of data privacy for high school students. Moreover, as learning shifted to remote because of the Covid-19 pandemic, I imagine that this has become a much larger issue.

Screening High School Students’ Social Media with AI

One specifically concerning application of AI is the use of big data analytics to screen students’ social media accounts for potential violence. Many high school administrations have been turning to this type of technology to prevent violence and other harms on their campuses. While one can see the good that could come from this form of prevention, students do not consent to this intrusion on their privacy. It could also harm students who are flagged incorrectly: false positives from this technology could be detrimental to a student’s relationship with the adults at school. And lastly, many educators are not trained to use big data, nor are they qualified to monitor and reprimand the public.

Covid-19 Bringing Students Online and Their Privacy into Question

With the onset of the global pandemic in March 2020, high school students shifted entirely to online learning, meaning that every engagement with their educators happened through an online platform. These platforms include, but are not limited to, Zoom, Skype, Babbel, Brainpop, and Honorlock, which enable teachers and students to interact through video conferencing, classroom activities, message communication, and test proctoring. The first concerning point about the use of these platforms is that, because of the extensive and inconvenient technical jargon in the privacy policies of these educational digital tools, students and teachers do not know what data is being collected about them. A single data breach could bring their worlds crashing down. As the people in charge rushed to move schools online, legal requirements for privacy agreements were often waived. Additionally, many of the online tools initially used were free options found by teachers; eventually, these tools stopped being free, and students and teachers coughed up credit card information, raising the issue of financial data security. There is overwhelming concern among parents about the selling of their children’s data to big tech firms.

It is clear that the convenience and efficiency of technological tools for school administrators and teachers often causes them to overlook the potential harms to their students. There is little legislation in place to protect minors and their data privacy, especially when surveillance can be framed as protecting others. Like many data privacy issues, this is one where legislators are slow to catch up with the fast-paced innovation of the tech industry.

References:

https://www.edweek.org/technology/massive-shift-to-remote-learning-prompts-big-data-privacy-concerns/2020/03

https://flaglerlive.com/120860/social-sentinel-pt/

https://gizmodo.com/schools-are-using-ai-to-check-students-social-media-for-1824002976

https://ssd.eff.org/en/module/privacy-students

Japan Using AI as Matchmaker
By Joshua Chung | July 9, 2021

In order to combat low birth rates and an aging population, the Japanese government has invested $19.2 million in AI technology to aid matchmaking for Japanese citizens. The hope is that this artificial intelligence will increase dating, and therefore the birth rate, by providing compatible matches based on users’ hobbies, interests, and values. Individual prefectures have offered matchmaking services for a few years; however, their simplistic algorithms have not been very effective so far. Using AI for matchmaking is common among online dating services, which are the most commonly used method of meeting potential love interests in the US. However, the key differences lie in the autonomy of the user and the authority of the matchmaker.
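The government has not published how its system scores compatibility, so the following is purely an illustrative sketch of one common approach: representing each user’s hobbies, interests, and values as a numeric profile and ranking candidates by similarity. Every name and number here is an assumption, not a description of the actual Japanese system.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Compatibility score between two users' numeric questionnaire answers."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical questionnaire answers on a 1-5 scale.
user1 = {"outdoors": 5, "reading": 2, "family_oriented": 4}
user2 = {"outdoors": 4, "reading": 1, "family_oriented": 5}
user3 = {"outdoors": 1, "reading": 5, "family_oriented": 2}

print(cosine_similarity(user1, user2))  # higher score -> recommended as a match
print(cosine_similarity(user1, user3))
```

Even this toy version shows how opaque a single compatibility number can be to the people being matched, which is part of the trust issue discussed below.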

Unlike online dating services, which allow users to look through a list of potential suitors and notify them when there is mutual interest, the government’s dating service is designed to give matches based solely on the algorithm’s recommendation. This prevents users from having any input in the decision-making process. Of course, users can choose not to meet these matches, but they may feel pressured to do so because of the nature of the service. First, the dating service is government-sponsored, which may give users the impression both that the service is proficient at finding optimal matches and that there is urgency in finding a potential spouse. This may pressure users to follow through with matches made by the system, or even to start dating, whether or not they actually like their match. Second, using a sophisticated AI system that is hardly interpretable by a layperson could influence users’ opinions about the trustworthiness of the matches. By preventing users from seeing others whom the system did not match them with, all the trust is placed in the system’s competency. These differences have led to a somewhat dystopian version of a commonly used dating tool. Although results have been positive so far, more time will be needed to see how effective this tool is at leading to marriages and, hopefully, higher birth rates. In fact, it is difficult to say whether this AI system is really the silver bullet for Japan’s low birth rate to begin with.


As AI has continued to advance and solve problems once deemed intractable, the public seems to have put greater trust in the ability and efficacy of such algorithms. This trust has led to greater adoption of the technology and with it many benefits, but not without harms as well. At times, these technologies have been treated as a panacea for all types of problems, and this use of AI to tackle low birth rates is an example. Critics have pointed out that dating may not even be the primary reason for low birth rates; issues such as lower income levels and lack of support for working mothers may matter more. The money invested in new AI technologies might have been better spent tackling those problems instead. Only time will tell.

 

References
https://www.bbc.com/news/world-asia-55226098

Algorithms for Love: Japan Will Soon Launch an AI Dating Service

https://english.kyodonews.net/news/2021/01/29224cc39864-japan-govt-banking-on-ai-to-help-singles-find-love.html

https://www.vox.com/recode/2020/2/14/21137096/how-tinder-matches-work-algorithm-grindr-bumble-hinge-algorithms

Image Sources

15 Things to Know About Baby


https://en.wikipedia.org/wiki/Heart_symbol

Do you like it? Or do you like it?
By Anonymous | July 9, 2021

Imagine physics governed by positive, but not negative; computer science described with one, but not zero; the human brain functioning with excitation, but not inhibition; a debate with pro, but not con; logic constructed with true, but not false. Indeed this is an impossible world, for if the aforementioned had such imbalanced representations, none of them could exist. The problem is that despite this self-evident conclusion, many ubiquitous systems in our data driven world are intentionally designed to represent information asymmetrically in order to extract some form of value from its misrepresentation.

When information systems are created, the representation of the information that is relevant to the system is intentionally defined through its design. Furthermore, the design of this representation determines the ways users can interact with the information it represents, and the motives that underlie these intentional decisions are often shaped by social, economic, technical, and political dynamics. The objective of this discussion is to consider the value of misrepresentation, the design of misrepresentation, and the ethical implications this has for individuals. For the sake of brevity, the intentional design of a single feature within Facebook’s user interface will serve as the archetype of this analysis.

Consider for a moment that the Facebook “Like” button exists without a counterpart. There is no “Dislike” button and no equivalent symbolic way to represent an antithetical position to the self-declared proponents of the information who had the ability to express their position with a “Like”. Now consider that approximately 34.53% of Earth’s human population is exposed to the information that flows through Facebook’s network every month (monthly active users). Given the sheer scale of this system, it seems logical to expect the information on this network to have the representational capacity to express the many dimensions that information often comes in. However, this is not the world we live in, and Facebook’s decision to design and implement a system-wide unary representation of data was a business decision, not a humanitarian one.

Value. In recent fMRI experiments it has been shown that there is a positive correlation between the number of “Likes” seen by a user and neural activity in regions of the brain that control attention, imitation, social cognition, and reward processing. The general exploitation of these neural mechanisms by advertisers is what forms the foundation of what is known today as the Attention Economy. In the Attention Economy what is valuable to the business entities that comprise this economy is the user’s attention. Analogously, just as coal can be transformed into electrical energy, a user’s attention can be transformed into monetary gains. Therefore, if harvesting more user attention equates to greater monetary gains, the problem for advertisement companies such as Facebook becomes finding mechanisms to efficiently yield higher crops of user attention. To no surprise, the representation of data in Facebook’s user interface is designed to support this exact purpose.

By design. Newton’s first law of motion states that in an inertial frame of reference an object either remains at rest or continues to move at a constant velocity unless acted upon by a force. Similarly, information propagates through a network with many of the same properties that an object moving through space has. As such, information will continue to propagate through a network if external forces do not inhibit its propagation. In regard to Facebook, the mechanism that would serve to modulate the propagation of information through the network is the “Dislike” button. If there was the ability to “Dislike” information on Facebook in the same way that one can “Like” information, then the value of the information could, at a minimum, be appraised by calculating the difference between the likes and the dislikes. However, since the amount of disagreement or dislike isn’t visibly quantified, all information will always have a positive level of agreement or “Like”, giving the impression that the information is undisputedly of positive or true value.

The other prominent factor that influences the propagation of data on Facebook’s network is the amount of quantified endorsement (i.e. the number of likes) a post has. In addition to Facebook’s black-boxed algorithms used for the News Feed and other content suggestions, the number of likes a piece of information has induces a psychological phenomenon known as social endorsement. Social endorsement is the process by which information gains momentum (i.e. gains more likes at a non-linear rate that accelerates with the endorsement count) via the appearance of high social approval, which influences other users to “Like” the information with less personal scrutiny, or none at all.
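As a toy illustration of that momentum effect (not a model of Facebook’s actual ranking or of any published data), the simulation below makes each new viewer’s chance of liking a post grow with its current Like count; all parameters are invented:

```python
import random

random.seed(0)

def simulate_likes(steps: int, base_rate: float = 0.01, boost: float = 0.002) -> list:
    """Toy model: each viewer likes the post with a probability that rises
    with the current number of likes (social endorsement), capped at 0.9."""
    likes, history = 0, []
    for _ in range(steps):
        p = min(0.9, base_rate + boost * likes)
        if random.random() < p:
            likes += 1
        history.append(likes)
    return history

trajectory = simulate_likes(2000)
# Early growth is slow, then accelerates once the post already looks popular.
print(trajectory[100], trajectory[500], trajectory[1000], trajectory[-1])
```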

Putting these factors together, consider the “Pizzagate” debacle of 2016. Pizzagate is the name of a conspiracy theory that picked up traction on conspiracy theory websites and was subsequently amplified by Facebook through increased exposure from social endorsement and sharing. The information propagated through Facebook’s network reached a level of exposure that compelled Edgar Maddison Welch to storm the Washington, D.C. pizzeria at the center of the theory with an AR-15 to determine whether there was in fact a child prostitution ring. Needless to say, the story was false, and he was arrested.

So what’s the big idea?

Asymmetrical representations of data in these ubiquitous systems perpetuate a distorted society. Whether it be a democratic election, the Pizzagate debacle, or someone losing their savings to a cryptocurrency scam video promoting some new coin with “100K Likes”, information can reach beyond the digital realms of the internet to affect the physical world we inhabit. Ergo, the representations we use in the tools that come to govern our society matter.

Sources:

Kelkar, Shreeharsh. “Engineering a Platform: Constructing Interfaces, Users, Organizational Roles, and the Division of Labor.” 2018. doi:10.31235/osf.io/z6btm.

L.E. Sherman, A.A. Payton, L.M. Hernandez, et al. The Power of the “Like” in Adolescence: Effects of Peer Influence on Neural and Behavioral Responses to Social Media. Psychol Sci, 27 (7) (2016), pp. 1027-1035

Andrew Quodling. “The Dark Art of Facebook Fiddling with Your News Feed.” The Conversation. May 08, 2019. Accessed July 8, 2021. http://theconversation.com/the-dark-art-of-facebook-fiddling-with-your-news-feed-31014.

Primack, Brian A., and César G. Escobar-Viera. “Social Media as It Interfaces with Psychosocial Development and Mental Illness in Transitional Age Youth.” Child and Adolescent Psychiatric Clinics of North America. April 2017. Accessed July 9, 2021. https://www.ncbi.nlm.nih.gov/pubmed/28314452.

A “COMPAS” That’s Pointing in the Wrong Direction
By Akaash Kambath | July 9, 2021

It is well known that a number of issues plague the effectiveness and fairness of today’s criminal justice system. African Americans have been the victims of entrenched prejudice throughout the system’s history; when examining the imprisonment rate (defined as the number of prisoners per 100,000 people) for whites and nonwhites, the imprisonment rate for black prisoners was “nearly six times the imprisonment rate for whites”, a gross misrepresentation of the true demographics of the current U.S. population (Gramlich 2019). Although efforts have been made to decrease this disparity between the incarceration rates of whites and nonwhites, there is still a long way to go. One way the government has tried to improve the criminal justice system is by using a quantitative risk assessment tool to try to determine a convicted person’s likelihood of re-offending in an unbiased and fair way. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score is a risk assessment tool that purports to predict how likely someone is to recidivate. It was introduced by a company named Equivant, which uses a proprietary algorithm to calculate the score; a higher COMPAS score suggests that the convicted individual is more likely to re-offend. The COMPAS score is given to judges at sentencing, and they use the results of the risk assessment tool alongside the convicted person’s criminal record to make their decision.

One can believe that the actors involved in the creation and implementation of the COMPAS score in the criminal justice system – i.e. employees and executives at Equivant, policymakers, and judges using the tool – had a vision that this tool would help create a safer society by keeping those more “at risk” of recidivating off the streets and behind bars in a fair and accurate way. In recent years, however, COMPAS has come under much scrutiny for doing the contrary. In 2016, ProPublica investigated COMPAS’s performance and reported that the risk assessment tool had “falsely flag[ged] black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants” (Angwin et al. 2016). ProPublica’s article also mentions that COMPAS scores incorrectly classified whites as well, stating that white prisoners given low-risk scores ended up recidivating at a higher rate than predicted. Instead of judging an individual’s risk of re-offending in an unbiased way, the COMPAS score does the opposite – it reaffirms society’s racial stereotypes, viewing whites as “less dangerous” and rendering people of color as inherently more dangerous and more likely to commit a crime. Furthermore, a 2018 article published by The Atlantic described research challenging the claim that the algorithm’s predictions were better than human ones. 400 volunteers were given short “descriptions of defendants from the ProPublica investigation, and based on that, they had to guess if the defendant would commit another crime within two years.” In a stunning outcome, the volunteers answered correctly 63 percent of the time, and that number rose to 67 percent when their answers were pooled. This contrasts with the COMPAS algorithm’s accuracy of 65 percent, suggesting that the risk assessment tool is “barely better than individual guessers, and no better than a crowd” (Yong 2018). Compounding this deficiency, the secretive algorithm behind the COMPAS score was found to perform no better than a basic regression model constructed by two Dartmouth researchers, and although this may suggest that the risk assessment tool is unsophisticated, it is more likely that the tool has hit a “ceiling of sophistication.”
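To show what the ProPublica-style error-rate comparison looks like in practice, here is a sketch on entirely synthetic data: a basic logistic regression trained on two features, with false positive rates then disaggregated by group. It is not the COMPAS algorithm or the Dartmouth model, just an illustration of the evaluation method; every feature, group label, and outcome below is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Made-up features (age, number of prior offenses) and a group label.
age = rng.integers(18, 60, n)
priors = rng.poisson(2, n)
group = rng.choice(["A", "B"], n)
# Made-up outcomes loosely tied to priors only.
reoffend = (rng.random(n) < 0.2 + 0.05 * np.minimum(priors, 8)).astype(int)

# A deliberately simple two-feature baseline classifier.
X = np.column_stack([age, priors])
model = LogisticRegression().fit(X, reoffend)
pred = model.predict(X)

# ProPublica-style check: among people who did NOT reoffend,
# how often were they predicted high risk, per group?
for g in ("A", "B"):
    mask = (group == g) & (reoffend == 0)
    fpr = pred[mask].mean() if mask.any() else float("nan")
    print(g, "false positive rate:", round(float(fpr), 3))
```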

Though I have mentioned a number of ways in which COMPAS fails to comport with the vision that the actors surrounding the tool had in mind, I am not trying to suggest that technology cannot help the criminal justice system at all; in fact, a number of studies have described how even imperfect algorithms have helped the criminal justice system in various ways (Corbett-Davies, Goel, González-Bailón 2017). I just do not believe that it can help in this situation: determining an individual’s likelihood of recidivating cannot be done in a truly unbiased way, and the implementation of COMPAS has only hurt the criminal justice system’s efforts to incarcerate people of all races fairly. COMPAS’s harm to the criminal justice system and to people of color is a strong reminder of how imperative it is to conduct thorough research evaluating the costs and benefits of all alternatives when deciding to implement a new model or quantitative tool to solve a problem. As Sheila Jasanoff emphasizes in The Ethics of Invention, this includes the null alternative. She laments the fact that assessments and evaluations of quantitative risk assessment tools “are typically undertaken only when a technological project…is already well underway” (Jasanoff 2016). Researchers of these quantitative risk assessment tools also rarely consider the null option as an alternative; the idea of “doing without a product or process altogether” is never put under investigation. Jasanoff’s work suggests that although the actors pushing for a new technology may believe it will fulfill a vision in which humanity benefits significantly, without complete consideration and research on all of its alternatives, the implementation of such novel technology – in areas not restricted to the criminal justice system – may end up hurting humankind more than helping.

Works Cited

1. John Gramlich, “The gap between the number of blacks and whites in prison is shrinking,” Pew Research Center, April 30, 2019, https://www.pewresearch.org/fact-tank/2019/04/30/shrinking-gap-between-number-of-blacks-and-whites-in-prison/.

2. Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

3. Ed Yong, “A Popular Algorithm Is No Better at Predicting Crimes Than Random People,” The Atlantic, January 17, 2018, https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.

4. Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón, “Even Imperfect Algorithms Can Improve the Criminal Justice System”, The New York Times, December 20, 2017, https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html.

5. Sheila Jasanoff, “Risk and Responsibility,” in The Ethics of Invention: Technology and the Human Future, 2016, pp. 31-58.

6. Image source for first image: https://datastori.es/wp-content/uploads/2016/09/2016-09-23-15540.png