How young is too young for social media?
By Mariam Germanyan | July 9, 2021

Children’s Online Privacy Protection Act (COPPA)

Traveling back to 1998, the Federal Trade Commission (FTC) surveyed about 200 websites with user profiles and discovered that 89 percent of them collected personal data and information from children, i.e., minors. Of those websites, 46 percent did not disclose that such personal data was being collected or what the collected data was being used for, failing to explain data collection, data usage, and data storage in the terms Solove’s Taxonomy lays out. The survey results aided Congress in developing the Children’s Online Privacy Protection Act (COPPA) of 1998, 15 U.S.C. 6501–6505, which requires the FTC to issue and enforce regulations concerning children’s online privacy and which restricts the online collection of data and information from children under the age of 13. The protection applies only to children under 13, recognizing that younger children are considered more vulnerable to overreaching by marketers and may not understand the safety and privacy issues created by the online collection of personal information.

Consider Your First Social Media Account
Taking that into consideration, I ask you to think and reflect on the following questions listed below:
* How old were you when you created your first social media account?
* Why did you feel it was necessary to make an account in the first place?
* Did you lie about your age in order to register for an account?
* If so, why did you do it?
* How easy was it to make the account?
* Were you aware of the Privacy Policy associated with the account?
* Did you read it?
* Looking back from an older age, was it worth it?

Advantages and Disadvantages of Social Media at an Early Age
Now let’s consider the advantages and disadvantages of starting social media at an early age. The advantages include, but are not limited to, giving young users a place to converse and connect with others in their community, which can make individuals feel better about themselves and grow more confident. A major advantage is that it can spark activism at an early age, letting individuals find support and either join or start communities where they fit in. On the other hand, the disadvantages include the fact that at age 13 the prefrontal cortex is barely halfway developed, putting such young users at a risky age and in a vulnerable phase: they are not fully aware of what is possible as they experiment and encounter risky opportunities. In addition, most children are not aware of data privacy; they can put their safety at risk through exposure to strangers, cyberbullies, and content they should not see; and they can develop mental health issues, since social media takes a toll on self-image and expectations at an early age. As they advance through their teenage years, they become more aware of the risks and consequences of online behavior. On a final note, we should also consider the amount of productive time that children lose to such social media accounts.

After reading this blog post and knowing what you know now, I ask you to reflect on what age you consider old enough for social media.

References

Children Under 13 Should Not Be Allowed on Social Media, and Here is Why. (2020,
September 3). BrightSide - Inspiration. Creativity. Wonder.
https://brightside.me/inspiration-family-and-kids/children-under-13-should-not-be-allowed-on-social-media-and-heres-why-798768/

Complying with COPPA: Frequently Asked Questions. (2021, January 15). Federal Trade
Commission. https://www.ftc.gov/tips-advice/business-center/guidance/complying-coppa-frequently-asked-questions-0

When is the Right Age to Start Social Media? (2020, August 14). Common Sense Education.
https://www.commonsense.org/education/videos/when-is-the-right-age-to-start-social-media

Data Privacy for High Schoolers
By Katie Cason | July 9, 2021

While I thankfully didn’t spend time in the supervision office in high school, I often heard of the troublemakers floating in and out of ‘Mr. Seidel’s office’ for a wide range of things. If someone wasn’t called in for violence, cheating, or talking back in class, they were reprimanded for something they had posted on social media or an incriminating text they had sent.

Having graduated from high school five years ago, I find that the rise of minors on social media and smartphones raises big questions about data privacy for high school students. As learning shifted to being remote because of the Covid-19 pandemic, I imagine this has become a much larger issue.

Screening High School Students’ Social Media with AI

One especially concerning application of AI is the use of big data analytics to screen students’ social media accounts for potential violence. Many high school administrations have turned to this type of technology to prevent violence and other harms on their campuses. While one can see the good that could come from this form of prevention, students do not consent to this intrusion into their privacy. It can also harm students who are flagged incorrectly: the potential for false positives could be detrimental to a student’s relationship with the adults at school. Lastly, many educators are not trained to use big data, and they are not qualified to monitor the public or to reprimand people based on it.
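
To make the false-positive concern concrete, here is a minimal sketch; every number in it is an assumption for illustration, not a figure from any real screening vendor. It shows how even an accurate-sounding tool mostly flags innocent students when genuine threats are rare:

```python
# Hypothetical illustration: an accurate-sounding screening tool still
# produces mostly false positives when genuine threats are rare.
# Every number below is an assumption for illustration only.

students = 10_000            # posts/accounts screened
base_rate = 0.001            # assume 0.1% contain a genuine threat
sensitivity = 0.95           # tool catches 95% of real threats
false_positive_rate = 0.05   # tool wrongly flags 5% of harmless posts

true_threats = students * base_rate
harmless = students - true_threats

true_positives = true_threats * sensitivity
false_positives = harmless * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"Students flagged: {true_positives + false_positives:.0f}")
print(f"Share of flags that are real threats: {precision:.1%}")
# With these assumptions, roughly 509 students are flagged and fewer
# than 2% of them actually posted a genuine threat.
```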

Covid-19 Bringing Students Online and Their Privacy into Question

With the onset of the global pandemic in March 2020, high school students shifted entirely to online learning, meaning that every engagement with their educators happened through an online platform. These platforms include, but are not limited to, Zoom, Skype, Babbel, Brainpop, and Honorlock, which enable teachers and students to interact through video conferencing, classroom activities, messaging, and test proctoring. The first concern about these platforms is that, because of the extensive technical jargon in their privacy policies, students and teachers do not know what data is being collected about them. A single data breach could bring their worlds crashing down. As the people in charge rushed to transition schools online, legal requirements for privacy agreements were often waived. Additionally, many of the online tools used at first were free options found by teachers; eventually, these tools stopped being free, and students and teachers handed over credit card information, raising the further issue of financial data security. There is overwhelming concern among parents over the selling of their children’s data to big tech firms.

It is clear that the convenience and efficiency of technological tools for school administrators and teachers often cause them to overlook the potential harms those tools could inflict on their students. There is little legislation in place to protect minors and their data privacy, especially when surveillance can be framed as protecting others. Like many data privacy issues, this is one where legislators are slow to catch up with the fast-paced innovations of the tech industry.

References:

https://www.edweek.org/technology/massive-shift-to-remote-learning-prompts-big-data-privacy-concerns/2020/03

https://flaglerlive.com/120860/social-sentinel-pt/

https://gizmodo.com/schools-are-using-ai-to-check-students-social-media-for-1824002976

https://ssd.eff.org/en/module/privacy-students

Japan Using AI as Matchmaker
By Joshua Chung | July 9, 2021

In order to combat low birth rates and an aging population, the Japanese government has invested $19.2 million in AI technology that would aid in matchmaking for Japanese citizens. The hope is that this artificial intelligence would increase dating, and therefore the birth rate, by providing compatible matches based on the hobbies, interests, and values of users. Individual prefectures have offered matchmaking services for a few years; however, those simplistic algorithms have not been very effective so far. Using AI for matchmaking is common among online dating services, which are the most commonly used way of meeting potential love interests in the US. However, the key differences lie in the autonomy of the user and the authority of the matchmaker.
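
As a rough illustration of what matching on hobbies, interests, and values can mean in practice, here is a minimal sketch. The questionnaire fields and the cosine-similarity measure are my assumptions for illustration; the details of the government system have not been made public.

```python
# Minimal sketch of similarity-based matchmaking.
# The questionnaire fields and the cosine-similarity choice are assumptions
# for illustration; the actual government system is not public.
from itertools import combinations
import math

profiles = {
    "user_a": {"hiking": 1, "cooking": 1, "gaming": 0, "wants_kids": 1},
    "user_b": {"hiking": 1, "cooking": 0, "gaming": 1, "wants_kids": 1},
    "user_c": {"hiking": 0, "cooking": 1, "gaming": 0, "wants_kids": 0},
}

def cosine(p, q):
    """Cosine similarity between two sparse 0/1 questionnaire vectors."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Score every pair; a matchmaking service would surface only the
# top-scoring partner(s) for each user rather than the full list.
scores = {(a, b): cosine(profiles[a], profiles[b]) for a, b in combinations(profiles, 2)}
for pair, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```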

Unlike online dating services, which let users browse a list of potential suitors and notify them when there is mutual interest, the government’s dating service is designed to give matches based solely on the algorithm’s recommendation. This prevents users from having any input in the decision-making process. Of course, users can choose not to meet these matches, but they may feel pressured to do so because of the nature of the service. First, the dating service is government-sponsored, which may give users both an impression of the service’s proficiency at finding optimal matches and a sense of urgency in finding a potential spouse. This may pressure users to follow through with matches made by the system or even start dating, whether or not they actually like their match. Second, using a sophisticated AI system that is hardly interpretable by a layperson could influence users’ opinions of the trustworthiness of the matches. By preventing users from seeing others that the system did not match them with, all the trust has been placed in the system’s competency. These differences have led to a somewhat dystopian version of a commonly used dating tool. Although results have been positive so far, more time will be needed to see how effectively this tool leads to marriages and, hopefully, higher birth rates. In fact, it is difficult to say whether this AI system is really the silver bullet for Japan’s low birth rate to begin with.


As AI has continued to advance and solve problems that were once deemed intractable, the public seems to have placed greater trust in the ability and efficacy of such algorithms. This trust has led to greater adoption of the technology, bringing many benefits but harms as well. At times, these technologies have been treated as a panacea for all types of problems, and this use of AI to tackle low birth rates is an example. Critics have pointed out that dating may not even be the primary reason for low birth rates; issues such as lower income levels and a lack of support for working mothers may matter more. The money invested in new AI technologies might have been better spent tackling those problems instead. Only time will tell.

 

References
https://www.bbc.com/news/world-asia-55226098

Algorithms for Love: Japan Will Soon Launch an AI Dating Service

https://english.kyodonews.net/news/2021/01/29224cc39864-japan-govt-banking-on-ai-to-help-singles-find-love.html

https://www.vox.com/recode/2020/2/14/21137096/how-tinder-matches-work-algorithm-grindr-bumble-hinge-algorithms

Image Sources

15 Things to Know About Baby


https://en.wikipedia.org/wiki/Heart_symbol

Do you like it? Or do you like it?
By Anonymous | July 9, 2021

Imagine physics governed by positive, but not negative; computer science described with one, but not zero; the human brain functioning with excitation, but not inhibition; a debate with pro, but not con; logic constructed with true, but not false. Indeed this is an impossible world, for if the aforementioned had such imbalanced representations, none of them could exist. The problem is that despite this self-evident conclusion, many ubiquitous systems in our data driven world are intentionally designed to represent information asymmetrically in order to extract some form of value from its misrepresentation.

When information systems are created, the representation of the information that is relevant to the system is intentionally defined through its design. Furthermore, the design of this representation determines the ways users can interact with the information it represents, and the motives that underlie these intentional decisions are often shaped by social, economic, technical, and political dynamics. The objective of this discussion is to consider the value of misrepresentation, the design of misrepresentation, and the ethical implications this has for individuals. For the sake of brevity, the intentional design of a single feature within Facebook’s user interface will serve as the archetype of this analysis.

Consider for a moment that the Facebook “Like” button exists without a counterpart. There is no “Dislike” button and no equivalent symbolic way to represent an antithetical position to the self-declared proponents of the information who had the ability to express their position with a “Like”. Now consider that approximately 34.53% of Earth’s human population is exposed to the information that flows through Facebook’s network every month (monthly active users). Given the sheer scale of this system, it seems logical to expect the information on this network to have the representational capacity to express the many dimensions that information often comes in. However, this is not the world we live in, and Facebook’s decision to design and implement a system-wide unary representation of data was a business decision, not a humanitarian one.

Value. In recent fMRI experiments it has been shown that there is a positive correlation between the number of “Likes” seen by a user and neural activity in regions of the brain that control attention, imitation, social cognition, and reward processing. The general exploitation of these neural mechanisms by advertisers is what forms the foundation of what is known today as the Attention Economy. In the Attention Economy what is valuable to the business entities that comprise this economy is the user’s attention. Analogously, just as coal can be transformed into electrical energy, a user’s attention can be transformed into monetary gains. Therefore, if harvesting more user attention equates to greater monetary gains, the problem for advertisement companies such as Facebook becomes finding mechanisms to efficiently yield higher crops of user attention. To no surprise, the representation of data in Facebook’s user interface is designed to support this exact purpose.

By design. Newton’s first law of motion states that in an inertial frame of reference an object either remains at rest or continues to move at a constant velocity unless acted upon by a force. Similarly, information propagates through a network with many of the same properties as an object moving through space, and it will continue to propagate if external forces do not inhibit it. In Facebook’s case, the mechanism that would serve to modulate the propagation of information through the network is the “Dislike” button. If there were the ability to “Dislike” information on Facebook in the same way that one can “Like” it, then the value of the information could, at a minimum, be appraised by calculating the difference between the likes and the dislikes. However, since the amount of disagreement or dislike isn’t visibly quantified, all information always carries a positive level of agreement, or “Like”, giving the impression that the information is undisputedly of positive or true value.

The other prominent factor that influences the propagation of data on Facebook’s network is the amount of quantified endorsement (i.e., number of likes) a post has. In addition to Facebook’s black-boxed algorithms used for the News Feed and other content suggestions, the number of likes a piece of information has induces a psychological phenomenon known as social endorsement. Social endorsement is the process of information gaining momentum (i.e., gaining more likes at a non-linear rate that accelerates with endorsement count) via the appearance of high social approval, which influences other users to “Like” the information with little or no personal scrutiny.
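
A toy simulation can make this rich-get-richer dynamic concrete. The model below is purely illustrative; the proportional-exposure rule and all numbers are assumptions, not measurements of Facebook’s actual feed.

```python
import random

random.seed(0)

# Toy model: each post starts with one like; at every step a new user
# "likes" one post with probability proportional to its current like
# count (endorsement-proportional exposure). The rule and the numbers
# are assumptions for illustration, not measurements of any real feed.
posts = [1] * 10          # like counts for 10 posts
for _ in range(5_000):    # 5,000 simulated users
    total = sum(posts)
    r = random.uniform(0, total)
    cum = 0
    for i, likes in enumerate(posts):
        cum += likes
        if r <= cum:
            posts[i] += 1
            break

print(sorted(posts, reverse=True))
# Typical output: the final like counts are highly unequal -- a few posts
# pull far ahead of the rest even though all ten started identical.
```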

Putting these factors together, consider the “Pizzagate” debacle of 2016. Pizzagate is the name of a conspiracy theory that picked up traction on conspiracy theory websites and was subsequently amplified by Facebook through increased exposure from social endorsement and sharing. The information propagated through Facebook’s network reached a level of exposure that compelled Edgar Maddison Welch to enter the pizzeria at the center of the theory with an AR-15 in order to determine whether there was in fact a child prostitution ring. Needless to say, the story was false, and he was arrested.

So what’s the big idea?

Asymmetrical representations of data on these kinds of ubiquitous systems perpetuate a distorted society. Whether it be a democratic election, the Pizzagate debacle, or someone losing their savings to a cryptocurrency scam video promoting some new coin with “100K Likes”, information can reach beyond the digital realms of the internet to affect the physical world we inhabit. Ergo, the representations that we use in the tools that come to govern our society matter.

Sources:

Kelkar, Shreeharsh. “Engineering a Platform: Constructing Interfaces, Users, Organizational Roles, and the Division of Labor.” 2018. doi:10.31235/osf.io/z6btm.

L.E. Sherman, A.A. Payton, L.M. Hernandez, et al. The Power of the “Like” in Adolescence: Effects of Peer Influence on Neural and Behavioral Responses to Social Media. Psychol Sci, 27 (7) (2016), pp. 1027-1035

Quodling, Andrew. “The Dark Art of Facebook Fiddling with Your News Feed.” The Conversation, May 8, 2019. Accessed July 8, 2021. http://theconversation.com/the-dark-art-of-facebook-fiddling-with-your-news-feed-31014.

Primack, Brian A., and César G. Escobar-Viera. “Social Media as It Interfaces with Psychosocial Development and Mental Illness in Transitional Age Youth.” Child and Adolescent Psychiatric Clinics of North America. April 2017. Accessed July 9, 2021. https://www.ncbi.nlm.nih.gov/pubmed/28314452.

A “COMPAS” That’s Pointing in the Wrong Direction
By Akaash Kambath | July 9, 2021

It is well known that a number of issues plague the effectiveness and fairness of today’s criminal justice system. African-Americans have been the victims of prejudice throughout the system’s history; when examining the imprisonment rate (defined as the number of prisoners per 100,000 people) for whites and nonwhites, the imprisonment rate for black prisoners was “nearly six times the imprisonment rate for whites”, a gross mismatch with the demographics of the current U.S. population (Gramlich 2019). Although efforts have been made to decrease this disparity between the incarceration rates of whites and nonwhites, there is still a long way to go. One way the government has tried to improve the criminal justice system is by using a quantitative risk assessment tool to try to determine a convicted person’s likelihood of re-offending in an unbiased and fair way. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score is a risk assessment tool that purports to predict how likely someone is to recidivate. It was introduced by a company named Equivant, which uses a proprietary algorithm to calculate the score; a higher COMPAS score suggests that the convicted individual is more likely to re-offend. The COMPAS score is given to judges when they are determining sentencing, and they use the results of the risk assessment tool alongside the convicted person’s criminal record to make their decision.

One can believe that the actors involved in the creation and implementation of the COMPAS score in the criminal justice system, i.e. employees and executives at Equivant, policymakers, and judges using the tool, had a vision that it would help create a safer society by keeping those more “at risk” of recidivating off the streets and behind bars in a fair and accurate way. In recent years, however, COMPAS has come under much scrutiny for doing the contrary. In 2016, ProPublica investigated COMPAS’s performance and reported that the risk assessment tool has “falsely flag[ged] black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants” (Angwin et al. 2016). ProPublica’s article also mentions that COMPAS scores have incorrectly classified whites as well, stating that white prisoners who were given low-risk scores ended up recidivating at a higher rate than predicted. Instead of judging an individual’s risk of re-offending in an unbiased way, the COMPAS score does the opposite: it reaffirms society’s racial stereotypes, casting whites as “less dangerous” and rendering people of color as inherently more dangerous and more likely to commit a crime. Furthermore, a 2018 article published by The Atlantic described research challenging the claim that the algorithm’s predictions were better than human ones. Four hundred volunteers were given short “descriptions of defendants from the ProPublica investigation, and based on that, they had to guess if the defendant would commit another crime within two years.” In a stunning outcome, the volunteers answered correctly 63 percent of the time, and that number rose to 67 percent when their answers were pooled. This compares with the COMPAS algorithm’s accuracy of 65 percent, suggesting that the risk assessment tool is “barely better than individual guessers, and no better than a crowd” (Yong 2018). Compounding this deficiency, the secretive algorithm behind the COMPAS score was found to perform no better than a basic regression model constructed by two Dartmouth researchers, and although this may suggest that the risk assessment tool is unsophisticated, it is more likely that the tool has hit a “ceiling of sophistication.”
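
For context, a baseline of the kind described above needs very little machinery. The sketch below is a hypothetical illustration: the two features (age and number of prior offenses) and the data are placeholders I made up, not the actual study’s features or the ProPublica dataset.

```python
# Minimal sketch of a two-feature logistic-regression recidivism baseline,
# in the spirit of the simple model referenced above. The data below is
# fabricated placeholder data for illustration only.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
# Placeholder outcome: younger defendants with more priors re-offend more often.
p = 1 / (1 + np.exp(-(-1.5 + 0.08 * priors.astype(float) - 0.02 * (age - 40))))
reoffend = rng.binomial(1, p)

X = np.column_stack([age, priors])
X_tr, X_te, y_tr, y_te = train_test_split(X, reoffend, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print(f"Held-out accuracy: {model.score(X_te, y_te):.2f}")
# On this fabricated data the accuracy itself is meaningless; the point is
# how little machinery such a baseline requires compared to a proprietary tool.
```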

Though I have mentioned a number of ways in which COMPAS fails to comport with the vision that the actors surrounding the tool had in mind, I am not trying to suggest that technology cannot help the criminal justice system at all; in fact, a number of studies have described how even imperfect algorithms have helped the criminal justice system in several ways (Corbett-Davies, Goel, and González-Bailón 2017). I just do not believe that it can help in this situation: determining an individual’s likelihood of recidivating cannot be done in a truly unbiased way, and the implementation of COMPAS has only hurt the criminal justice system’s efforts to incarcerate people of all races fairly. COMPAS’s damage to the criminal justice system and to people of color is a strong reminder of how imperative it is to conduct thorough research evaluating the costs and benefits of all alternatives when deciding to implement a new model or quantitative tool to solve a problem. As Sheila Jasanoff emphasizes in The Ethics of Invention, this includes the null alternative. She laments that assessments and evaluations of quantitative risk assessment tools “are typically undertaken only when a technological project…is already well underway” (Jasanoff 2016). Researchers evaluating these tools also rarely consider the null option as an alternative; as Jasanoff notes, the idea of “doing without a product or process altogether” is never put under investigation. Jasanoff’s work suggests that although the actors pushing for a new technology may believe it will fulfill a vision in which humanity benefits significantly, without complete consideration of and research on all of its alternatives, the implementation of such novel technology, in the criminal justice system and beyond, may end up hurting humankind more than helping.

Works Cited

1. John Gramlich, “The gap between the number of blacks and whites in prison is shrinking,” Pew Research Center, April 30, 2019, https://www.pewresearch.org/fact-tank/2019/04/30/shrinking-gap-between-number-of-blacks-and-whites-in-prison/.

2. Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

3. Ed Yong, “A Popular Algorithm Is No Better at Predicting Crimes Than Random People,” The Atlantic, January 17, 2018, https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.

4. Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón, “Even Imperfect Algorithms Can Improve the Criminal Justice System”, The New York Times, December 20, 2017, https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html.

5. Sheila Jasanoff, “Risk and Responsibility,” in The Ethics of Invention: Technology and the Human Future, 2016, pp. 31-58.

6. Image source for first image: https://datastori.es/wp-content/uploads/2016/09/2016-09-23-15540.png

Telematics in automobile insurance: Are you giving too much for little in return?
By Aditya Mengani | July 9, 2021

Telematics-based programs like “Pay as you drive” have been dramatically transforming the automobile insurance industry over the past few years. Many traditional insurance providers like Allstate, Progressive, and Geico, newer startups like Metromile and Root, and even car manufacturers like Tesla have been introducing, or planning, programs centered around a “Pay as you drive” model in which a user consents to providing driving data through telematics devices.

The data collected is used to determine the customer’s risk and to provide discounts or other offers tailored to the customer based on their driving habits. The only caveat is that the user subscribes to a continuous feed of data points from the vehicle, captured by telematics that can be embedded in the car through built-in sensors or added through plug-in devices, GPS, or mobile phones. This opens up a can of worms regarding the privacy and ethics of the collected data. Many consumers were skeptical of enrolling in these programs, but the pandemic has been changing that perception: many more consumers are enrolling in the hope of reducing their insurance costs because they travel less. This trend is expected to continue as more and more consumers opt in to these services.

There is a lack of transparency about what gets collected by these telematics devices, as many insurance providers give only a vague definition of the various metrics and of what constitutes driving behaviour. In traditional insurance, multiple factors affect the risk, and in turn the premium paid by the customer: location, age, gender, marital status, years of driving experience, driving and claims history, vehicle information, and so on. With “Pay as you drive”, insurance providers claim that they additionally track real-time metrics related to driving habits, including speed, acceleration, braking, miles driven, and time of day; exactly what gets collected varies by provider. For example, in 2015 Allstate obtained a patent for technology that can use sensors and cameras to detect potential sources of driver distraction within a vehicle and that could even evaluate heart rate, blood pressure, and electrocardiogram signals recorded from steering wheel sensors. Concerns like these have made consumers skeptical about enrolling in these programs.
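
As a purely illustrative sketch of what a usage-based scoring rule could look like, here is a toy example; the metrics and weights are assumptions of mine, since real insurers’ scoring formulas are proprietary.

```python
# Illustrative usage-based risk score from telematics data.
# Metrics and weights are assumptions; real insurers' formulas are proprietary.
from dataclasses import dataclass

@dataclass
class TripSummary:
    miles: float
    hard_brakes: int        # sudden decelerations
    rapid_accels: int       # sudden accelerations
    night_miles: float      # miles driven late at night
    speeding_minutes: float # minutes above the posted limit

def risk_score(trip: TripSummary) -> float:
    """Return a 0-100 score; higher means riskier driving under these assumed weights."""
    per_100_miles = 100.0 / max(trip.miles, 1.0)
    score = (
        4.0 * trip.hard_brakes * per_100_miles
        + 3.0 * trip.rapid_accels * per_100_miles
        + 0.5 * trip.speeding_minutes * per_100_miles
        + 10.0 * (trip.night_miles / max(trip.miles, 1.0))
    )
    return min(score, 100.0)

week = TripSummary(miles=180, hard_brakes=4, rapid_accels=2, night_miles=12, speeding_minutes=9)
print(f"Weekly risk score: {risk_score(week):.1f}")  # ~15.4 with these assumed weights
```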

Another challenging aspect is algorithmic transparency. With widespread use of telematics data, insurance providers and regulators need to define a clear set of collected factors for the actuarial justification of rating premiums and underwriting policies. Most of the algorithms are proprietary, and insurance providers do not release information about how their algorithms use those factors to derive a score. From a fairness and transparency perspective, consumers have very little choice or information available to them before opting in to these programs.

Widespread use of AI-based telematics to predict risk can also pose predictive privacy harms to the consumer, and without proper regulation insurers can collect a variety of data that has no causal bearing on the factors used to predict risk scores. Such regulations are currently not enforced in many countries. This can lead to discriminatory practices and create unintended biases during the data collection and exploration processes.

Who gets to build these telematics services is another thing to worry about. Most telematics services are created by third-party vendors like Verisk, non-insurance firms providing software and analytical solutions for these programs. These vendors escape the radar of regulators, who scrutinize the insurance providers but have no similar protocols established for non-insurance vendors.

With so much data being collected and so little transparency and regulation, there is a chance that the data will be used for advertising or sold to third parties for monetization. Even though regulations like the CCPA require companies to disclose what they do with the data, those regulations might change over time, and consumers already enrolled in programs might not be aware of changes to the policy terms.

Finally, privacy itself is another major issue that could pose many dangers to the consumer. Many companies retain the collected data forever, or do not state clearly in their privacy policies what they intend to do with it after a period of time or what their retention policy is. If no proper measures are taken to protect the collected data, privacy breaches can follow.

With all the above risks laid out, for a consumer it comes down to a risk-versus-return decision. Whether a consumer will give up a lot of information for little in return varies with the individual’s perception of the practices involved and their needs with respect to insurance prices. As for regulators and the law, they are still evolving to embrace this new arena and have a lot of catching up to do. All we can do for now is hope that a comprehensive set of laws and regulations will help this new area of insurance thrive, and make a conscious choice about our own needs.

References:

https://www.forbes.com/advisor/car-insurance/usage-based-insurance/

https://www.insurancethoughtleadership.com/are-you-ready-for-telematics/

https://content.naic.org/cipr_topics/topic_telematicsusagebased_insurance.htm

https://www.chicagotribune.com/business/ct-allstate-car-patent-0827-biz-20150826-story.html.

https://dataethics.eu/insurance-companies-should-balance-personalisation-and-solidarity/

https://consumerfed.org/reports/watch-where-youre-going/

https://www.accenture.com/nl-en/blogs/insights/telemethics-how-to-materialize-ethics-in-pay-how-you-drive-insurance

https://www.internetjustsociety.org/challenges-of-gdpr-telematics-insurance

The Death of the Third-party Cookie
By Anonymous | July 9, 2021

It’s a Sunday afternoon. You’re on your couch looking at a pair of sneakers on Nike.com but decide not to buy them because you already have a pair that’s only slightly worn out. Then you navigate to Facebook to catch up on your friends’ lives. You scroll through various posts, some more interesting than others, and lo and behold, you see an ad for the exact same pair of sneakers that you just saw.

If you’ve ever wondered why this happens, it’s because of something called third-party cookies. These cookies allow information about your activity on one website to be used to target advertisements to you on another website.

What are third-party cookies?
A cookie is a small text file that your browser stores on your computer to keep track of your online activity. For example, when you’re shopping online, cookies store information about the items in your shopping cart. A third-party cookie is a cookie that was created by a website other than the one you are visiting. In this example, you’re visiting facebook.com but the cookie was created by nike.com.
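
To make the mechanics concrete, here is a minimal sketch of the Set-Cookie header a third-party tracker might send and of how the browser later replays it on a different site. The domains and cookie name are hypothetical, and Python’s standard http.cookies module is used here only to format the header.

```python
# Illustrative sketch of how a third-party cookie gets set and read back.
# Domains and the cookie name are hypothetical examples, not real trackers.
# (SameSite support in http.cookies requires Python 3.8+.)
from http.cookies import SimpleCookie

# 1. While you browse nike.com, the page loads an ad/analytics resource
#    from tracker.example. That third-party response sets a cookie
#    scoped to tracker.example, not to nike.com:
response_cookie = SimpleCookie()
response_cookie["uid"] = "abc123"
response_cookie["uid"]["domain"] = "tracker.example"
response_cookie["uid"]["path"] = "/"
response_cookie["uid"]["secure"] = True
response_cookie["uid"]["samesite"] = "None"   # required for cross-site use
print(response_cookie.output())
# Set-Cookie: uid=abc123; Domain=tracker.example; Path=/; SameSite=None; Secure

# 2. Later, facebook.com embeds a resource from the same tracker.example.
#    Your browser attaches the stored cookie to that request, so the
#    tracker can link your nike.com visit to your facebook.com visit:
request_header = "Cookie: uid=abc123"
```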

Cookies Store Various Forms of User Data
How Third-party Cookies are Generated

What are third-party cookies used for?
They are primarily used for online advertising, so that ad companies can target advertisements towards specific users based on what they know about them.

Do all browsers support third-party cookies?
Two major browsers, Safari and Firefox, both block third-party cookies by default. Google Chrome, the most popular browser, still allows third-party cookies, but Google announced in Jan 2020 that Chrome would stop supporting them in the next few years.

Why is “the death of the third-party cookie” significant?
A large part of why browsers are no longer supporting third-party cookies is a change in public opinion. With incidents like Facebook’s Cambridge Analytica scandal, in which a third-party company misused Facebook’s user data to manipulate voters, consumers have become increasingly aware of and concerned about data privacy. Because targeted ads are so prevalent, they present one of the biggest pain points. According to a poll conducted by the Pew Research Center, 72% of Americans worry that everything they do is being tracked by various parties, which is not too far from the truth.

The “death of the third-party cookie” means that advertisers will no longer be able to track users across different domains, such that the cookies created on a particular website can only affect the user’s experience on that site. This is called a first-party cookie. This means that it will be more difficult for an advertiser to develop a user profile based on your actions, given that they cannot consolidate your actions between various sites.

With third-party cookies going away, advertisers will be increasingly reliant on first-party data, data collected directly from the user (e.g. name, email, and viewed content) for targeting advertising. Hence, users will have to be more attentive to the data points that they willingly provide online and how they can be used.

Does this mean I should be less worried about ad tracking?
Yes and no. Although the phasing out of third-party cookies helps reduce privacy harms committed by Adtech firms, it also results in more power for companies like Facebook that not only have an immense amount of user data but also a large stake in the ad industry. New approaches to targeted advertising are already in the works as a replacement for third-party cookies, and it is yet to be seen how well these will guard user privacy.

References
* https://qz.com/2000490/the-death-of-third-party-cookies-will-reshape-digital-advertising/
* https://blog.hubspot.com/marketing/third-party-cookie-phase-out?toc-variant-a=
* https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/the-demise-of-third-party-cookies-and-identifiers
* https://www.epsilon.com/us/insights/trends/third-party-cookies

Are we ready to trust machines and algorithms to decide, for all?
By Naga Chandrasekaran | July 9, 2021

Science Fiction to Reality:

I wake up to soothing alarm music and mutter, “Alexa, turn off!” I pick up an espresso from the automated coffee machine and begin my Zoom workday. The Ring doorbell alerts me about a prepaid lunch delivered by Uber Eats. After lunch, I interview candidates recommended to me by an algorithm. After work, I ride in an autonomous Tesla to a date with someone I met on Tinder. Back home, I share my day on social media and watch a Netflix-recommended movie. Around midnight, I ask Alexa to switch off the lights and set an alarm for the morning. Machines are becoming our life partners!

Digital Transformation Driving Data Generation Imbalance:

Through the seamless integration of technology into every aspect of life, we share personally identifiable information (PII) and beyond, generating over 2.5 exabytes of data per day [1]. Advances in semiconductors, algorithmic power, and the availability of big data have led to significant progress in data science, artificial intelligence (AI), and machine learning (ML). This progress is helping solve cosmetic, tactical, and strategic issues impacting individuals and societies universally. But is it making the world a better place for all?

Digitally Connected World Driving Information Flow [2]

Digital transformation is influencing only part of the world’s population. In January 2021, 59.5% of the global population were active internet users [3]. The number drops further for usage of digital devices at the edge. Only these users contribute to data generation, so the categories and classifications created by data scientists represent mainly the wealthy individuals and developed nations that created the data. Such classifications are incomplete.

The interconnected world also generates data from unwitting participants who are thrust into a system through surveillance and interrogation [4, 6]. Even for willing participants, privacy challenges emerge when their data is used outside its original context [5]. Data providers bear a high ratio of risk of harm to benefit, for example from data leaks [6]. Privacy policies established by the organizations that collect such data are focused on defensive measures instead of ethics. These issues drive users to avoid participation or to provide limited information, which leads to inaccuracies in the dataset.

As Sandra Harding pointed out, “all scientific knowledge is always socially situated” [7]. Knowledge generated from data has built-in bias and exclusions. In addition, timely access to this data is limited to a few. The imbalance generated by power positions, social settings, data inaccuracy, and incomplete datasets creates bias in our accumulated knowledge.

AI Cannot be Biased!

We apply this imbalanced knowledge to further our wisdom, which is to discern and make decisions. This is the field of AI/ML, also termed predictive analytics. When our data and knowledge are inaccurate and biased, our algorithms and decisions reconfirm our bias (e.g., Amazon’s recruiting tool). When decisions have limited impact (e.g., movie recommendations), we have the opportunity to explore algorithmic decision making. However, when decisions have deep societal impact (e.g., criminal sentencing), would we turn our decision making over to AI? [8, 9]

Big data advocates claim that with sufficient data we can reach the same conclusions as scientific inquiry; however, data is just an input with inherent issues. There are other external factors that shape reality. We have to interrogate how the data was generated: Who is included and excluded? Does the variance account for diversity? Whose interests are represented? Without such exploration of the input data, the outputs do not represent the true world. To become wiser, we have to recognize that our knowledge is incomplete and our algorithms are biased.
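
One small, concrete way to start that interrogation is to compare how groups are represented in a dataset against how they are represented in the world. The sketch below is illustrative only; the column names, groups, and reference shares are hypothetical placeholders.

```python
# Illustrative sketch: before trusting model outputs, interrogate who is
# represented in the input data. Column names, groups, and reference
# shares are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "NA", "APAC"],
    "income_bracket": ["high", "high", "high", "mid", "mid", "low"],
})

# Share of each group in the dataset vs. an assumed real-world share.
dataset_share = df["region"].value_counts(normalize=True)
world_share = pd.Series({"NA": 0.07, "EU": 0.10, "APAC": 0.60})  # assumed
print(pd.DataFrame({"dataset": dataset_share, "world": world_share})
        .assign(over_represented=lambda t: t["dataset"] > t["world"]))
```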

Collaborative Human – Machine Model:

In the scene enacted at the beginning of this article, it appeared that humans were making the decisions while enjoying technological benefits. However, it is possible that our decisions are influenced by hidden technology bias. As depicted in the Disney-Pixar movie WALL-E, are we creating a world where humans will forget their purpose?

Scene from Wall-E showing Humans Living a Digitally Controlled Life

Given these identified issues in the digitally transforming world and its datasets, how can we progress? Technology is always a double-edged sword: it can force change in the world despite existing social systems, and the reverse is also true. The interplay between technology and the people who interact with it is critical to making sure the social fabric is protected and moving in the right direction. We cannot delegate all our decisions to algorithms and machines that carry the data issues identified above. We need to continue to optimize our data and algorithms with human judgment [10]. Data scientists have a role to play beyond data analysis. How power is delegated and distributed between humans and machines is extremely important in making the digitally transformed world a better place for all.

Collaborative Human-Machine Model [10]

References:

[1] Jacquelyn Bulao, 2021, How much data is created everyday in 2021, Link

[2] https://www.securehalo.com/improve-third-party-risk-management-program-four-steps/

[3] Global digital population, Statista analysis, 2021, Link

[4] Daniel Solove, 2006, A Taxonomy of Privacy, Link

[5] Helen Nissenbaum, 2004, Privacy as Contextual Integrity, Link

[6] The Belmont Report, 1979, Link

[7] Sandra Harding, 1986, The Science Question in Feminism, Link 

[8] Ariel Conn, 2017, When Should Machines Make Decisions, Link

[9] Janssen et al., 2019, History and Future of Human Automation Interaction, Link

[10] Eric Colson, 2019, What AI Driven Decision Making Looks Like, Link

Do we actually agree to these terms and conditions?
By Anonymous | July 9, 2021

Pic 1: Fictional Representation of Terms of Service Agreement Buttons

Every time I go to a new website or online service, a terms of service agreement and privacy policy pops up. This pop-up covers three quarters of the page and has two buttons at the bottom right: I Agree or Decline. In this scenario, do you think I read every line of this long document carefully and take time to consider what I am agreeing to, or do you think I quickly drag the scrollbar to the bottom without reading a single word and press I Agree without thinking much about it? Like most of the online population, I always do the latter. In fact, a 2017 study by Deloitte found that 91% of consumers accept the terms and conditions without reading them (Cakebread, 2017). ProPrivacy.com, a digital privacy group, claims that the number is even higher, with only 1% of subjects in a social experiment actually reading the terms and conditions (Sandle, 2020). The other 99% of participants agreed to absurd things buried in the terms and conditions, like permission to give their mother full access to their browsing history, the naming rights to their first-born child, the ability to “invite” a personal FBI agent to Christmas dinner for the next 10 years, and so forth (Sandle, 2020). Since they clicked the I Agree button, could they dispute it if ProPrivacy.com really did want to name their first-born child? This question of whether one can dispute agreed-upon terms of service boils down to another question: does clicking the I Agree button signify informed consent?

I argue that even though they did press the button, it isn’t informed consent through the lens of the first Belmont principle, respect for persons. The Belmont Report was published in 1979 in response to ethical failures in medical research. It outlined three ethical principles for the protection of human subjects in research: 1) respect for persons; 2) beneficence; and 3) justice. Respect for persons is about participants giving informed consent to be in the research. Informed consent is broken down further: participants should be presented with relevant information in a comprehensible format and should then voluntarily agree to participate. Do the terms of service include relevant information in a comprehensible format? This is debatable; the terms of service do include all the relevant information, but they are always too long to read in a reasonable amount of time.

Pic 2: Word length of the terms and privacy policies of top sites and apps

Terms and conditions are not in a comprehensible format: they are genuinely difficult to read and often employ legalese, the terminology and phrasing used by those in the legal field. A study from Boston College Law School found that the terms of service of the top 500 websites in the U.S. had the average reading level of articles in academic journals, which do not use the vocabulary of the general public (Benoliel & Becher, 2019). So even if people try to carefully read these long terms of service, they may not understand what they are agreeing to. As for voluntarily agreeing to terms of service, while acceptance isn’t forced, it is required to use the website. Saying no to the terms of service isn’t penalty-free; rather, it cuts you off from the service you wanted.
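
“Reading level” claims like the one above typically come from readability formulas such as the Flesch-Kincaid grade level. Here is a minimal sketch of that formula; the syllable counter is a rough heuristic and the sample clause is made up, so the output is only indicative.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of vowels; good enough for illustration.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

clause = ("The licensee hereby irrevocably consents to the collection, processing, "
          "and onward transfer of personal data as set forth herein.")
print(f"Approximate grade level: {flesch_kincaid_grade(clause):.1f}")
# For this made-up clause the formula lands in the mid-teens, i.e. college-level reading.
```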

Pic 3: Infographic of the obscure wording of terms of service

So, how can we turn the acceptance of terms of service into actual informed consent? Some ideas: placing summaries alongside the terms of service so they are more comprehensible, bolding and highlighting important parts so people notice and read them, and imposing a mandatory wait time before the Agree button can be clicked so that people spend at least some time reading the terms through.

References

Benoliel, U., & Becher, S. I. (2019). The Duty to Read the Unreadable. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3313837

Cakebread, C. (2017, November 15). You’re not alone, no one reads terms of service agreements. Business Insider. https://www.businessinsider.com/deloitte-study-91-percent-agree-terms-of-service-without-reading-2017-11.

Sandle, T. (2020, January 29). Report finds only 1 percent reads ‘Terms & Conditions’. Digital Journal. https://www.digitaljournal.com/business/report-finds-only-1-percent-reads-terms-conditions/article/566127.

Facing the Issues of Facial Recognition and Surveillance
By Anonymous | July 9, 2021

Facial recognition is an already well-developed and widespread technology that is rapidly expanding into all areas of public and even private life. Its expansion into a near-ubiquitous presence threatens not only individuals’ most fundamental notions of privacy but also the freedoms of assembly and protest. Facial recognition technology serves the interests of the existing power structure. Its negative implications go beyond the ways it can be used to infringe on an individual’s privacy; its reach and potential harm are much broader, infringing on society and the power of groups.

One ostensible justification for facial recognition technology is the use by police departments in protecting against crime. However, as the ACLU has pointed out, the narratives that support this use are deceptive. For instance, in 2016 the Detroit Police Department partnered with eight gas stations to install real-time camera connections with police headquarters as part of a ground-breaking crime-fighting partnership between local businesses, the City of Detroit and community groups called “Project Green Light Detroit.” This collaboration was presented to the community as a positive step along the line of a neighborhood watch system.

A store displaying its partnership with Project Green Light

However, facial recognition technology is a form of general surveillance that allows monitoring of community members even without a warrant or a determination of probable cause. This is one reason the ACLU is concerned about the expanded use of facial recognition technology in our society: it could easily be used for general surveillance searches because there are ample databases of identifying photographs from each state’s motor vehicle licensing records (https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology).

Individual privacy concerns are impacted by the use of facial recognition technology because it can be used in a passive way that does not require the knowledge, consent, or participation of the subjects. Additionally, there are security concerns related to the storage and use of facial recognition data. Facial data is very sensitive and can be quite accurate in identifying an individual. It is not clear that firms and government agencies have adequately managed the security of this data.

https://www.crainsdetroit.com/article/20180104/news/649206/detroit-aims-to-mandate-project-green-light-crime-monitoring

However, even more concerning is the broader societal impact that results from the widespread use of facial recognition data. Because of the extremely broad scope of facial recognition’s surveillance and power, there are more than just individual rights that need to be protected: it is the nature of society as a whole that is at risk of changing. The philosopher Michel Foucault considered the societal impact of surveillance systems, and he used the example of a panopticon to illustrate his theory. The panopticon explanation is an apt metaphor for the far-reaching societal impact of facial recognition systems, as well.

In 1791, British utilitarian philosopher Jeremy Bentham articulated the concept of a panopticon as a type of institutional building designed for social control. (Jeremy Bentham (1791). Panopticon, or The Inspection House.) The building’s design allows a single guard to observe and monitor all of the prisoners of an institution without the inmates being able to tell whether they are being watched at any particular moment.