
Physical Implications of Virtual Smart Marketing: How the Rise in Consumerism Powered by AI/ML Fuels Climate Change
By Anonymous | March 16, 2022

Introduction
Suspiciously relevant ads materialize in our social media feeds, our e-mails, and even our texts. It’s become commonplace for digital marketing groups to invest in teams of data scientists with the hope of building the perfect recommendation engine. At a glance, sales are increasing, web traffic is at an all-time high, and feedback surveys imply a highly satisfied customer base. But at what cost? This rise of consumerism, incited by big data analytics, has caused an increase in carbon emissions due to heightened manufacturing and freight. In this blog post, I will explore the machine learning techniques used to power personalized advertisements in the consumer goods space, the resulting expedited rise of consumerism, and how our planet, in turn, is adversely affected.

Data Science in the Retail Industry
Data science enables retailers to utilize customer data in a multitude of ways, actively growing sales and improving profit margins. Recommendation engines consume your purchase history to predict what you’ll buy next. Swaths of transaction data are used to optimize pricing strategy across the board. Computer vision is expanding as well, powering augmented reality features in mobile apps such as IKEA’s, which customers can use to virtually place furniture in their own homes.
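
As a rough illustration of the recommendation-engine idea, here is a minimal item-based collaborative filtering sketch in Python. The products, purchase matrix, and cosine-similarity approach are invented assumptions for illustration only; production retail recommenders are far more elaborate.

```python
# Minimal item-based collaborative filtering sketch (illustrative only).
# Products and purchase histories are invented; the core idea is simply
# to recommend items similar to what a shopper has already bought.
import numpy as np

products = ["sneakers", "yoga mat", "water bottle", "headphones", "backpack"]

# Rows = shoppers, columns = products; 1 means the shopper bought the item.
purchases = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
])

# Cosine similarity between product columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

def recommend(history, top_n=2):
    """Score unpurchased products by their similarity to purchased ones."""
    scores = similarity @ history
    scores[history == 1] = -np.inf          # don't re-recommend owned items
    ranked = np.argsort(scores)[::-1][:top_n]
    return [products[i] for i in ranked]

shopper = np.array([1, 0, 1, 0, 0])          # bought sneakers and a water bottle
print(recommend(shopper))
```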



But arguably the largest use case is personalized marketing and advertising. Both proprietary and third-party machine learning algorithms have massively improved with time, predicting the unique purchases a single consumer will make with tremendous accuracy. According to a 2015 McKinsey report, personalization can deliver five to eight times the ROI on marketing spend and lift sales 10 percent or more [1]. Modern-day retailers understand this lucrative opportunity and, in turn, scramble to assemble expert data science teams. But what of their understanding of the long-term implications beyond turning a profit?

The Rise in Consumerism
As data science continues to assert its dominance in the consumer goods industry, customers are finding it hard to resist such compelling marketing. This pronounced advancement in marketing algorithms has fueled a frenzy of consumer purchasing over the years. According to Oberlo, US retail sales grew to $5.58 trillion in 2020—the highest US retail sales recorded in a calendar year so far, and a 36 percent increase over nine years, from 2011 [2]. These optimized marketing campaigns, coupled with the advent of nearly instantaneous delivery times (looking at you, Amazon Prime), have fostered a culture that sanctions excessive consumer spending.



The Greater Rise in Temperature
To keep up with demand, retailers must produce a higher volume of goods. Unfortunately, this increased production leads to higher pollution rates from both a manufacturing and a freight standpoint. These retailers primarily use coal-based energy for their manufacturing, which emits greenhouse gases into the atmosphere. The goods are then transported in bulk by truck, train, ship, or aircraft, emitting carbon dioxide and further exacerbating the problem.

Although consumer goods production is not solely responsible for all emissions, it undeniably contributes to the accelerating warming of the planet. According to National Geographic, NOAA and NASA confirmed that 2010 to 2019 was the hottest decade since record keeping began 140 years ago [3].



Furthermore, these purchased goods will eventually join Earth’s municipal solid waste, or MSW (the various items consumers throw away after use). The United States Environmental Protection Agency reports that total MSW generation in 2018 was 292.4 million tons, approximately 23.7 million tons more than the amount generated in 2017 and a marked increase from the 208.3 million tons generated in 1990 [4]. The decomposition of organic waste in landfills produces a gas composed primarily of methane, another greenhouse gas contributing to climate change [5]. The negative consequences of this learned culture of consumerism are clear.

What You Can Do
To combat climate change, begin by understanding your own carbon footprint. You can do your own research or use one of the many tools available online, such as a carbon footprint calculator (https://www.footprintcalculator.org). If you incorporate fewer processed foods into your diet, include more locally sourced fruits and vegetables, and cut back on meat, you are taking small but important steps in the fight against climate change. Consider carpooling or taking public transit to work and/or social events to decrease carbon emissions from your commute. Steps like these seem small, but they build good habits and cultivate lifestyle changes that contribute to the health of our planet.

References

[1] https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/Marketing%20and%20Sales/Our%20Insights/EBook%20Big%20data%20analytics%20and%20the%20future%20of%20marketing%20sales/Big-Data-eBook.ashx
[2] https://www.oberlo.com/statistics/us-retail-sales
[3] https://www.nationalgeographic.com/science/article/the-decade-we-finally-woke-up-to-climate-change
[4] https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/national-overview-facts-and-figures-materials#:~:text=The%20total%20generation%20of%20MSW,208.3%20million%20tons%20in%201990.
[5] https://www.crcresearch.org/solutions-agenda/waste#:~:text=The%20decomposition%20of%20organic%20waste,potential%20impact%20to%20climate%20change.


Predictive policing algorithms: Put garbage in, get garbage out
By Elise Gonzalez | March 16, 2022


Image source: https://tinyurl.com/nz8n7xda

In recent years, “data-driven decision making” has seen a big increase in use across industries [1]. One industry making use of this approach, which relies on data rather than just human intuition to inform decisions, is law enforcement. Predictive policing tools have been developed to alert police as to where crime is likely to occur in the future, so that they can more effectively and efficiently deter it.

In a different and unbiased world, maybe tools like this would be reliable. In reality, because of the way they are designed, predictive policing tools merely launder the bias that has always existed in policing.

So, how are these tools designed? Let’s use two popular predictive policing products as examples: PredPol and Azavea’s HunchLab, which have been used in Los Angeles, New York, and Philadelphia, among other, smaller cities [2]. Each of these companies has designed an algorithm, or set of instructions for handling different situations, that ranks locations on their relative future crime risk. These algorithms base that risk on past instances of crime at or around each location, drawn from historical policing data. PredPol uses addresses where police have made arrests or filed crime reports; HunchLab uses the same, as well as addresses to which police have been called in the past [3, 4]. This information is presented to the algorithm as a good and true indicator of where crimes occur. The algorithm then predicts where crimes are likely to occur in the future based on the examples it has seen, and nothing else. Those predictions are used to inform decisions about where police should patrol, or where their presence may be the strongest crime deterrent.
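
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of a hot-spot style scorer: it ranks grid cells purely by an exponentially decayed count of past recorded incidents. The grid cells, record counts, and decay factor are invented for illustration; this is not PredPol’s or HunchLab’s actual model, only the general shape of the approach.

```python
# Simplified hot-spot scoring sketch (illustrative only, not PredPol/HunchLab).
# Each grid cell's risk score is an exponentially decayed count of past
# recorded incidents, so cells with more police records rank higher.
from collections import defaultdict

# (grid_cell, days_ago) for each recorded arrest, report, or call for service.
historical_records = [
    ("cell_12", 1), ("cell_12", 3), ("cell_12", 10),
    ("cell_07", 2), ("cell_07", 30),
    ("cell_21", 45),
]

DECAY = 0.95  # more recent records count more

def risk_scores(records):
    scores = defaultdict(float)
    for cell, days_ago in records:
        scores[cell] += DECAY ** days_ago
    return scores

def patrol_priorities(records, top_n=2):
    scores = risk_scores(records)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(patrol_priorities(historical_records))  # ['cell_12', 'cell_07']

# Feedback loop: patrolling the top cells produces new records there,
# which raises their scores further on the next run.
historical_records += [("cell_12", 0), ("cell_07", 0)]
print(patrol_priorities(historical_records))
```

Because the only input is past police records, whichever cells were patrolled most heavily in the past rise to the top, and the new records generated by patrolling them push them even higher on the next run.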


HunchLab (left) and PredPol (right) user interfaces.
Image sources: https://tinyurl.com/2p8vbh7x (top), https://tinyurl.com/2u2u7cpu (bottom)

Algorithms like these lose their credibility because they base predictions of future crime on past police activity in an area. We know from years of research on the subject that minority and particularly Black communities in the United States are over-policed relative to their majority white counterparts [5]. For example, Black and white people are equally likely to possess or sell drugs in the United States, but Black people are arrested at a rate 3 to 5 times higher than whites nationally [6]. Policing trends like this one cause Black communities to be over-represented in police records. This makes them far more likely to appear as hot-spots for crime, when in reality they are hot-spots for police engagement.

Calls for service also do not represent the actual incidence of crime. Popular media has reported many examples in the last few years of police being called on Black Americans who are simply going about their lives – barbecuing at a park, sitting at Starbucks, or eating lunch [7]. Because of examples like this, presenting calls for service as a good and true representation of where crimes occur is misleading.

In short, predictive policing algorithms do not have minds of their own. They cannot remove bias from the data they are trained on. They cannot even identify bias in that data. They take as fact what we know to be the result of years of biased policing – that more crime happens in neighborhoods with more Black residents, and that less crime happens in majority white neighborhoods. This leads them to make predictions for future crimes that reproduce that bias. This is the idea of garbage in, garbage out: “If you produce something using poor quality materials, the thing that you produce will also be of poor quality” [8]. As these allegedly unbiased algorithms and those like them are increasingly used to make life-altering decisions, it is critically important to be aware of the ways that they can reproduce human bias. In this case, as with many, human bias is never removed from the process of making predictions; it is only made more difficult to see.

References
[1] Harvard Business Review Analytic Services. (2012). The Evolution of Decision Making: How Leading Organizations Are Adopting a Data-Driven Culture. Harvard Business Review. https://hbr.org/resources/pdfs/tools/17568_HBR_SAS%20Report_webview.pdf

[2] Lau, T. (2020, April 1). Predictive Policing Explained. Brennan Center for Justice. Retrieved March 10, 2022, from https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained

[3] PredPol. (2018, September 30). Predictive Policing Technology. https://www.predpol.com/technology/

[4] Team Upturn. HunchLab — a product of Azavea · Predictive Policing. (n.d.). Team Upturn Gitbooks. https://teamupturn.gitbooks.io/predictive-policing/content/systems/hunchlab.html

[5] American Civil Liberties Union. (2020, December 11). ACLU News & Commentary. Retrieved March 10 2022 from https://www.aclu.org/news/criminal-law-reform/what-100-years-of-history-tells-us-about-racism-in-policing/

[6] Human Rights Watch. (2009, March 2). Decades of Disparity. Retrieved March 10, 2022, from https://www.hrw.org/report/2009/03/02/decades-disparity/drug-arrests-and-race-united-states#

[7] Hutchinson, B. (2018, October 20). From “BBQ Becky” to “Golfcart Gail,” list of unnecessary 911 calls made on blacks continues to grow. ABC News. Retrieved October 3, 2022, from https://abcnews.go.com/US/bbq-becky-golfcart-gail-list-unnecessary-911-calls/story?id=58584961

[8] The Free Dictionary by Farlex. (n.d.) garbage in, garbage out. Collins COBUILD Idioms Dictionary, 3rd ed. (2012). Retrieved March 10 2022 from https://idioms.thefreedictionary.com/garbage+in%2c+garbage+out


Let’s talk about AirTags!
By Jillian Luicci | March 16, 2022

Apple released a product called the AirTag in April 2021. The product costs $29 and is advertised to help users keep track of their belongings. (Apple, 2022) Some of the items Apple suggests tracking with an AirTag include keys, bikes, and luggage. The AirTag integrates with Find My, the Apple application previously used to locate Apple products like iPhones and AirPods; because the tag itself has no GPS, it uses Bluetooth to relay its position through the GPS of other nearby Apple devices.

However, there have been many recent incidents of stalking using AirTags highlighted in news outlets. (Levitt, 2022) In most of these cases, the victim received a notification on their phone that an AirTag had been tracking them for some number of hours. Victims feel violated because they did not consent to their location data being tracked in the first place, and they certainly did not consent to the dissemination of that data to the AirTag’s owner.


The California Attorney General has released privacy recommendations that appear to be violated by this abuse of AirTags for stalking. (California, 2014) First, they recommend a “Do Not Track” principle, which traditionally refers to notifying and requesting consent from users before tracking their clicks and activity while web browsing. While the AirTag victim is not web browsing, the principle still applies: regardless of the technology used for tracking, it speaks to the necessity of consent before passively tracking people’s data. Additionally, the recommendations include principles around data sharing, individual access, and accountability, all of which expose gaps in Apple’s privacy policy. Even so, the recommendations are not exhaustive enough to protect victims, who never consented to, and likely never reviewed, Apple’s AirTag policies.

When stalking victims become aware of the device via the Apple alert, they often seek the assistance of police to deactivate the AirTag. They typically leave disappointed because the police are unable to assist without the physical device, which is often difficult to find due to its small size and the stalker’s deliberate camouflage. Notably, only Apple users can receive alerts about AirTags tracking them, which excludes Android users from this safety control.

As a result of these stalking cases, Apple recently released updates to the AirTag and Find My. (Apple, 2022) These include software updates to notify AirTag users that the device is not meant for tracking people and to provide more precise tracking when a device is detected nearby. While these updates clarify the intent of the product, the changes do not promote informed consent, nor do they prevent unwanted data dissemination. Further, these changes can only be effective if the victim is an informed Apple user. Risks remain for targeted people who do not know they can be tracked by an AirTag or how to remove an unwanted one.

Apple should consider performing an ethical assessment of AirTags. The Belmont Report is a respected paper which defines three basic ethical principles: respect for persons, beneficence, and justice. (Belmont, 1979) The application of AirTags for stalking violates all three of these principles. First, AirTags violate respect for persons because victims do not consent to the collection and dissemination of their data. Second, beneficence is violated because the physical risks related to stalking far outweigh the benefits of finding an item such as keys. Third, justice is violated because it is illegal to stalk people. Overall, this product has potentially harmful applications to unsuspecting people. While Apple has attempted to resolve some of the concerns, there are still many glaring problems with AirTags that Apple should address immediately.

References

[1] Apple. 2022. AirTag. [online] Available at: <https://www.apple.com/shop/buy-airtag/airtag/1-pack?afid=p238%7CsyU1UIAS3-dc_mtid_1870765e38482_pcrid_516270124941_pgrid_116439818610_pntwk_g_pchan_online_pexid__&cid=aos-us-kwgo-pla-btb–slid—product-MX532AM%2FA> [Accessed 11 March 2022].

[2] Levitt, M., 2022. AirTags are being used to track people and cars. Here’s what is being done about it. [online] Npr.org. Available at: <https://www.npr.org/2022/02/18/1080944193/apple-airtags-theft-stalking-privacy-tech> [Accessed 11 March 2022].

[3] California Attorney General. Making Your Privacy Practices Public: Recommendations on Developing a Meaningful Privacy Policy. May 2014. https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf

[4] Apple. 2022. AirTag. [online] Available at: <https://www.apple.com/shop/buy-airtag/airtag/1-pack?afid=p238%7CsyU1UIAS3-dc_mtid_1870765e38482_pcrid_516270124941_pgrid_116439818610_pntwk_g_pchan_online_pexid__&cid=aos-us-kwgo-pla-btb–slid—product-MX532AM%2FA> [Accessed 11 March 2022].

[5] The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf


Online Privacy in a Global App Market
By Julia H. | March 9, 2022

The United States’ west coast is home to thousands of technology companies trying to innovate, find a niche and make it big. Inevitably, many of the products developed here reflect their Western roots and don’t adequately consider the risks to their most vulnerable users, who may be thousands of miles away. This was the case with Grindr, which prides itself on being the world’s largest social networking app for gay, bi, trans, and queer people. Instead of being a safe space for a marginalized community, a series of security failures combined with insufficient emphasis on user privacy has put some LGBTQ+ communities around the world at serious risk over the past decade. Grindr has thankfully responded by making updates that focus on the safety of its users. Still, much can be learned from the ways the platform was abused and how different implementation decisions can be made to protect users, especially in high-stakes situations.


Human Dignity Trust, Map of Countries that Criminalise LGBT People, 2022

Today, “71 jurisdictions criminalise private, consensual, same-sex sexual activity” [1]. Even in places where it isn’t a criminal offence, individuals can find themselves facing harassment and other hate crimes due to their gender or sexual orientation. In Egypt, for example, police have been known to entrap gay men by detecting their location on apps like Grindr and using the existence of the app itself, as well as screenshots and messages from the app, as part of a debauchery case [2]. This has been a particularly prevalent problem since 2014, when Grindr security issues, especially surrounding easy access to user location by non-app users, were first brought to light by cybersecurity firm Synack [3]. Grindr’s first response was to note that location sharing can be disabled and to disable the feature by default in well-known homophobic countries such as Russia, Nigeria, Egypt, Iraq and Saudi Arabia. Despite this, triangulating the location of a user was still possible due to the order in which profiles appear in the app [4].


@Seppevdpll, Trilateration via Grindr, 2018
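
The figure above illustrates that attack. As a minimal sketch of the underlying math, the snippet below recovers a target’s position from distances reported at three observer positions; all coordinates and distances are invented for illustration, and real attacks worked with whatever distance or ordering information the app exposed.

```python
# Trilateration sketch (illustrative): recover a target's position from
# distances reported at three known observer positions. All coordinates
# and distances below are invented.
import numpy as np

observers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # spoofed positions
target = np.array([2.5, 1.5])                                # unknown in practice
distances = np.linalg.norm(observers - target, axis=1)       # what the app leaks

# Subtracting the first circle equation from the others yields a linear system.
x0, y0 = observers[0]
A, b = [], []
for (xi, yi), di in zip(observers[1:], distances[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0)])
    b.append((xi**2 - x0**2) + (yi**2 - y0**2) + (distances[0]**2 - di**2))
estimate, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(estimate)  # ~ [2.5, 1.5]
```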

Sharing exact user location with third parties, or enough information to triangulate an individual, violates privacy laws such as the GDPR and California’s CCPA. That is a huge miss for Grindr, quite apart from how the information could be abused by conservative governments. In parts of California, where Grindr is based, there is a large, vibrant and welcoming gay community, and there is a certain anonymity in numbers that can be lost elsewhere. Thus, maintaining the safe online space the app was likely meant to be is not just about implementing technical security practices and adhering to legislation; it means taking into account the cultural differences among app users when designing interactions.

Grindr has faced much scrutiny and backlash and has luckily reacted with some updates to its application. They have launched kindr, a campaign to promote “diversity, inclusion, and users who treat each other with respect” [5] that included an update to their Community Guidelines. They have also introduced the ability for users to unsend messages, set an expiration time on the photos they send, and block screenshots [6]. These features, in combination with the use of VPNs, have made it easier for members of the LGBTQ+ community to protect themselves while using Grindr.


Kindr Grindr, 2018

Having a security- and privacy-first policy when developing apps should be the standard. Companies all over the world should take responsibility for protecting their users through the decisions made during design and implementation. Moreover, given the global audience that most companies target these days, they should strive to consider the implications of their technology being released in settings different from those of its developers, particularly by including input from different types of users during the development process.

Citations
[1] “Map of Countries That Criminalise LGBT People.” Human Dignity Trust, https://www.humandignitytrust.org/lgbt-the-law/map-of-criminalisation.
[2] Brandom, Russell. “Designing for the Crackdown.” The Verge, The Verge, 25 Apr. 2018, https://www.theverge.com/2018/4/25/17279270/lgbtq-dating-apps-egypt-illegal-human-rights.
[3] “Grindr Security Flaw Exposes Users’ Location Data.” NBCNews.com, NBCUniversal News Group, 28 Mar. 2018, https://www.nbcnews.com/feature/nbc-out/security-flaws-gay-dating-app-grindr-expose-users-location-data-n858446.
[4] @seppevdpll. “It Is Still Possible to Obtain the Exact Location of Millions of Men on Grindr.” Queer Europe, https://www.queereurope.com/it-is-still-possible-to-obtain-the-exact-location-of-cruising-men-on-grindr/.
[5] “Kindr Grindr.” Kindr, Grindr, 2018, https://www.kindr.grindr.com/.
[6] King, John Paul. “Grindr Rolls out New Features for Countries Where LGBTQ Identity Puts Users at Risk.” Washington Blade: LGBTQ News, Politics, LGBTQ Rights, Gay News, 13 Dec. 2019, https://www.washingtonblade.com/2019/12/13/grindr-rolls-out-new-features-for-countries-where-lgbtq-identity-puts-users-at-risk/.

Singer, Natasha, and Aaron Krolik. “Grindr and OkCupid Spread Personal Details, Study Says.” New York Times, New York Times, 13 Jan. 2020, https://www.nytimes.com/2020/01/13/technology/grindr-apps-dating-data-tracking.html.

“The Digital Rights of LGBTQ+ People: When Technology Reinforces Societal Oppressions.” European Digital Rights (EDRi), 15 Sept. 2020, https://edri.org/our-work/the-digital-rights-lgbtq-technology-reinforces-societal-oppressions.


Your phone is following you around.
By Theresa Kuruvilla | March 9, 2022

We live in a technologically advanced world where smartphones have become an essential part of our daily lives. Most individuals start their day with a smartphone wake-up alarm, scrolling through daily messages and news items, checking traffic conditions, work emails, calling family, or watching a movie or sports; the smartphone has become a one-stop-shop for everything. However, many people are unaware of what happens behind the screens.

From the time you put the SIM card in the phone, regardless of whether it is an Android or an iPhone, the phone’s IMEI, hardware serial number, SIM serial number, IMSI, and phone number are sent to Apple or Google. The telemetry applications of these companies also access the MAC addresses of nearby devices to capture the phone’s GPS location. Many people think turning off GPS location on the phone prevents them from being tracked. They are mistaken. These companies capture your every movement thanks to advances in cell phone technology.

Under the law, listening to someone else’s phone call without a court order is a federal crime. But no rules prevent private companies from capturing citizens’ precise movements and selling that information for a price. This shows the dichotomy between legacy methods of privacy invasion and the lack of regulation around intrusive modern technologies.

On January 6, 2021, a political rally of Trump supporters turned into a riot at the US Capitol. The event’s digital detritus has been key to identifying the riot participants: location data, geotagged photos, facial recognition, surveillance cameras, and crowdsourcing. That day, the data collected included location pings for thousands of smartphones, revealing around 130 devices inside the Capitol exactly when Trump supporters stormed the building. There were no names or phone numbers; however, with the proper use of available technology, many devices were connected to their owners, tying anonymous locations back to names, home addresses, social networks, and phone numbers of people in attendance. The disconcerting fact is that the technology to gather this data is available for anyone to purchase at an affordable price, and third parties use it routinely. Most consumers whose names appear in such datasets are unaware their data were collected, and the data are insecure and vulnerable to law enforcement and bad actors who might use them to inflict harm on innocent people.


(Image: Location pings from January 6, 2021, rally)

Government tracking of the Capitol mob riot

For law enforcement to use this data, they must go through courts, warrants, and subpoenas. The data in this example is a bird’s-eye view of an event. But the hidden story behind it is how the new digital era, and the various tacit agreements we accept, invade our privacy.

When it comes to law enforcement, this data is primary evidence. On the other hand, these IDs tied to smartphones allow companies to track people across the internet and on their apps. Even though these data are supposed to be anonymous, several tools allow anyone to match the IDs with other databases.

The example below, from the New York Times, shows one way to identify an anonymous ping.



While some Americans might cheer the use of location databases to identify Trump supporters, many do not realize that these commercial databases invade their own privacy as well. The demand for location data grows daily, and deanonymization has become simpler. While smartphone makers might argue that they provide options to minimize tracking, these options offer only an illusion of control over individual privacy. Location data is not the only aspect; tech companies capture every activity through the smart devices deployed around you, ostensibly to make your life better. Under the Belmont principle of beneficence, we must maximize the advantages of technology while minimizing the risks. In this case, even though consumers receive many benefits such as better traffic maps, safer cars, and good recommendations, surreptitiously gathering this data, storing it forever, and selling it to the highest bidder puts privacy at risk. Privacy laws such as the GDPR and CCPA are in place to protect consumer privacy, but they do not protect all people in the same manner. People should have the right to know how their data are gathered and used. They should be given the freedom to choose a life without surveillance.

References:

Thompson, Stuart A. and Warzel, Charlie (2021, February 6). They stormed the Capitol. Their apps tracked them. The New York Times. https://www.nytimes.com/2021/02/05/opinion/capitol-attack-cellphone-data.html

Thompson, Stuart A. and Warzel, Charlie (2019, December 21). How Your Phone Betrays Democracy. The New York Times. https://www.nytimes.com/interactive/2019/12/21/opinion/location-data-democracy-protests.html?action=click&module=RelatedLinks&pgtype=Article

The Editorial Board, New York Times (2019, December 21). Total surveillance is not what America signed up for. The New York Times. https://www.nytimes.com/interactive/2019/12/21/opinion/location-data-privacy-rights.html?action=click&module=RelatedLinks&pgtype=Article

Nissenbaum, Helen F. (2011). A contextual approach to privacy online. Daedalus, the Journal of the American Academy of Arts & Sciences.

Solove, Daniel J. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review. 154:3 (January 2006), p. 477. https://ssrn.com/abstract=667622

The Belmont Report (1979). https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf


A market-based counterweight to AI driven polarization
By Anonymous | March 9, 2022

Stuart Russell examines the rise and existential threats of AI in his book Human Compatible. While he takes on a broad range of issues related to AI and machine learning, he begins the book by pointing out that AI already shapes the way we live today. Take the example of a social media algorithm tasked with increasing user engagement and revenue. It’s easy to see how an AI might do this, but Russell presents an alternative theory of the problem: “The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable.” Russell pulls on this thread to its conclusion: algorithms tasked with one objective create other harms in pursuit of it, in this case polarization and rage to drive engagement.

As Russell states, AI is already a major factor in our lives today, and so is the harm it creates. One of these harms is the polarization caused by the distribution of content online through search engines and social networks. Is there a counterweight to the current financial incentives of content distribution networks, such as search engines and social networks, that could help create a less polarized society?

The United States is more polarized than ever, and the pandemic is only making things worse. Just before the 2020 presidential election, “roughly 8 in 10 registered voters in both camps said their differences with the other side were about core American values, and roughly 9 in 10—again in both camps—worried that a victory by the other would lead to “lasting harm” to the United States.” (Dimock & Wike, 2021) This gap only widened over the summer leading to Dimock and Wike concluding that the US has become more polarized more quickly than the rest of the world.


(Kumar, Jiang, Jung, Lou, & Leskovec, MIS2: Misinformation and Misbehavior Mining on the Web)

While we cannot attribute all of this to the rise of digital media, social networks, and online echo chambers, they are certainly at the center of the problem and one of the major factors. A study conducted by Hunt Allcott, Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow, published in the American Economic Review, found that users became less polarized when they stopped using social media for only a month. (Allcott, Braghieri, Eichmeyer, & Gentzkow, 2020)

Much of the public focus on social media companies and polarization has centered on the legitimacy of the information presented: fake news. Freedom of speech advocates have pointed out that the “fake news” label can stifle speech. While many of these voices today come from the right, and much of the evidence of preferential treatment of points of view online does not support their argument, the concern itself is valid and should be taken seriously on its face. If we accept that there is no universal truth upon which to build facts, then what is labeled fake news is simply what isn’t accepted today. This is not to say that every claim online may eventually be seen as true, but that something being perceived as fake or false today doesn’t mean it is fake.

This means any system meant to clean up the divisive online echo chamber would need to be based not on truth but on the perspectives presented. Supreme Court Justice Louis D. Brandeis famously argued that the counter to false speech is more speech (Jr., 2011), so it should be possible to create an online environment where users are presented with multiple points of view instead of the same one over and over.

Most content recommendation algorithms are essentially cluster models: the algorithm presents a user with articles whose content and point of view are similar to articles they have liked in the past. The simple explanation is that if you like one article, you’ll also be interested in a similar one. If I like fishing articles, I’m more likely to see articles about fishing; if I read articles about overfishing, I’m going to see articles with that perspective instead. This is a simple example of the problem: depending on the point of view one starts with, one only gets dragged deeper into that hole, as the sketch below illustrates. Apply this to politics and the thread of polarization is obvious.
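
A minimal, hypothetical sketch of that clustering intuition: if articles are represented as (topic, stance) vectors and the recommender always returns the nearest neighbor of whatever the user just read, a pro-fishing reader keeps getting pro-fishing articles and an anti-overfishing reader keeps getting anti-overfishing articles. The article titles and vectors are invented for illustration.

```python
# Echo-chamber sketch (illustrative): nearest-neighbour recommendation over
# invented (topic, stance) vectors, where stance runs from -1 (anti) to +1 (pro).
import numpy as np

articles = {
    "fishing tips":       np.array([1.0,  0.2]),
    "best fishing spots": np.array([0.9,  0.3]),
    "new lure reviews":   np.array([1.0,  0.1]),
    "overfishing crisis": np.array([0.8, -0.9]),
    "ban trawling now":   np.array([0.7, -1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(last_read):
    """Return the article most similar to the one the user just read."""
    anchor = articles[last_read]
    candidates = {t: cosine(v, anchor) for t, v in articles.items() if t != last_read}
    return max(candidates, key=candidates.get)

print(recommend("fishing tips"))       # another pro-fishing article
print(recommend("overfishing crisis")) # another anti-fishing article
```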

Countering this is possible. Categorize content on multiple vectors, including topic and point of view, then present similar topics with opposing points of view, offering not only more speech but more diverse speech and putting the power of decision back in the hands of the human and away from the distributor. In response to the recent Joe Rogan controversy, Spotify has pledged to invest $100 million in more diverse voices but has not presented a plan to promote them on the platform to create a more well-rounded listening environment for its users. Given how actively Spotify promotes Joe Rogan, it needs a plan to ensure that different voices are as easy to discover, not just available.

The hindrance to any private entity adopting this is the same as with almost any required change at a shareholder-driven company: financial. As Russell pointed out, polarizing the public is profitable because it makes users more predictable and easier to keep engaged. But there is an existing model in which a broader harm is calculated and paid for by a company, creating an entirely new market in the process: cap and trade.


(How cap and trade works, 2022)

Cap and trade is the idea that companies are allowed to pollute a certain amount. That creates two types of companies: those that pollute more than their allocation and those that pollute less. Cap and trade allows over-polluters to buy the unused allocation of under-polluters, keeping the total in equilibrium.

A similar model could apply to speech: companies that algorithmically promote one point of view would need to offset that distribution by also promoting the opposing point of view to the same user or group of users. This has two effects. First, it creates a financial calculus for content distributors over whether to keep promoting a single point of view to a subset of users, a model that has been highly profitable in the past, once they must pay for the balance of speech. Second, it creates a new market of companies selling offsets by promoting to specific groups the opposing points of view they are not already receiving and are less likely to engage with, compensated for those efforts by the polarizing offenders.

Before this reaches its logical conclusion, with two companies each promoting opposing points of view and each equally guilty of polarization, let’s talk about how this might work in practice. There are some very complicated issues that would make it difficult to implement, such as personal privacy and private companies’ strategy of keeping what they know about their users proprietary.

A social media company presents an article to a user arguing that mask mandates create an undue negative effect on society. It has then pushed that user toward one end of the spectrum and would be required to present that user with an article arguing that mask mandates are necessary to slow the spread of the virus. The social media company could either present that counter-balancing piece itself or sell the obligation to another company willing to create the offset by presenting it on an outside platform, creating equilibrium. Here the social media company must do the calculus of whether it is more profitable to continue polarizing that user on its own platform or to create balance within its own walls.

This example is clearly oversimplified, but ‘level of acceptance’ could be quantified, and companies could be required to create a balance of opinion for specific users or subsets of users. If 80% of publications are publishing one idea and 20% are presenting the opposing idea, then content distributors would be required to create an 80/20 balance for their users, as the sketch below illustrates.
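
As a toy illustration of that quantification step, the sketch below measures how far a user’s feed is from a mandated 80/20 exposure split and computes the offset impressions (and a notional cost) a distributor would owe. The target split, impression counts, and per-impression offset price are invented assumptions, not a real market design.

```python
# Toy "speech offset" calculation (illustrative only). Assumes a mandated
# 80/20 exposure target, invented impression counts, and an invented
# offset price per impression.
TARGET_SHARE_A = 0.80          # mandated share for viewpoint A
OFFSET_PRICE = 0.002           # dollars per offset impression (assumption)

def offset_owed(impressions_a, impressions_b):
    """Extra viewpoint-B impressions needed to reach the 80/20 exposure target."""
    total = impressions_a + impressions_b
    # Solve (impressions_b + x) / (total + x) = 1 - TARGET_SHARE_A for x.
    x = ((1 - TARGET_SHARE_A) * total - impressions_b) / TARGET_SHARE_A
    x = max(0.0, x)
    return x, x * OFFSET_PRICE

# A user saw 980 articles for viewpoint A and only 20 for viewpoint B.
extra_b, cost = offset_owed(980, 20)
print(f"owe ~{extra_b:.0f} offset impressions, costing ${cost:.2f}")
```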

This is an imperfect starting point for creating algorithmic balance online, but it opens a discussion of an incentive- and market-based approach to fairness that could help de-polarize one of the most polarized societies at one of its most polarized moments.

Bibliography
Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The Welfare Effects of Social Media. American Economic Review.
Dimock , M., & Wike, R. (2021, March 29). America Is Exceptional in Its Political Divide. Retrieved from Pew Trusts: https://www.pewtrusts.org/en/trust/archive/winter-2021/america-is-exceptional-in-its-political-divide
How cap and trade works. (2022). Retrieved from Environmental Defense Fund: https://www.edf.org/climate/how-cap-and-trade-works
Jr., D. L. (2011, December). THE FIRST AMENDMENT ENCYCLOPEDIA. Retrieved from mtsu.edu: https://www.mtsu.edu/first-amendment/article/940/counterspeech-doctrine#:~:text=Justice%20Brandeis%3A%20%22More%20speech%2C%20not%20enforced%20silence%22&text=%E2%80%9CIf%20there%20be%20time%20to,speech%2C%20not%20enforced%20silence.%E2%80%9D
Kumar, S., Jiang, M., Jung, T., Lou, R., & Leskovec, J. (2018). MIS2: Misinformation and Misbehavior Mining on the Web. The Eleventh ACM International Conference.


Dangers of Predicting Criminality
By Kritesh Shrestha | March 9, 2022

Facial recognition as a technology has seen major improvements within the last five years, and today it is common to use facial recognition commercially for biometric identification. According to tests conducted by the National Institute of Standards and Technology, the highest-performing facial identification algorithm as of April 2020 had an error rate of 0.08%, compared with 4.1% for the highest-performing algorithm in 2014. [3] Though these improvements are commendable, concerns arise when attempting to apply these algorithms to high-stakes issues such as criminality.

Tech to Prison Pipeline
On May 5th, 2020, Harrisburg University announced that a publication entitled “A Deep Neural Network Model to Predict Criminality Using Image Processing” was being finalized. In this publication, a group of Harrisburg University professors and a Ph.D. student claim to have developed automated facial recognition software capable of predicting whether someone is likely to become a criminal. [4] This measure of criminality is said to have 80% accuracy with no racial bias, using only a picture of an individual’s face. The data behind the software are biometric and criminal legal data provided by the New York City Police Department (NYPD). While the intent of the software is to help prevent crime, it caught the eye of 2,435 academics who signed an open letter demanding that the research remain unpublished.

Those that signed the open letter, the Coalition for Critical Technology (CCT), raised concerns over the data used to create the algorithm. The CCT argue that data generated by the criminal justice system cannot be used for classifying criminality because the data are unreliable. [5] The dataset reflects a history of racially biased and unjust convictions, which feeds that same bias into the algorithm. Another study, “The ‘Criminality from Face’ Illusion”, looking into the plausibility of predicting criminality with facial recognition, asserts that “there is no coherent definition on which to base development of such an algorithm. Seemingly promising experimental results in criminality-from-face are easily accounted for by simple dataset bias”. [2] A study conducted by the National Criminal Justice Reference Service concluded that for sexual assault alone, wrongful conviction occurred at a rate of 11.6%. [6] Using unreliable data to classify an individual’s likelihood of committing crimes is harmful, as it would validate unjust practices that have occurred over the years.

If an individual was wrongly convicted and is awaiting exoneration, their family members or those who look like them might be labeled as “likely” to commit crimes. The study announced by Harrisburg University has since been pulled from publication following the public discussion and the CCT’s open letter.

Resurgence of Physiognomy
While the use of facial recognition algorithms as a predictor is relatively new, the practice of using outer appearance to predict characteristics, physiognomy, dates back to the 18th century. [1] Physiognomy has, in the past, been used to promote racial bigotry, block immigration, justify slavery, and permit genocide. While physiognomy has been disproven, the pseudoscience seems to be on the rise with the increasing use of facial recognition. The issue with physiognomy lies in the belief that physical features are good indicators of complex human behavior. This simplistic belief is problematic in that it skips several levels of abstraction, ignoring the…role of learning and environmental factors in human development. [2] Predicting criminality in a vacuum might not be harmful in itself, but given the history of physiognomy, it seems regressive.

Conclusion
The use of facial features as an identifier of criminality is inherently biased, as it means accepting the assumption that individuals with certain facial features are more likely to commit crime. Knowing that bias exists within our criminal justice system, it is irresponsible to recommend the use of criminal justice data to predict criminality. The implication of an algorithm being able to predict criminality is frightening, as it could be used to further unjust actions.

Open Ended Thought Experiment in Predicting Criminality
What would the world look like if an algorithm had reliable data and were 100% accurate at predicting criminality?
– If a child were born into this world with all of the features that classify them as “likely to commit crime,” should that child be monitored?
– What rights would that child have to their own privacy if the algorithm is certain the child will become a criminal?
– What does it mean for the future of the child if they are denied rights due to this classification?

References
[1] Arcas, Blaise Aguera y, et al. “Physiognomy’s New Clothes.” Medium, 20 May 2017, https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a.
[2] Bowyer, Kevin W., et al. “The ‘Criminality from Face’ Illusion.” IEEE Transactions on Technology and Society, vol. 1, no. 4, 2020, pp. 175–183, https://doi.org/10.1109/tts.2020.3032321.
[3] Crumpler, William. “How Accurate Are Facial Recognition Systems – and Why Does It Matter?” Center for Strategic and International Studies, 16 Feb. 2022, https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter#:~:text=Facial%20recognition%20has%20improved%20dramatically,Standards%20and%20Technology%20(NIST).
[4] “HU Facial Recognition Software Predicts Criminality.” Harrisburg University, 5 May 2020, https://web.archive.org/web/20200506013352/https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/.
[5] Coalition for Critical Technology. “Abolish the #TechToPrisonPipeline.” Medium, 21 Sept. 2021, https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16.
[6] Walsh, Kelly, et al. The Author(s) Shown below Used Federal Funding Provided by … Office of Justice Programs, 1 Sept. 2017, https://www.ojp.gov/pdffiles1/nij/grants/251115.pdf.


Considerations for Collecting Social Determinants of Health in Healthcare
By Anonymous | March 9, 2022

I previously worked in a group that built healthcare technology solutions and ran studies to understand their efficacy. One of the studies that I worked on involved capturing Social Determinants of Health (SDOH). In this blog post I will give a brief overview of SDOH within healthcare systems and then think through some of the questions and considerations for collecting SDOH at the point of care.

Background


Figure 1: Overview of Social Determinants of Health
Image source: https://www.kff.org/racial-equity-and-health-policy/issue-brief/beyond-health-care-the-role-of-social-determinants-in-promoting-health-and-health-equity/

SDOH are factors in people’s lives that impact health outcomes and quality of life. These factors include economic stability, physical environment, access to resources, community context, and access to healthcare. (1) There have been many studies showing that SDOH have an impact on health outcomes. (2) Motivated by the need to increase health equity, many healthcare systems are starting to collect SDOH. (3)

Increasingly, Electronic Health Records (EHRs) include fields for collecting SDOH data, which means that SDOH data entered into the EHR become part of the patient’s health record. (4) Healthcare systems are storing, viewing, and in some cases analyzing patient SDOH data. This also means that patient SDOH can be viewed and analyzed in combination with patient medical data.

Some healthcare systems have started collecting SDOH data without a clear plan for how to use them. There have been targeted healthcare programs to help address SDOH, such as Kaiser Permanente’s Healthy Eating Active Living Zones Initiative in California, which have had positive results. (5) But overall there are “inadequate healthcare-based solutions for the core problems such as access to care, poverty and food insecurity”. (4) In addition, even though most clinicians recognize the need for treating patients as a whole, SDOH are not their main area of expertise. (6)

Considerations
Here are some considerations for the lifecycle of collecting, storing and analyzing SDOH data within healthcare:

Informed Consent: There are no clear plans in place for how SDOH data will be used, which creates challenges in gathering informed consent given the lack of clarity around what will happen with the data.

Data Completeness: Some communities, especially those at higher risk, are more likely to be hesitant to share SDOH with their clinicians. (7) This creates challenges of self selection bias for the data that are collected. It also creates challenges with the analysis and eventual interventions since the data that are collected are likely to be an incomplete view.

Codification: According to the Healthcare Information and Management Systems Society, some SDOH factors have been codified by the International Classification of Diseases (ICD) but others are still not available. (4) In addition there is no standardized method or survey for collecting SDOH from patients. Not only does this put more emphasis on the SDOH that have been codified but it also makes it difficult to understand and share results.

Storage and Privacy: The Health Insurance Portability and Accountability Act (HIPAA) outlines 18 identifiers that are categorized as Protected Health Information (PHI). (8) HIPAA regulates that PHI data have heightened data security and privacy standards associated with them.


Figure 2: Overview of the 18 identifiers of Protected Health Information
Image source: https://www.iri.com/solutions/data-masking/hipaa/

All healthcare data have heightened security and privacy standards, but PHI has the highest level. One challenge of collecting SDOH data is that they are highly sensitive but do not currently fall within the PHI identifier list, so they do not have the same level of security and privacy regulations associated with them. Both this lack of regulatory clarity and the possibility that someone could access such a broad dataset about an individual raise concerns about the potential for harm if these data were leaked.
Actionability: Clinicians will be asked to collect and consider patients’ SDOH as part of the care process, but most clinicians have not been trained in how to incorporate SDOH into the treatment plan. (9) This raises questions about standards of care. It also raises questions about why the data should be collected without a clear plan for use.
Sharing: One of the goals for collecting SDOH data is to improve health outcomes. Some of the potential solutions for improving SDOH are to implement policies and add more resources to communities of need. In order to influence and help implement these solutions, either patient data or the analysis of patient data would need to be shared. This raises some concerns about whether the patients know that their data would be used and shared in this way.

Conclusion
SDOH contribute to approximately 80% of patient conditions and mortality. (14) It’s imperative to address SDOH needs, disparities in healthcare, and work towards more equitable care. It’s equally important to make sure that we are not introducing new challenges, with data and privacy, that could potentially negatively impact patients.

References
https://www.cdc.gov/socialdeterminants/cdcprograms/index.htm
https://nam.edu/social-determinants-of-health-101-for-health-care-five-plus-five/
https://health.gov/healthypeople/objectives-and-data/social-determinants-health
https://www.ncbi.nlm.nih.gov/books/NBK222128/
https://www.himss.org/resources/overcoming-obstacles-social-determinants-health
https://www.ncsl.org/portals/1/documents/health/HealthDisparities1213.pdf
https://www.fortherecordmag.com/archives/JF20p28.shtml
https://www.healthify.us/healthify-insights/benefits-and-challenges-of-sharing-sdoh-data

Beyond Health Care: The Role of Social Determinants in Promoting Health and Health Equity


Predictive Models as a Means to Influence Consumer Behavior
By Erick Martinez | March 9, 2022

Background
The world’s largest and most successful tech companies have built their wealth selling ads and promoting various products and services. A significant portion of that success comes from their ability to market and personalize ads down to the individual level, to serve ads which are continuously more and more “relevant” to the consumer. As tech continuously amasses even more granular data and develops increasingly sophisticated models, will their influence become a problem for individual decision making? Should we or could we set a practical ethical limit on the improvement of potent stimuli based on deep learning and other predictive analyses relying on big data? I don’t think there’s any present evidence to support the idea that tech companies can direct our every move in some sort of apocalyptic post-modern sci-fi sort of way. I do believe however, that tech companies have a degree of influence over their users which is at best, significantly persuasive and at worst, manipulative and coercive.

Due Process
I’d like to borrow a legal framework that applies quite directly to our case. Due process outlines the entitlements allowed to an individual throughout their treatment in various legal settings. The need to expand the rights of individuals with respect to big-data-based systems is echoed in “Big Data and Predictive Reasonable Suspicion,” which concerns the extent to which law enforcement can apply big-data-based systems in order to “know” a suspect [1]. Such systems circumvent the protections afforded to every citizen against unreasonable search and seizure: armed with predictive models and extensive databases, law enforcement can justify the seizure of a suspect, a far cry from the limited “small” data available to law enforcement in traditional settings [1]. In our consumer context, adequate due process would allow an individual to appeal a specific model’s determinations, its data sources, the extent of personalization permitted in the advertisements they receive, and the persuasive methods found to be effective against them. Due process would have been especially useful for Uber drivers, as detailed in “How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons” by the New York Times.

Uber made use of various known behavioral science mechanisms: loss aversion, income targeting, compulsion looping, all informed by the massive amounts of data collected on their drivers [2]. Similar techniques can be seen in social media/entertainment sites such as Google, Facebook, Instagram, etc. The fear of missing out on a particular aspect of social groups is reinforced by the ephemeral posting structures of such platforms and mirrors the loss aversion tactics employed by Uber. Compulsion looping is exemplified via the transition to an endless scroll as well as the timed nature of various push notifications; these mechanisms serve to confine the user in a loop of anticipation, challenge, and reward [3].

Conclusion
Users should be able to see how their actions are being influenced and to what extent they are affected; they should be able to see which features factor into how ads are presented and structured within the platform whenever that structure is informed by the data gathered on the individual. The lack of such information was harrowing in the case of Uber drivers: because drivers are independent contractors, they cannot be compelled to work a specific schedule; yet using insights garnered from driver data, Uber was able to compel drivers toward specific locations that were more profitable for Uber but not necessarily for the driver. A similar argument is made by digital media companies: offering up more of your data makes ads more relevant for you, which is framed as a benefit for the consumer. However, relevant ads are very profitable for these companies, and users might not want the ads at all, regardless of their relevance [4].

References

[1] Big Data and Predictive Reasonable Suspicion. Andrew Guthrie Ferguson. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2394683
[2] How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons. New York Times. https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html
[3] The Compulsion Loop Explained. Game Developer. https://www.gamedeveloper.com/business/the-compulsion-loop-explained
[4] Experiencing Social Media Without Ads. Zhivko Illeieff. https://medium.com/swlh/experiencing-social-media-without-ads-56576974b40b


Valuable AI versus AI with values
By Ferdous Alam | March 9, 2022

The landscape
Many academics and researchers posit that human-level artificial intelligence will emerge within the coming decades. In 2009, 21 AI experts participating in the AGI-09 conference estimated that AGI (artificial general intelligence) would arrive around 2050, and plausibly sooner. In 2012/2013, Vincent C. Muller, the president of the European Association for Cognitive Systems, and Nick Bostrom of the University of Oxford conducted a survey of AI researchers in which 60% responded that AGI is likely to arrive before 2040. In May 2017, 352 AI experts who had published at the 2015 NIPS and ICML conferences were surveyed, yielding an aggregate estimate of a 50% chance that AGI will occur by 2060. In 2019, 32 AI experts participated in a survey on AGI timing, with 45% of respondents predicting a date before 2060. [1]

There is little contention or disagreement about the benefits AI will provide by analyzing data and integrating information at a much faster rate than humanly possible. However, how we utilize those insights and apply them to decision making is not an easy problem to solve. The Brookings Institution notes that “The world is on the cusp of revolutionizing many sectors through artificial intelligence, but the way AI systems are developed need to be better understood due to the major implications these technologies will have for society as a whole” [2]

Recent surveys show that the overwhelming majority of Americans (82%) believe that robots and/or AI should be carefully managed. This figure is comparable to survey results from EU respondents. [3] There is, however, a caveat when it comes to aligning these survey results with how we perceive or correlate intelligence with positive traits. Due to what is known as the ‘affect heuristic,’ we often rely on our emotions, rather than concrete information, when making decisions. This leads us to overwhelmingly associate intelligence with positive rather than negative traits, or to intuitively conclude that those with more intelligence possess other positive traits to a greater extent. Hence, even though we may show overall concern, we may still fall into the pitfall of miscalculating the possible costs associated with AI/AGI adoption.

Embedding values
S. Matthew Liao “argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.” [4]

In 1942, before the term AI/AGI was coined, science fiction writer Isaac Asimov proposed three laws of robotics in his short story “Runaround,” which can be seen as a precursor applicable to AI/AGI. According to his proposal, the First Law states: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Finally, the Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While this is a novel attempt, embedding virtue or principles/laws from a consequentialist perspective might fall short. It is argued “that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system.” [5]
Another heuristic approach might be to consider the four ethical principles proposed by the EU High-Level Expert Group on AI, which closely resemble the commonly accepted principles of bioethics from Beauchamp and Childress (2008): the principle of respect for autonomy, the principle of beneficence, the principle of nonmaleficence, and the principle of justice.

The proposed four principles by this group when it comes to AI [6] are:

I) Respect for human autonomy – AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. This essentially covers the bioethics principles of respect for autonomy and beneficence.

II) Prevention of harm (Principle of nonmaleficence) – AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.

III) Fairness (Principle of justice)–The substantive dimension implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatization

IV) Explicability -This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible –explainable to those directly and indirectly affected.
The principle of explicability seems to be a completely new addition to the previous framework which has significant implications. According to Luciano Floridi and Josh Cowls, “the addition of the principle of ‘explicability,’ incorporating both the epistemological sense of ‘intelligibility’ (as an answer to the question ‘how does it work?’) and in the ethical sense of ‘accountability’ (as an answer to the question ‘who is responsible for the way it works?’), is the crucial missing piece of the AI ethics jigsaw.”

Conclusion
The tradeoff between the value that AI promises and the values we need to embed within its decision-making process is both intriguing and challenging. What makes acts right or wrong has been debated for eons. While an optimal, objective value system is unlikely to emerge anytime soon, the various perspectives and frameworks proposed here can serve as a starting point from which to examine different viewpoints and strive toward a better solution.

References:
1. Cem Dilmegani, (2022). When will singularity happen? 995 experts’ opinions on AGI. https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
2. Darrell M. West and John R. Allen (2018). How artificial intelligence is transforming the world. https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/
3. Baobao Zhang and Allan Dafoe (2019) Artificial Intelligence: American Attitudes and Trends https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html
4. S. Matthew Liao (2020). Ethics of Artificial Intelligence https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14
5. Roman Yampolskiy (2019). Unpredictability of AI. https://www.researchgate.net/publication/333505954_Unpredictability_of_AI
6. Independent High-Level Expert Group on Artificial Intelligence (2019) https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
7. Luciano Floridi and Josh Cowls, (2019). A Unified Framework of Five Principles for AI in Society https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7