Are the Internet and Social Media Making Society More Polarized?
By Shirley Deng | July 19, 2019

The Problem

Misinformation and fake news are problems we work hard to combat today, as they tend to fuel conspiracy theories and plots, end in hatred, and make society more polarized. These problems only seem to be growing, with larger impacts and more serious consequences, given the ease of access to the Internet. Rising scandals and strengthening regulations have also helped bring these issues to the public’s attention.

The Factors

Yet society, the public, and government institutions are also placing the blame on the Internet and, more specifically, on social media. In October 2018, Peter Bergen and David Sterman wrote for New America that the main terrorist problem in the United States today is one of individuals radicalized by a diverse array of ideologies absorbed from the Internet [1]. Also, as the Stanford professor Francis Fukuyama points out, polarization can be caused and fostered by many factors. While Americans are sorting themselves out geographically, living in increasingly politically homogeneous neighborhoods, social media and the proliferation of media channels via the Internet and TV have also played a role by allowing people to communicate exclusively with people like themselves [2].

Unquestionably, the development of the Internet has enabled connections between people regardless of geographic barriers, fostered all kinds of conversations in no time, and given people access to content matching their own preferences, all thanks to social media and recommendation algorithms. However, psychology studies also point to many other factors that explain why conspiracy theories spread fast and are adopted at scale. Specifically, people who have a low level of analytic thinking, who overestimate the causal connection between co-occurring events, or who are anxious and feel powerless are more likely to turn to conspiracy theories [3].

A group of researchers from the Laboratory of Computational Social Science and other institutions ran an experiment on Facebook to compare the spreading patterns of scientific topics and conspiracy rumors [4]. The only difference between a science topic and a conspiracy rumor is whether it has been validated through some process. Their experiment and model produced very interesting findings: whether the content is a science topic or a conspiracy rumor, when people first receive such information, they tend to share it with their close friends first. In other words, most of the time, information is taken up by a friend who has the same profile (polarization), belonging to the same echo chamber. Users tend to aggregate in communities of interest, causing reinforcement and fostering confirmation bias, segregation, and polarization. Interestingly, because rumors run against the truth and are more easily picked up, they show a positive relation between lifetime and size, whereas a longer lifetime for a science topic does not correspond to a higher level of interest.
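
The echo-chamber dynamic described here can be illustrated with a toy simulation. To be clear, the model and all parameters below (two communities, a 90% chance that a re-share stays within the sharer's community) are illustrative assumptions, not the model from [4]:

```python
import random

random.seed(1)

# Toy echo-chamber model -- illustrative assumptions, not the model in [4].
# A message starts in community 0. At each re-share it reaches a friend in
# the same community with probability P_SAME, otherwise it crosses over.
P_SAME = 0.9

def cascade(hops=20):
    """Follow one message for `hops` re-shares; return the fraction of
    re-shares that landed in the origin community."""
    loc, in_origin = 0, 0
    for _ in range(hops):
        if random.random() > P_SAME:   # rare cross-community share
            loc = 1 - loc
        in_origin += (loc == 0)
    return in_origin / hops

# Average over many cascades: most exposure stays inside the origin bubble.
avg = sum(cascade() for _ in range(2000)) / 2000
print(f"on average {avg:.0%} of re-shares stay in the origin community")
```

Even this crude sketch shows the reinforcement effect: with mostly homophilous sharing, a claim circulates predominantly among like-minded users before it ever reaches the other community.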

Yes, the Internet and social media sites might have fueled conspiracy rumors, not because they are evil in nature, but because people leverage them to foster their own bubbled communities, sharing conspiracy-related information with other conspiracy believers rather than non-believers [5]. In this way, beliefs and biased misinformation are reinforced inside each of these communities, resulting in more strongly polarized beliefs.

The Potential Solutions

Before blaming the Internet and social media, it is more meaningful and insightful to look into human factors. Many psychology studies have given us the hint that people are more interested in exaggerated, distorted information that fits their existing theories. The people who share it are either firm believers, people who have doubts or are unsatisfied with their current situation, or people with lower levels of analytical thinking ability.

Social and Education

While social media might help build bubbles, bursting the bubble seems an obvious way to keep people who share the same profile from drifting toward extremes. Education might include helping people accept that people are different, and that life is not about debating who is right or wrong. Oftentimes, extremely polarized information is a mix of fact and rumor, which makes the situation more complex. It helps to expose people to different opinions and the corresponding facts and evidence; then we can encourage people to find common ground.

Fact-Checking

The psychology studies also suggest that when people have doubts, we should give them facts. The rising number of media outlets focused on fact-checking and political accountability reporting has played an important role in addressing the issue.


Source: Mary Meeker’s Internet Trend Report [6]

The growing controversy over the credibility of journalists has made fact-checking all the more important. As Alan Greenblatt puts it, “This is an incredibly important time to be a journalist. Never has the watchdog role been more important.” [7] During the 2016 presidential campaign, at least 6 million people flocked to a transcript of the debate that was fact-checked in real time by 20 NPR journalists [8]. Globally, partnerships among social media platforms, Internet companies, and science institutions also help build a safer and healthier online environment.

Technology and Product

While Internet companies and social media platforms should not take all the blame, they can take up some responsibility to act proactively and maintain safer, healthier online communities. For example, algorithm-driven solutions have been proposed: Google is developing a trustworthiness score that ranks query results by estimating the trustworthiness of a web source, building what it calls knowledge-based trust [9]. WeChat, the social media giant in China, built an in-app official fact-checking channel that labels rumors and stops them from spreading. WhatsApp, the messaging app that hosts a quarter of the global population, labels all forwarded messages and reminds its users to think twice before forwarding to others.
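
The knowledge-based trust idea can be caricatured in a few lines: score a source by the fraction of its extracted claims that agree with a set of reference facts. This is a deliberate simplification with made-up data; the actual system described in [9] jointly estimates fact correctness and source accuracy rather than assuming a known reference set:

```python
# Simplified sketch of knowledge-based trust: rate a source by how many of
# its extracted (subject, attribute) -> value claims match a reference set.
# Both the reference facts and the sample source below are invented examples.
REFERENCE = {
    ("earth", "shape"): "round",
    ("moon_landing", "year"): "1969",
}

def trust_score(claims):
    """Fraction of a source's checkable claims that agree with the reference."""
    checked = [(k, v) for k, v in claims.items() if k in REFERENCE]
    if not checked:
        return None  # nothing we can verify
    return sum(REFERENCE[k] == v for k, v in checked) / len(checked)

tabloid = {("earth", "shape"): "flat", ("moon_landing", "year"): "1969"}
print(trust_score(tabloid))  # 0.5: one of two checkable claims is correct
```

A search engine could then down-rank sources with persistently low scores, independent of their popularity.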

Legal

Last but not least, legal measures should be an important resort for fighting the bad actors in our communities, whether online or offline. Many conspiracy theories that aim to drive people in polarized directions may be initiated by people with ulterior motives. In this regard, beyond guidelines and policies, we should hold bad actors accountable for their actions. For example, in 2013 the Chinese government brought out tough measures to stop the spread of irresponsible rumors, threatening three years in jail if untrue posts online are widely reposted [10]. Although the measure initially drew many angry responses from Internet users in China, it did help contain the spread of rumors and minimize their harm.

Citations:
[1] “The Real Terrorist Threat in America”, https://www.newamerica.org/international-security/articles/real-terrorist-threat-america/
[2] “The Great Recession has influenced populist movements today, say Stanford scholars”, https://news.stanford.edu/2018/12/26/explaining-surge-populist-politics-movements-today/
[3] “The Psychology of Conspiracy Theories”, Association for Psychological Science, https://journals.sagepub.com/doi/pdf/10.1177/0963721417718261
[4] “The spreading of misinformation online”, PNAS January 19, 2016 113 (3) 554-559; first published January 4, 2016 https://doi.org/10.1073/pnas.1517441113
[5] “The internet fuels conspiracy theories – but not in the way you might imagine”, http://theconversation.com/the-internet-fuels-conspiracy-theories-but-not-in-the-way-you-might-imagine-98037
[6] “Internet Trend Report”, 2019, Mary Meeker
[7] “The Future of Fact-Checking: Moving ahead in political accountability journalism”, https://www.americanpressinstitute.org/publications/reports/white-papers/future-of-fact-checking/
[8] “NPR’s real-time fact-checking drew millions of readers”, https://www.poynter.org/fact-checking/2016/nprs-real-time-fact-checking-drew-millions-of-viewers/
[9] “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources”, http://www.vldb.org/pvldb/vol8/p938-dong.pdf
[10] “China threatens tough punishment for online rumor spreading”, https://www.reuters.com/article/us-china-internet/china-threatens-tough-punishment-for-online-rumor-spreading-idUSBRE9880CQ20130909?feedType=RSS&feedName=technologyNews

Un-unemployed
By Mads Bulkow-Macy | July 19, 2019

The unemployment rate is often used as shorthand for the state of the entire economy. When the Federal Reserve signaled an intent to lower interest rates last week, many news stories supplied context by pointing to recent jobs numbers, headlined by low unemployment. The 3.7% June unemployment rate is near a 50-year low, suggesting that the economy is very healthy indeed. Why, then, would the Fed try to give the economy a boost?

Unemployment is near a 50-year low.


Seasonally adjusted unemployment rate fluctuation since 1969. (Source: Bureau of Labor Statistics)

Jerome Powell’s specific calculus will continue to be the source of much speculation, but one issue that economic headlines would do well to consider is what a low unemployment rate really means. The categories of “employed” and “unemployed”, while at first glance complementary, actually leave out a significant portion of the population. To understand why, it is useful to examine the process by which the Bureau of Labor Statistics develops this statistic.

Since a monthly census of the entire population is infeasible, the statistic is based on a sample of about 60,000 households, weighted to be demographically representative for the categories of “age, sex, race, Hispanic ethnicity, and state of residence.” The employed/unemployed determination is made via an interview. Reporting employment places a person in the “employed” category. In order to be counted as “unemployed,” a person must:

    • Not currently have a job.
    • Be actively seeking work (in the last four weeks).
    • Be available to work, supposing they receive an offer.

Anyone who falls into neither the “employed” nor “unemployed” category is (in general) not counted in the labor force.
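
The three-way split above can be made concrete as a small classifier. This is a simplification of the actual CPS interview logic, and the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    # Hypothetical survey fields mirroring the simplified criteria above.
    has_job: bool
    searched_last_4_weeks: bool
    available_to_work: bool

def labor_status(r: Respondent) -> str:
    """Classify a respondent per the simplified BLS criteria listed above."""
    if r.has_job:
        return "employed"
    if r.searched_last_4_weeks and r.available_to_work:
        return "unemployed"
    return "not in labor force"   # e.g. discouraged or retraining workers

# A discouraged worker who stopped searching is invisible to the headline rate:
discouraged = Respondent(has_job=False, searched_last_4_weeks=False,
                         available_to_work=True)
print(labor_status(discouraged))  # not in labor force
```

Note the asymmetry: a single question moves someone into “employed,” while “unemployed” requires all three conditions, and everyone else simply drops out of the denominator.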

If we try to use the unemployment number to find, for instance, the number of households struggling to put food on the table, we will find it in many ways inadequate. First, employment in a single job does not necessarily mean that the person in question has sufficient means to support themselves or their family; thus, it cannot be used as an accurate predictor of the strain on social safety nets. Second, there is a large class of would-be workers who do not actively engage in job seeking. These workers may be temporarily unable to engage in such activities, or may have been searching for long enough that they have become discouraged. This group includes those whose skills have become irrelevant in a changing workforce and who are working to learn a new set of skills before attempting reentry. It also includes those who suspect that their attempts to seek employment will be met with discrimination or hostility. Note that this would disproportionately affect groups which have probable cause to be concerned about employment discrimination, such as members of the trans community and people of color. Ultimately, the category is likely to exclude a good portion of those who would consider themselves unemployed, and it fails to capture a variety of forms of personal economic distress. It also fails to capture broader economic inefficiencies, such as underemployed workers or workers who have been forced to seek retraining.

In the creation and definition of these categories, the narrower aim of the BLS seems to be to measure the availability of workers in comparison to the current workforce. Yet even here it falls short, given the potential for job seekers with irrelevant skills to be counted as available, while underemployed workers – even those actively seeking advancement – are counted as unavailable.

While as a coarse metric the unemployment rate still serves a purpose as an economic indicator, the category of “unemployment” does not represent what it purports to. It would be useful for everyone from journalists to policy makers to treat it with caution, and consider more closely the people and stories it fails to include.

References

https://www.bls.gov/cps/cps_htgm.htm

https://www.cnbc.com/2019/07/05/jobs-report-june-2019.html

https://www.nytimes.com/2019/07/19/upshot/economy-fed-powell-rate-cuts-analysis.html

“I’m not worried about my privacy online.” — A Millennial’s Perspective
By Anonymous | July 12, 2019

As I type this, my Word document draws a squiggly red line underneath the word “Millennial’s” in my title. How quick I am to ignore the suggestion, knowing well that this label has been made ubiquitous by this generation’s views, stances, and actions: from religion to politics, marriage to the economy.

Millennials are the generation born between 1981 and 1996, currently between the ages of 23 and 38. (Source: pewresearch.org)

Their perspectives have frustrated the Boomers above them and have quickly molded the world for the Gen Z’s below them. In light of recent privacy scandals in the technology industry and the prevalence of “fake news” in the media, millennials have not been ruffled. Do we chalk it up to apathy and ignorance? To their comfort with technology due to early exposure? To their abundant awareness and caution?

Many surveys have been conducted to understand the viewpoints of the varying generations with regard to security and privacy, and the root causes are still being understood. According to a 2015 study by the Media Insight Project, only 20% of Millennials worry about privacy in general all of the time, their biggest concern being that their identity or financial information will be stolen.

Survey reached 1,045 adults across the US, ages 18-34. (Source: americanpressinstitute.org)

As a part of that generation, which I would consider rather diverse, I can understand the different root causes for these perspectives.

One is that the Millennial generation was born in the digital age, when the internet was part of the everyday person’s life, and Millennials were the first true customers and fuel of social media. They haven’t known another world, so they feel a sense of normalcy in others having access to their information.

Another reason could be that Millennials have yet to see the repercussions of any security breaches. From Cambridge Analytica to the Marriott account breaches, they have understood that these events have occurred but have not yet been personally impacted by any of them.

On the other hand, Millennials feel that they are in control of their data, that they have chosen what to share online and they have actively accepted any risk of their data being leaked as they make the decision to engage with certain products or apps. They see no true harm in their data being released, except when it comes to financial information (as noted above). This is the idea that they have “nothing to hide” — a credo of the generation which feels the need to share everything.

Our data has always been around. Since before the internet, there has been data. We have just reached an age where we can capture, measure, and use it to enhance our world like never before. There has to come a point where you accept the world you have become a part of and your role in it. It is the world you grew up in, the innovation your data has lent itself to that has made your life easier and better. You start identifying tradeoffs: “If I don’t share my location with my Uber app, I will need to figure out the exact address of where I am and make sure I don’t misspell anything so that my Uber driver can pick me up at the right spot.” You have chosen your life, the conveniences, the benefits, over the seemingly small and insignificant pieces of privacy you are handing away. And as a millennial, I may be naive, but we have reached a point where there is no “acceptable” alternative.

Sources

https://www.americanpressinstitute.org/publications/reports/survey-research/millennials-news/single-page/

https://www.pewresearch.org/fact-tank/2019/01/17/where-millennials-end-and-generation-z-begins/ft_19-01-17_generations_2019/

https://www.forbes.com/sites/larryalton/2017/12/01/how-millennials-think-differently-about-online-security/#6571f7e7705f

https://www.proresource.com/2018/05/why-millennials-and-gen-z-worry-less-about-online-privacy/

https://news.gallup.com/businessjournal/192401/data-security-not-big-concern-millennials.aspx

https://www.forbes.com/sites/blakemorgan/2019/01/02/nownership-no-problem-an-updated-look-at-why-millennials-value-experiences-over-owning-things/#7acd2f5522fc

A Simple (July 2019) Online Privacy Tech Stack
By Eduard Gelman | July 12, 2019

As consumers become increasingly aware that their behavior is actively tracked by advertising firms and governments, and that this information is occasionally lost in high-profile, high-stakes leaks, many are beginning to modify their habits. Privacy and security concerns are likely at the forefront of the development and adoption of a slew of tools that individuals can use to make their online and increasingly visible “offline” behavior more private, or at least more secure. Since the toolsets and their adoption are in flux, this blog will survey the landscape as it exists in July 2019, reviewing the harms consumers are trying to avoid; it will take some liberties in picking “flagship” products to represent each technique and in omitting less-adopted technologies for the sake of concision.

The main privacy violations that these tools help consumers minimize fit neatly into Solove’s privacy taxonomy, with threats coming as “surveillance”, “identification”, and “secondary use” harms. Each product discussed in this blog post addresses one or more of these potential harms.

Surveillance harms may come from private or public entities who are able to read content exchanged between individuals. Just as the NSA has been excoriated for its wide-reaching surveillance procedures, Facebook recently began to block private and public messages depending on content. It is relatively clear that these products are well-intended, but they may carry alarming, negative consequences. Further harms may come when activity across disparate platforms and connection points can be identified as belonging to the same individual, leading directly to potential exploitation of individuals based on their history. Famously, an unaware father was alerted to his daughter’s pregnancy by a wayward advertisement. When sensitive information leaks and is used for identity theft, this quickly escalates into a security problem with serious financial and legal ramifications.

Online, individuals can take countermeasures to obfuscate or subvert this kind of tracking.
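
One concrete, low-effort countermeasure is stripping common tracking parameters from links before sharing them, which breaks cross-site attribution of the click. The parameter list below is a partial, illustrative selection, not an exhaustive catalog:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Well-known advertising/analytics query parameters (illustrative, not exhaustive).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def strip_tracking(url: str) -> str:
    """Remove known tracking parameters from a URL, keeping everything else."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://example.com/a?id=7&utm_source=ad&fbclid=xyz"))
# https://example.com/a?id=7
```

Several browser extensions apply essentially this transformation automatically on every copied or opened link.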

It is important to note that much of this data collection is actively used to improve and personalize products and services. Netflix might not be able to recommend a spectacular show that is perfectly suited to your tastes if it isn’t able to merge data from your behavior and ratings with those of other Netflix users. In fact, Netflix asked everyone to participate in this project, and paid handsomely for the result. Amazon might not be otherwise able to notice that you’ve looked at reviews of healthy toothpastes, and serve you an ad with a better price and much better convenience than your local supermarket. Nonetheless, some feel that ad agencies and governments building up profiles of individuals’ likes, dislikes, behaviors, vices, and other “personal” matters is a violation of privacy.

What do you think? Did this survey miss any topics or important products? Let us know in the comments.

Facial Recognition at U.S. Airports: The “Future” is Now
By Annie Lane | July 12, 2019

At many U.S. airports, passengers face long lines and multiple checkpoints for checking bags, obtaining boarding passes, screening carry-ons, and verifying identity to get to the gate. The Transportation Security Administration (TSA) hopes to streamline the process with facial recognition technology. As part of the Department of Homeland Security (DHS), the TSA is responsible for domestic and international air travel security in the US. The TSA estimates screening 965 million passengers annually, roughly 2.6 million passengers daily, and that number is growing at a rate of about 5% per year. Facial recognition systems promise to expedite the process and support the increasing passenger volume. Beyond security checkpoints, the TSA is partnering with airlines like JetBlue and Delta to achieve a “curb-to-gate” vision with photos granting access at each checkpoint.

While facial recognition technology could unlock efficiencies, it also creates new risks and privacy concerns. A massive database of passenger images must be collected, stored and protected. Passengers have a right to provide consent, especially since the accuracy of facial recognition technology is questionable. The application of facial recognition technology by government agencies is also under the bipartisan scrutiny of Congress.

How Facial Recognition is Applied in Airports

The TSA lays out its plan to increase security and improve the passenger experience through automation of manual screening tasks in its Biometric Screening Roadmap. Traditionally, a Transportation Security Officer at the checkpoint compares the presented photo ID to the face of the person standing in front of them and matches the name on the boarding pass. The TSA has started screening pilots with U.S. Customs and Border Protection (CBP) to evaluate facial recognition technology. In these pilots, at the Travel Document Checker point, a camera takes a picture of the passenger’s face. The photo is then transferred to the cloud, where an algorithm attempts to match it against the stored facial template database managed by CBP to identify the passenger. Upon finding a match, the passenger is permitted to proceed.
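
Under the hood, matching a photo to a stored facial template typically means comparing embedding vectors with a similarity threshold. The sketch below is a generic illustration of that technique, with tiny made-up vectors; CBP's actual algorithm, embedding size, and threshold are not public:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def match(probe, gallery, threshold=0.8):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for identity, template in gallery.items():
        sim = cosine(probe, template)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Tiny illustrative embeddings (real systems use e.g. 128- or 512-dim vectors).
gallery = {"traveler_A": [0.9, 0.1, 0.4], "traveler_B": [0.1, 0.95, 0.2]}
print(match([0.88, 0.12, 0.38], gallery))  # traveler_A
```

The choice of threshold is exactly where the accuracy trade-off discussed below lives: lowering it admits more impostors, raising it sends more legitimate passengers to manual screening.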

Storage and Security of Biometric Data
This system requires the storage of photos in a central database accessible to the federal agency. At the federal level, there is a collection of passport and visa photos. Applying this technology to domestic flights is a challenge because each state has its own database of driver’s license photos. However, a recent investigation by the Washington Post reveals that other federal agencies, including the FBI and ICE, have been accessing these state databases without the due process required by the 4th Amendment. While the TSA is not currently involved in this invasion of privacy, the practice violates the principle of consent and betrays the public’s trust in the government’s use of facial recognition.

No data system is fully secure against attacks, so the huge database this approach requires becomes a desirable target, and increased access to the database introduces additional vulnerability. This is a legitimate concern – this June, the CBP reported that Perceptics, a private contractor, was hacked. The hack compromised around 100,000 images of license plates and travelers collected at border checkpoints. The CBP placed the blame solely on Perceptics and chose to suspend the company rather than accept any responsibility. Based on this response, we cannot expect the CBP or TSA to accept accountability for this new database as they partner with private companies.

Consent and Opting-out
The TSA biometric roadmap highlights that all passengers will have the opportunity to opt out of biometric screening; they can instead be screened manually using traditional methods. While it is essential to offer the opportunity to consent and to provide alternatives, these alternatives may come at a cost. Manual screening will likely take longer, and there may be a social cost as strangers observe defiance of the “norm”. Two different passenger accounts confirm this and observe that opting out is not a clear choice in JetBlue and Delta’s boarding facial recognition systems. Even if a passenger opts out at the gate, their images have still been gathered in CBP’s cloud database as part of the flight gallery to be accessed by the private airline.

Accuracy of Facial Recognition
The system accuracy goal is correct identification of 96% of legitimate passengers. Even if this accuracy level is achieved, 1 in 25 passengers would require additional screening. While the majority of passengers may have a better experience, a subpopulation will face inconveniences. The National Institute of Standards and Technology’s April evaluation of various facial recognition algorithms found that accuracy was consistently lower for black and female subjects than for white and male subjects. This means a particular subpopulation will disproportionately bear the burden of the technology. While the prevalence of facial recognition is increasing, fairness has not been sufficiently addressed.
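
The “1 in 25” figure follows directly from the stated goal: a 96% identification rate leaves a 4% remainder of legitimate passengers sent to additional screening.

```python
accuracy_goal = 0.96                 # stated identification goal
miss_rate = 1 - accuracy_goal        # legitimate passengers needing manual screening
print(f"1 in {round(1 / miss_rate)} passengers")  # 1 in 25
```

At millions of screenings per day, even a 4% miss rate translates into tens of thousands of passengers diverted daily.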

Taking Action

While facial recognition technology is already deployed at some American airports, there are opportunities to put the brakes on the program. The DHS has standards for gathering public opinion and assessing privacy risks, including the creation of Privacy Impact Assessments. The House Oversight and Reform Committee and the House Homeland Security Committee both held hearings this summer on government use of facial recognition. We must hold our representatives accountable for protecting us against unnecessary invasions of privacy by government agencies.

Why the World Economic Forum’s Global Council on AI should focus on protecting children

Why the World Economic Forum’s Global Council on AI should focus on protecting children
By Ivan Fan | July 8, 2019

The advent of AI is a trend which will affect our children and our children’s children. In a world characterized by constant technological change, we must invest more in preparing future generations through improved governance of AI-interactions involving children, particularly in the context of areas such as education.

The newly created World Economic Forum (WEF) Global Council on Artificial Intelligence presents an opportunity to develop a global governance approach to AI, one that should include a strong treatment of governance issues around AI interactions with children. The forum is well positioned to do so; its Generation AI project has previously advanced important questions regarding uses of AI in relation to children.

The creation of the council comes in the wake of a recent trend of nations placing greater emphasis on cooperation with regard to overall AI governance. Multi-lateral efforts on the part of the EU and OECD, in particular, have sparked efforts toward developing a consensus around core AI issues in their respective memberships. Notably, the European Commission’s High Level Expert Group on AI recently released a set of ethical guidelines on AI and recommendations for trustworthy AI, formally addressing the need for governance around AI-interactions with children.

In a time when troubling terms such as “technological cold war” have cropped up, overcoming techno-nationalistic tensions and fostering collaboration between great powers has never been more important. The great challenge we face today is ensuring that people everywhere—in both developed and emerging countries—have sufficient access to AI resources. The best way to achieve this is by doubling down on opening up educational opportunities to youth everywhere, and the WEF is well positioned to provide critical, impartial leadership on this front.

Current talent pools are insufficient for taking advantage of the future range of occupations enabled by AI, and without systemic reform addressing rising inequality, societies will regress to a state in which opportunities are increasingly restricted to those able to access AI resources. National efforts such as the American AI Initiative, China’s New Generation Artificial Intelligence Development Plan, and the European Strategy on Artificial Intelligence all emphasize talent shortages as a significant impediment to implementing AI effectively.

This is why policies directed toward expanding the available talent pool are critical, and they should include redesigning education systems to prepare children with the skills needed to thrive in an AI-enabled world. Many countries agree that overhauling education systems to teach the necessary cognitive, socio-cultural, entrepreneurial, and innovation competencies is a primary means of addressing talent shortages. Expanding access to STEM opportunities for women is also of vital importance, and must improve at all ends of the talent pipeline—from early childhood education all the way to the C-suite.

In his landmark book “AI Superpowers”, Kai-Fu Lee, co-chair of the WEF’s new council on AI alongside Microsoft President Brad Smith, writes about how perception AI is revolutionizing China’s education system. I serve as a research and teaching assistant to faculty here at UC Berkeley’s School of Information, and I have seen first-hand how new technologies can revolutionize the delivery of education in my own graduate program. Instructors now have unprecedented access to rich profiles of students and to dashboards notifying them of a whole host of AI-enabled features including high-fidelity, real-time notifications about performance at the individual and macro-level.

In revamping our education systems to use AI and to teach AI, it is crucial that the safety and rights of children are strictly respected by those who would impact their learning and growth. AI HLEG provides some ideas for the Global AI Council to consider – it recommends protecting children from “…unsolicited monitoring, profiling and interest invested habitualisation and manipulation” and giving children a “clean slate” of any public or private storage of data related to them upon reaching a certain age. The WEF’s Global Council on AI represents an outstanding opportunity to consider and iterate upon such ideas in order to better protect and serve the needs of our children.

Saving the Future of Phone Calls – The Fight to Stop Robocalls
By Anonymous | July 5, 2019

“Hello, this is the IRS. I am calling to inform you of an urgent lawsuit! You are being sued for failing to pay taxes and we have a warrant out for your arrest. Please call this number back immediately!”

The familiar noisy background laced with thinly veiled threats is a message many are unfortunately accustomed to. Robocalls are a pervasive annoyance that has become the top consumer complaint to the Federal Trade Commission (FTC). And despite robocalls being prohibited by law, Americans were bombarded by a record-breaking 4.4 billion robocalls in June 2019. That’s 145 million calls per day, or 13 calls per person!


Figure 1: YouMail Robocall Index: https://robocallindex.com/

So, how do robocallers obtain phone numbers anyway? Most often, they acquire numbers from third-party data providers, who in turn acquired them through a variety of avenues that everyday users may not realize are collecting and selling their data. Some of these sources include:

  • Toll free (1-800) numbers that employ caller ID which can collect phone numbers
  • Entries into contests where users provided phone numbers in the process
  • Applications for credit
  • Contributions to charities where users provided phone numbers in the process

Methods of manipulating users into giving up personal information have also evolved over the years. Robocallers can disguise their numbers to appear as local telephone numbers with neighboring area codes, tricking users into picking up unfamiliar calls from outside their personal contacts. Robocallers disguising themselves as government agencies, municipal utility providers, or even hospital staff to scam users into providing personal information have grown to such an astonishing extent that lawmakers are now paying attention.
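
The "neighbor spoofing" trick described above can be made concrete with a small heuristic. The sketch below is hypothetical logic, not any carrier's actual filter: it simply scores how suspiciously "local" a caller ID looks relative to the callee's own number.

```python
# Hypothetical heuristic: "neighbor spoofing" forges a caller ID that shares
# the callee's area code (and often the exchange) so the call looks local.
# A call-blocking app might flag such near-matches for extra scrutiny.

def neighbor_spoof_score(caller: str, callee: str) -> int:
    """Return 0-2: how suspiciously 'local' the caller ID looks.

    Assumes 10-digit North American numbers such as '4155550123'
    (area code = first 3 digits, exchange = next 3).
    """
    score = 0
    if caller[:3] == callee[:3]:        # same area code as the callee
        score += 1
        if caller[3:6] == callee[3:6]:  # same exchange too: classic neighbor spoof
            score += 1
    return score

print(neighbor_spoof_score("4155550123", "4155559876"))  # 2: area code and exchange match
print(neighbor_spoof_score("4157770123", "4155559876"))  # 1: area code only
print(neighbor_spoof_score("2125550123", "4155559876"))  # 0: not local at all
```

A real filter would combine a signal like this with call volume, complaint databases, and call authentication status.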


Figure 2: FTC Phone Scams: https://www.consumer.ftc.gov/articles/0076-phone-scams

In November 2018, the Federal Communications Commission (FCC) called on carriers to develop an industry-wide standard to screen and block robocalls. In particular, the FCC urged carriers to adopt the SHAKEN (Signature-based Handling of Asserted information using toKENs) and STIR (Secure Telephone Identity Revisited) frameworks by the end of 2019. The SHAKEN/STIR frameworks employ secure digital certificates to validate that a call comes from its purported source and has not been spoofed. Each telephone service provider obtains a digital certificate from a certified authority, which enables called parties to verify the accuracy of the calling number.
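
The verify-the-token flow at the heart of SHAKEN/STIR can be sketched in a few lines. In the real frameworks, carriers sign "PASSporT" tokens with X.509 certificates using ES256 signatures; the sketch below substitutes a shared-secret HMAC purely to illustrate the sign-then-verify idea, and every name and number in it is hypothetical.

```python
import base64
import hashlib
import hmac
import json

# NOTE: real SHAKEN/STIR uses X.509 certificates and ES256-signed PASSporT
# tokens. This sketch uses a shared-secret HMAC as a stand-in purely to
# illustrate the sign-then-verify flow between carriers.

CARRIER_SECRET = b"originating-carrier-signing-key"  # stands in for the cert's private key

def sign_call(orig: str, dest: str, attestation: str) -> str:
    """Originating carrier: attest to the calling number and sign the claims."""
    claims = json.dumps({"orig": orig, "dest": dest, "attest": attestation},
                        sort_keys=True).encode()
    sig = hmac.new(CARRIER_SECRET, claims, hashlib.sha256).digest()
    return base64.b64encode(claims).decode() + "." + base64.b64encode(sig).decode()

def verify_call(token: str, claimed_orig: str) -> bool:
    """Terminating carrier: check the signature and the claimed caller ID."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(CARRIER_SECRET, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        return False  # token was forged or tampered with in transit
    return json.loads(claims)["orig"] == claimed_orig

token = sign_call("+14155550123", "+12125550199", "A")
print(verify_call(token, "+14155550123"))  # True: caller ID matches the signed claim
print(verify_call(token, "+19995550000"))  # False: spoofed caller ID
```

Because the claim is signed by the originating carrier, a spoofer cannot forge a matching token without that carrier's key.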

Furthermore, in January 2019, Senators Edward J. Markey and John Thune introduced the TRACED Act, which aims to require all telephone service providers, including internet-based ones such as Google Voice or Skype, to adopt similar call authentication technologies.

Together, private industry's collective drive and regulatory efforts will make it harder for the majority of robocallers to spam consumers at the touch of a button. Like spam emails, calls with suspicious or unverified origins can be traced and blocked en masse. And though these recent tactics are certainly a step in the right direction for consumer protection, some fear that historically underserved communities might not upgrade in time and risk being further isolated. Rural areas that often rely on older landlines will foreseeably struggle to adopt the new technology due to outdated equipment and the cost of implementation. Immigrant communities who make and receive international calls might be subjected to higher levels of discrimination, as international calls cannot yet be fully authenticated. Their calls may be more likely to be labeled as fraud, and robocall operatives will exploit this gap in technology to further target an already vulnerable population.

As the world continues to evolve with newer technology, it's important to think not only about who will benefit from these changes, but also about who will be left behind. In this case, as the FCC and private industry work together to protect consumers, they should also seek to mitigate the risk of scam and spam robocalls targeting vulnerable communities. One way to accomplish this is to work with other regulatory agencies, such as the Department of Housing and Urban Development, to create long-term, sustainable incentives for rural areas to modernize their infrastructure. Another is for private industries vested in international business to continue working closely with regulators to develop a global SHAKEN/STIR standard that protects an increasingly globalized world. After all, robocalls are hardly a uniquely American phenomenon. However, taking the lead in safeguarding the next generation can be a defining American trademark.


Bibliography

  • “How Do Robo-Callers and Telemarketers Have My Cell Number Anyway?” BBB, www.bbb.org/acadiana/news-events/news-releases/2017/04/how-do-robo-callers-and-telemarketers-have-my-cell-number-anyway/.
  • “How to Know It’s Really the IRS Calling or Knocking on Your Door.” Internal Revenue Service, www.irs.gov/newsroom/how-to-know-its-really-the-irs-calling-or-knocking-on-your-door
  • “Phone Scams.” Consumer Information, 3 May 2019, www.consumer.ftc.gov/articles/0076-phone-scams.
  • “Thune, Markey Reintroduce Bill to Crack Down on Illegal Robocall Scams.” Senator Ed Markey, 17 Jan. 2019, www.markey.senate.gov/news/press-releases/thune-markey-reintroduce-bill-to-crack-down-on-illegal-robocall-scams.
  • Vigdor, Neil. “Want the Robocalls to Stop? Congress Does, Too.” The New York Times, The New York Times, 20 June 2019, www.nytimes.com/2019/06/20/us/politics/stopping-robocalls.html.
  • “YouMail Robocall Index: June 2019 Nationwide Robocall Data.” Robocall Index, robocallindex.com/.

Audit organizations, trust and their relationship with ethical automated decision making

By Jay Venkata | July 5, 2019

The world runs on trust. Worldwide, billions of dollars are spent every year on developing and maintaining trust. Any transaction, whether supply-chain, finance, or healthcare related, requires trust between people, businesses, and institutions. As an individual consumer, you make decisions based on trust on an almost hourly basis, from trusting the safety of your meals to trusting the financial transactions handled by your bank. Audit organizations and regulators, both private and public, are responsible for maintaining this trust in society. I work at one of the Big 4 global audit firms, and at the core of what each of these firms does is providing assurance to businesses and governments. My company's mission statement is, in fact, 'Solving complex problems and building trust in society'. But what does trust look like in this digital world?


[Image 1]

Trust in the Digital World

In years past, audit organizations based their decisions primarily on financial ledgers, and the sources of those decisions could be narrowed down to a handful of executives or managers. Paper-based processes could only be tracked manually. The trend toward automating business processes, along with their associated accounting and strategic decisions, poses an interesting challenge for regulators. Work historically done by humans, such as deciding on credit card applications, can increasingly be automated, so there is now a need for alternative methods to develop the same level of trust. One solution is to independently audit the underlying algorithms. Audit firms may need technical staff who can work alongside functional experts to decode the algorithms and get to the root of any errors or biases that could affect decisions and outcomes. Hence accounting colleges across the world need to focus on the interdisciplinary skills that will make students more ready for their careers after graduation. Another challenge is that most businesses and governments do not seem willing to publish their algorithms, the data used to train them, or the inferences made from that data.
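
To make "auditing the algorithm" concrete, here is a minimal sketch of one check an auditor might run against a log of automated credit decisions: comparing approval rates across applicant groups. The data, names, and single parity metric are illustrative assumptions; a real audit would also examine error rates, proxy variables, and much more.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Fabricated decision log for illustration.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(approval_rates(log))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(log))      # 0.5 -- a gap this large would warrant a closer look
```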


[Image 2]

Auditing the algorithms

A longer-term solution that could be effective is for auditors to work alongside governments to create transparency and openness standards applicable to all organizations. Such guardrails already exist for financial statements and reporting, which are managed closely by the SEC in the US. The GDPR currently requires "appropriate mathematical or statistical procedures" to avoid or reduce the risk resulting from errors or inaccuracies. The French administration has also announced that algorithms developed for government use will be made publicly available, so that society at large can verify their correct application. There needs to be a similar push worldwide for rigorous standards on automated processes and decision making to create algorithmic accountability.


[Image 3]

Blockchain improves trust in transactions through distributed ledgers

This trend toward improving and automating trust could happen naturally as we move toward technologies like the Internet of Things and blockchain, which will create end-to-end traceability for products and transactions in a cheap and ubiquitous manner. The case for auditing algorithms, however, is clear. Audit firms and regulators need to stay one step ahead of the organizations they audit at all times, and that applies to the current scenario, where the stakes couldn't be higher: ensuring the integrity of data flow and decision making.
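
The tamper-evidence that makes blockchain attractive for audit trails can be shown with a toy hash-chained ledger: each entry commits to the hash of the previous one, so altering any historical record invalidates everything after it. This is a simplified single-machine sketch, with none of the distribution or consensus of a real blockchain.

```python
import hashlib
import json

def add_entry(chain, record):
    """Append a record that commits to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    """Recompute every hash; any edit to past records breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, {"from": "supplier", "to": "factory", "amount": 100})
add_entry(ledger, {"from": "factory", "to": "retailer", "amount": 100})
print(verify_chain(ledger))          # True: untouched history verifies
ledger[0]["record"]["amount"] = 999  # quietly rewrite history...
print(verify_chain(ledger))          # False: the tampering is detected
```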

Works Cited

  • Abraham, C., Sims, R. R., Daultrey, S., Buff, A., & Fealey, A. (2019, March 18). How Digital Trust Drives Culture Change. Retrieved July 7, 2019, from https://sloanreview.mit.edu/article/how-digital-trust-drives-culture-change/
  • O’Neil, C., & Schermer, B. (2018, July 30). Audit the algorithms that are ruling our lives. Retrieved July 7, 2019, from https://www.ft.com/content/879d96d6-93db-11e8-95f8-8640db9060a7
  • What is Blockchain Technology? (2018, September 11). Retrieved July 7, 2019, from https://www.cbinsights.com/research/what-is-blockchain-technology/
  • Likens, S., & Bramson-Boudreau, E. (2019, May 02). Blockchain Promises Trust, But Can It Deliver? Presented by PwC – MIT Technology Review. Retrieved July 7, 2019, from https://events.technologyreview.com/video/watch/pwc-blockchain-trust-likens/

The Role of Providers in Free Flowing data

By Hanna Rocks | July 5, 2019

In the past year, we have all experienced an inundation of emails with the subject line, “We’ve updated our privacy policies”. Recent actions by the Federal Trade Commission [1] (FTC) and the European Union (does the acronym ‘GDPR’ [2] ring any bells?) have prompted companies to make significant changes to the way they manage user information.

Despite all these changes to policies, and the headlines about them, the majority of users continue to scroll to the bottom, check "I accept", and move on without taking any time to consider what they are agreeing to. In fact, a study published in 2016 and updated in 2018 [3] found that 97% of users agreed to the privacy policy of a fictitious social media site, glossing over the clause requiring provision of the user's first-born child as payment for the service.

Why are we so willing to accept any terms to gain access to an online service? The answer likely lies within the value that users receive in exchange for clicking “I accept”. A Deloitte survey [4] of over 8,500 consumers across six countries found that 79% were willing to share their personal data, *so long as there was a clear benefit to the user.*

This stance aligns with many legal and ethical opinions on privacy. One of the earliest frameworks for assessing privacy harms, the Belmont Report [5], clearly states that an organization should weigh the potential for harm against the possible benefits to the individual. However, because the perceived benefit varies from user to user, this value is difficult to define or estimate.

Instagram, for example, may define "benefit" as providing specific, personalized ad content to each user. This has led to a heated debate about whether or not Instagram (or any other app on your phone) is "listening" to us [6]. I often wonder about this, but my worries have weakened over the years as I purchase a growing number of products found while scrolling through my feed. The value I receive from these products is worth whatever information Instagram has collected and analyzed. I don't know the details, so it is easy not to care.

Which brings us back to regulatory bodies like the FTC. The FTC is tasked with protecting ignorant or lazy consumers from corporations that are after unreasonable amounts of personal data. However, the question of what is deemed "reasonable" will continue to change as companies differentiate themselves by providing the ultimate personalized customer experience. More and more consumers have come to expect tailored recommendations from the services they use, whether that is the "perfect" new pair of shoes discovered on Instagram or a carefully calculated rate from an insurance company.

As we continue down the path of highly customized goods and services, it is critical that businesses appoint individuals, or even teams, to oversee what data the organization collects from its consumers and how that data is used. Doing so benefits both the consumer and the provider: these teams can monitor policies and compare them to existing or emerging regulatory frameworks. Businesses would do well to adopt a proactive approach, such as the "opt in" requirement under GDPR [7], that clearly addresses how they use customer data, rather than expecting consumers to read thousands of words of dreaded "legal-ese". If a business is willing to invest the time and resources, it *can* provide ultimate customization with ultimate protection. In the end, a business that brands itself as a leader in the responsible use of consumer data will surely attract more customers, offering both a personalized experience and peace of mind.

References:
[1]: https://www.vox.com/2019/1/23/18193314/facebook-ftc-fine-investigation-explained-privacy-agreement
[2]: https://www.techrepublic.com/article/the-eu-general-data-protection-regulation-gdpr-the-smart-persons-guide/
[3]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757465
[4]: https://www2.deloitte.com/insights/us/en/industry/retail-distribution/sharing-personal-information-consumer-privacy-concerns.html
[5]: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xassess
[6]: https://www.vox.com/the-goods/2018/12/28/18158968/facebook-microphone-tapping-recording-instagram-ads
[7]: https://www.cooleygo.com/gdpr-do-i-need-consent-to-process-personal-data/

Search Engines and Reporting Illegal Activity

By Anonymous | July 5, 2019

Search engines are among the most ubiquitous internet services and, as a result, generate some of the largest databases in the world. In 2012, Google handled over 1 trillion search queries [1]. However, a certain subset of those queries points to less benign intentions and may offer important clues for identifying terrorists, child abusers, human traffickers, and illegal drug traders. Should search engines be obligated to report the potential for such illegal activity so it can be followed up on?

First, we should examine how easy this would be from a technical standpoint. To find out, I went through the search logs of a major search engine to try to find a query I had input earlier.


Figure 1: Finding a user in search data: surprisingly easy.

Within a few minutes I had found my nonsensical query (Column 4), the URLs I clicked (Column 6), my rough location information (Column 11), and my IP address (Column 21), which is considered personally identifying under GDPR rules [2]. If a search company were to scan for other queries like "how to make a bomb", it could quickly collect the IP addresses of potential lawbreakers and cross-reference them with ISPs to figure out names and home addresses. The ease of this experiment suggests that the technical cost of implementing a filtering and reporting system would not be a blocking issue.
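
To underline how low that technical barrier is, here is a hypothetical sketch of the kind of filter a search company could run over delimited log lines like those described above, pulling out the query and IP columns for watchlisted phrases. The log format, column positions, and watchlist are all assumptions for illustration.

```python
# Hypothetical log layout (0-indexed): query in column 3, IP in column 20,
# mirroring the 1-indexed Columns 4 and 21 described in the text.
WATCHLIST = {"how to make a bomb", "pressure cooker bomb"}

def flag_queries(log_lines, query_col=3, ip_col=20):
    """Return (query, ip) pairs for log lines whose query is watchlisted."""
    flagged = []
    for line in log_lines:
        fields = line.split("\t")
        if fields[query_col].lower() in WATCHLIST:
            flagged.append((fields[query_col].lower(), fields[ip_col]))
    return flagged

# Fabricated 21-column log lines for illustration.
logs = ["\t".join(f"f{i}" for i in range(21)),  # benign row
        "\t".join(["f0", "f1", "f2", "pressure cooker bomb"] +
                  [f"f{i}" for i in range(4, 20)] + ["203.0.113.7"])]
print(flag_queries(logs))  # [('pressure cooker bomb', '203.0.113.7')]
```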

There is already some basis for the idea that search engines should be obligated to report suspected illegal activity, especially regarding child abuse. In the US, all states have mandatory reporter laws that require professionals who may be seen as trusted sources for children (such as teachers, health-care workers, and child care providers) to make a report if they suspect that a child has been abused or neglected [3]. A similar law requires companies to report child sexual abuse, which Google invoked in 2014 after finding explicit photos in a scan of a user's Gmail account [4]. Importantly, the law does not require companies to proactively monitor or affirmatively search for illegal content, even though that would be easy for companies to implement [5].


Figure 2: Google already searches through some user data and is mandated to report child pornography it finds.

The main drawbacks to search engines entering the space of reporting illegal activity are concerns over users' rights and expectations of privacy. In Gonzales v. Google Inc., the U.S. government subpoenaed Google and other search engine companies for queries and URLs for a study [6]. The court ruled that forcing Google to divulge query and URL data would diminish user trust in the company, because users have some expectation of privacy when using the platform. It is this expectation of privacy that keeps many users from migrating away from major search engines to smaller, privacy-conscious alternatives.

There are also issues with false positives: users may search terms jokingly or out of curiosity and trigger automated alerts, leading to unfounded accusations. This actually happened in 2013, when a counter-terrorism unit mistakenly searched a family's home after the husband searched "pressure cooker bomb" and "backpacks" out of curiosity at work [7]. If search engines stepped into this space, law enforcement agencies could be faced with an overwhelming number of false positives, wasting resources and generating bad press.
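
A back-of-the-envelope calculation shows why false positives would swamp law enforcement: when genuinely criminal queries are rare, even a very accurate detector produces mostly false alarms. Every number below is an assumption chosen purely for illustration.

```python
# Hypothetical base-rate arithmetic: all figures are illustrative assumptions.
searches_per_day = 1_000_000_000   # assumed daily query volume
criminal_rate = 1e-7               # assume 1 in 10 million queries is genuine
true_positive_rate = 0.99          # detector catches 99% of real cases
false_positive_rate = 0.001        # and misfires on 0.1% of innocent queries

real = searches_per_day * criminal_rate
innocent = searches_per_day - real
true_alarms = real * true_positive_rate
false_alarms = innocent * false_positive_rate

precision = true_alarms / (true_alarms + false_alarms)
print(f"alarms per day: {true_alarms + false_alarms:,.0f}")
print(f"share that are real: {precision:.4%}")  # a tiny fraction of one percent
```

Under these assumptions roughly a million alarms fire each day, and well under one in a thousand points at a real case.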


Figure 3: Searching for pressure cookers and backpacks could trigger a false positive and get your house searched.

Finally, there is the argument that searches are protected as free speech. In United States v. Gilberto Valle, the US Second Circuit Court of Appeals reversed the ruling that Valle's Google searches, which related to fantasizing about violent crimes, themselves constituted a crime. The court reasoned that no action was taken, thus no actual crime was committed, and that Valle was free to express his fantasies [8]. This ruling was supported by a number of First Amendment and Internet law scholars, and it points to search queries being protected as free speech, making it a potential PR disaster for search engine companies to hand them off to law enforcement or the government.

Privacy expectations and laws surrounding search engines remain hotly debated and contested today. The technologies that would allow search engines to find and report suspected illegality are easy to implement, and there is some movement in the legal sphere to cover cases like child abuse, where search engines are clearly ethically obligated to act. On the other side, there are genuine privacy and trust concerns, false-positive issues, and legal battles over the scope of free speech that will probably keep search engine companies sidelined. Most likely, these companies will wait for the legal contests to be settled before moving into any reporting of potential criminal activity unless explicitly forced to by law, so it is up to all of us to decide whether the tradeoff is worth it.

References

[1] https://archive.google.com/zeitgeist/2012/#the-world
[2] https://eugdprcompliant.com/personal-data/
[3] https://www.childwelfare.gov/pubPDFs/manda.pdf
[4] https://mashable.com/2014/08/04/google-gmail-child-porn-case/
[5] https://www.law.cornell.edu/uscode/text/18/2258A
[6] https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1689&context=btlj
[7] https://www.theguardian.com/commentisfree/2013/aug/01/government-tracking-google-searches
[8] https://www.eff.org/cases/united-states-v-gilberto-valle