“I’m not worried about my privacy online.” — A Millennial’s Perspective
By Anonymous | July 12, 2019

As I type this, my Word document highlights a squiggly red line underneath the word “Millennial’s” in my title. How quick I am to ignore the suggestion, knowing full well that this label has been made ubiquitous by this generation’s views, stances, and actions: from religion to politics, marriage to the economy.

Millennials are the generation born between 1981 and 1996, currently between the ages of 23 and 38. (Source: pewresearch.org)

Their perspectives have frustrated the Boomers above them and have quickly molded the world for the Gen Zs below them. In light of recent privacy scandals in the technology industry and the prevalence of “fake news” in the media, Millennials have not been ruffled. Do we chalk it up to apathy and ignorance? To their comfort with technology due to early exposure? To their abundant awareness and caution?

Many surveys have been conducted to understand how the generations differ in their views on security and privacy, and the root causes of those differences are still being explored. According to a 2015 study by the Media Insight Project, only 20% of Millennials worry about privacy in general all of the time, with their biggest concern being that their identity or financial information will be stolen.

The survey reached 1,045 adults across the US, ages 18-34. (Source: americanpressinstitute.org)

As a part of that generation, which I would consider rather diverse, I can understand the different root causes for these perspectives.

One is that the Millennial generation was born in the digital age, when the internet was part of the everyday person’s life, and Millennials were the first true customers and fuel of social media. They haven’t known another world, so others having access to their information feels normal.

Another reason could be that Millennials have yet to feel the repercussions of any security breaches. From Cambridge Analytica to the Marriott breach, they understand that these events have occurred but have not yet been personally impacted by any of them.

On the other hand, Millennials feel that they are in control of their data: they have chosen what to share online and have actively accepted any risk of their data being leaked when they decide to engage with certain products or apps. They see no true harm in their data being released, except when it comes to financial information (as noted above). This is the idea that they have “nothing to hide” — a credo of a generation that feels the need to share everything.

Our data has always been around. Since before the internet, there has been data. We have just reached an age where we can capture, measure, and use it to enhance our world like never before. There has to come a point where you accept the world you have become a part of and your role in it. It is the world you grew up in, and the innovation your data has fueled has made your life easier and better. You start identifying tradeoffs: “If I don’t share my location with my Uber app, I will need to figure out the exact address of where I am and make sure I don’t misspell anything so that my driver can pick me up at the right spot.” You have chosen your life, the conveniences, the benefits, over the seemingly small and insignificant pieces of privacy that you are handing away. And as a Millennial, I may be naive, but we have reached a point where there is no “acceptable” alternative.

Sources

https://www.americanpressinstitute.org/publications/reports/survey-research/millennials-news/single-page/

https://www.pewresearch.org/fact-tank/2019/01/17/where-millennials-end-and-generation-z-begins/ft_19-01-17_generations_2019/

https://www.forbes.com/sites/larryalton/2017/12/01/how-millennials-think-differently-about-online-security/#6571f7e7705f

https://www.proresource.com/2018/05/why-millennials-and-gen-z-worry-less-about-online-privacy/

https://news.gallup.com/businessjournal/192401/data-security-not-big-concern-millennials.aspx

https://www.forbes.com/sites/blakemorgan/2019/01/02/nownership-no-problem-an-updated-look-at-why-millennials-value-experiences-over-owning-things/#7acd2f5522fc

A Simple (July 2019) Online Privacy Tech Stack
By Eduard Gelman | July 12, 2019

As consumers become increasingly aware that their behavior is actively tracked by advertising firms and governments, and that this information is occasionally lost in high-profile, high-stakes leaks, many are beginning to modify their habits. Privacy and security concerns are likely at the forefront of the development and adoption of a slew of tools that individuals can use to make their online and increasingly visible “offline” behavior more private, or at least more secure. Since the toolsets and their adoption are in flux, this blog attempts to survey the landscape as it exists in July 2019 and review the harms that consumers are trying to avoid. It takes some liberties in picking “flagship” products to represent each technique and in omitting less-adopted technologies for the sake of concision.

The main privacy violations that these tools help consumers to minimize fit neatly into Solove’s Privacy Taxonomy, with threats arriving as “surveillance”, “identification”, and “secondary use” harms. Each product discussed in this blog post addresses one or more of these potential harms.

Surveillance harms may come from private or public entities who are able to read content exchanged between individuals. Just as the NSA has been excoriated for its wide-reaching surveillance programs, Facebook recently began to block private and public messages depending on their content. It is relatively clear that these products are well-intended, but they may carry alarming, negative consequences. Further harms may come when activity across disparate platforms and connection points can be identified as belonging to the same individual, leading directly to potential exploitation of individuals based on their history. Famously, an unaware father was recently alerted to his daughter’s pregnancy by a wayward advertisement. When sensitive information leaks and is used for identity theft, this quickly escalates into a security problem with serious financial and legal ramifications.

Online, there are countermeasures that individuals can take to obfuscate or subvert tracking.
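One of the most widely adopted countermeasures is blocking requests to known tracking domains, the approach taken by tracker-blocking browser extensions. The sketch below illustrates the idea; the domain list and URLs are invented examples, not a real blocklist.

```python
# Minimal sketch of the blocklist technique used by tracker blockers.
# The domains below are illustrative examples, not a real blocklist.
from urllib.parse import urlparse

TRACKER_DOMAINS = {"tracker.example.com", "ads.example.net"}  # hypothetical entries

def is_tracked_request(url: str) -> bool:
    """Return True if the request targets a known tracking domain (or a subdomain of one)."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

for url in ["https://news.example.org/article", "https://ads.example.net/pixel?uid=123"]:
    print(url, "-> blocked" if is_tracked_request(url) else "-> allowed")
```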

It is important to note that much of this data collection is actively used to improve and personalize products and services. Netflix might not be able to recommend a spectacular show that is perfectly suited to your tastes if it isn’t able to merge data from your behavior and ratings with those of other Netflix users. In fact, Netflix asked everyone to participate in this project, and paid handsomely for the result. Amazon might not be otherwise able to notice that you’ve looked at reviews of healthy toothpastes, and serve you an ad with a better price and much better convenience than your local supermarket. Nonetheless, some feel that ad agencies and governments building up profiles of individuals’ likes, dislikes, behaviors, vices, and other “personal” matters is a violation of privacy.

What do you think? Did this survey miss any topics or important products? Let us know in the comments.

Facial Recognition at U.S. Airports: The “Future” is Now
By Annie Lane | July 12, 2019

At many U.S. airports, passengers face long lines and multiple checkpoints for checking bags, obtaining boarding passes, screening carry-ons, and verifying identity to get to the gate. The Transportation Security Administration (TSA) hopes to streamline the process with facial recognition technology. As part of the Department of Homeland Security (DHS), the TSA is responsible for domestic and international air travel security in the US. The TSA estimates it screens 965 million passengers annually, or roughly 2.2 million passengers daily, and that number is growing at a rate of about 5% per year. Facial recognition systems promise to expedite the process and support the increasing passenger volume. Beyond security checkpoints, the TSA is partnering with airlines like JetBlue and Delta to achieve a “curb-to-gate” vision with photos granting access at each checkpoint.

While facial recognition technology could unlock efficiencies, it also creates new risks and privacy concerns. A massive database of passenger images must be collected, stored, and protected. Passengers have a right to give informed consent, especially since the accuracy of facial recognition technology is questionable. The application of facial recognition technology by government agencies is also under bipartisan scrutiny in Congress.

How Facial Recognition is Applied in Airports

The TSA lays out its plan to increase security and improve the passenger experience through automation of manual screening tasks in its Biometric Screening Roadmap. Traditionally, Transportation Security Officers at the checkpoint compare the presented photo ID to the face of the person standing in front of them and match it to the name on the boarding pass. The TSA has started pilot programs with U.S. Customs and Border Protection (CBP) to evaluate the facial recognition technology. In the pilots, a camera takes a picture of the passenger’s face at the Traveler Document Checker point. This photo is then transferred to the cloud, where an algorithm attempts to match it against the stored facial template database managed by CBP to identify the passenger. Upon finding a match, the passenger is permitted to proceed.
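At its core, that matching step can be thought of as comparing a numerical face “template” (an embedding) computed from the new photo against a gallery of stored templates and accepting the closest match above a threshold. The sketch below is a generic illustration of that idea using cosine similarity; it is not CBP’s actual algorithm, and the gallery, identities, and threshold are invented.

```python
# Generic illustration of gallery matching with face-embedding vectors.
# Not CBP's actual system; embeddings, identities, and threshold are invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_passenger(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return (identity, score) for the best gallery match, or (None, score) below threshold."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy example: three enrolled templates and one noisy capture of traveler_B.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ["traveler_A", "traveler_B", "traveler_C"]}
probe = gallery["traveler_B"] + rng.normal(scale=0.05, size=128)
print(match_passenger(probe, gallery))
```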

Storage and Security of Biometric Data
This system requires the storage of photos in a central database accessible to the federal agency. At the federal level, there is a collection of passport and visa photos. Applying this technology to domestic flights is a challenge because each state has its own database of driver’s license photos. However, a recent investigation by the Washington Post reveals that other federal agencies, including the FBI and ICE, have been accessing these state databases without the due process required by the Fourth Amendment. While the TSA is not currently involved in this invasion of privacy, the practice violates the principle of consent and betrays the public’s trust in the government’s use of facial recognition.

No data system is fully secure against attacks, so the huge database this system requires becomes a desirable target, and increased access to the database introduces additional vulnerability. This is a legitimate concern – this June, the CBP reported that Perceptics, a private contractor, was hacked. The hack compromised around 100,000 images of license plates and travelers collected at border checkpoints. The CBP placed blame solely on Perceptics and chose to suspend the company rather than take any responsibility. Based on this response, we cannot expect the CBP or TSA to accept accountability for this new database as they partner with private companies.

Consent and Opting-out
The TSA biometric roadmap highlights that all passengers will have the opportunity to opt out of biometric screening and be screened manually using traditional methods. While it is essential to offer the opportunity to consent and to provide alternatives, those alternatives may come at a cost. Manual screening will likely take longer, and there may be a social cost as strangers observe defiance of the “norm”. Two different passenger accounts confirm this and observe that opting out is not a clear choice for JetBlue and Delta’s boarding facial recognition systems. Even if a passenger opts out at the gate, their images have still been gathered in CBP’s cloud database as part of the flight gallery to be accessed by the private airline.

Accuracy of Facial Recognition
The system accuracy goal is correct identification of 96% of legitimate passengers. Even if this accuracy level is achieved, 1 in 25 passengers would require additional screening. While the majority of passengers may have a better experience, a subpopulation will face inconveniences. The National Institute of Standards and Technology’s April evaluation of various facial recognition algorithms found that accuracy was consistently lower for black and female subjects than for white and male subjects. This means a particular subpopulation will disproportionately bear the burden of the technology. While the prevalence of facial recognition is increasing, fairness has not been sufficiently addressed.
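To put that error rate in context, a back-of-the-envelope calculation using the passenger volumes cited earlier shows the scale involved; the assumption that every passenger would be processed this way is mine, not the TSA’s.

```python
# Back-of-the-envelope estimate at the stated 96% correct-identification goal.
# Assumes (hypothetically) that all passengers go through facial recognition.
daily_passengers = 2_200_000   # roughly 2.2 million passengers per day (TSA estimate)
correct_id_rate = 0.96         # stated system accuracy goal

flagged = daily_passengers * (1 - correct_id_rate)
print(f"~{flagged:,.0f} passengers per day sent to additional screening")  # ~88,000
print(f"that is 1 in {1 / (1 - correct_id_rate):.0f} passengers")          # 1 in 25
```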

Taking Action

While facial recognition technology is already deployed at some American airports, there are opportunities to put the brakes on the program. The DHS has standards for gathering public opinion and assessing privacy risks, including the creation of Privacy Impact Assessments. The House Oversight and Reform Committee and the House Homeland Security Committee both held hearings this summer on government use of facial recognition. We must hold our representatives accountable for protecting us against unnecessary invasions of privacy by government agencies.

Why the World Economic Forum’s Global Council on AI should focus on protecting children
By Ivan Fan | July 8, 2019

The advent of AI is a trend which will affect our children and our children’s children. In a world characterized by constant technological change, we must invest more in preparing future generations through improved governance of AI-interactions involving children, particularly in the context of areas such as education.

The newly created World Economic Forum (WEF) Global Council on Artificial Intelligence presents an opportunity to develop a global governance approach to AI, which should include a strong treatment of governance issues around AI interactions with children. The forum is well positioned to do so; its Generation AI project has previously advanced important questions regarding uses of AI in relation to children.

The creation of the council comes in the wake of a recent trend of nations placing greater emphasis on cooperation with regard to overall AI governance. Multi-lateral efforts on the part of the EU and OECD, in particular, have sparked efforts toward developing a consensus around core AI issues in their respective memberships. Notably, the European Commission’s High Level Expert Group on AI recently released a set of ethical guidelines on AI and recommendations for trustworthy AI, formally addressing the need for governance around AI-interactions with children.

In a time when troubling terms such as “technological cold war” have cropped up, overcoming techno-nationalistic tensions and fostering collaboration between great powers has never been more important. The great challenge we face today is ensuring that people everywhere—in both developed and emerging countries—have sufficient access to AI resources. The best way to achieve this is by doubling down on opening up access to educational opportunities for youth everywhere, and the WEF is well positioned to provide critical, impartial leadership on this front.

Current talent pools are insufficient for taking advantage of the future range of occupations enabled by AI, and without systemic reform addressing rising inequality, societies will regress to a state in which opportunities are increasingly restricted to those able to access AI resources. National efforts such as the American AI Initiative, China’s New Generation Artificial Intelligence Development Plan, and the European Strategy on Artificial Intelligence all emphasize talent shortages as a significant impediment to implementing AI effectively.

This is why policies directed toward the expansion of the available talent pool are critical, and they should include redesigning education systems to prepare children with the necessary skills to thrive in an AI-enabled world. Many countries agree that overhauling education systems to teach the necessary cognitive, socio-cultural, entrepreneurial, and innovation competencies is a primary means of addressing talent shortages. Expanding access to STEM opportunities for women is also of vital importance, and must improve at all ends of the talent pipeline—from early childhood education all the way to the C-suite.

In his landmark book “AI Superpowers”, Kai-Fu Lee, co-chair of the WEF’s new council on AI alongside Microsoft President Brad Smith, writes about how perception AI is revolutionizing China’s education system. I serve as a research and teaching assistant to faculty here at UC Berkeley’s School of Information, and I have seen first-hand how new technologies can revolutionize the delivery of education in my own graduate program. Instructors now have unprecedented access to rich profiles of students and to dashboards notifying them of a whole host of AI-enabled features including high-fidelity, real-time notifications about performance at the individual and macro-level.

In revamping our education systems to use AI and to teach AI, it is crucial that the safety and rights of children are strictly respected by those who would impact their learning and growth. AI HLEG provides some ideas for the Global AI Council to consider – it recommends protecting children from “…unsolicited monitoring, profiling and interest invested habitualisation and manipulation” and giving children a “clean slate” of any public or private storage of data related to them upon reaching a certain age. The WEF’s Global Council on AI represents an outstanding opportunity to consider and iterate upon such ideas in order to better protect and serve the needs of our children.

Saving the Future of Phone Calls – The Fight to Stop Robocalls
By Anonymous | July 5, 2019

“Hello, this is the IRS. I am calling to inform you of an urgent lawsuit! You are being sued for failing to pay taxes and we have a warrant out for your arrest. Please call this number back immediately!”

The familiar noisy background laced with thinly veiled threats is a message many are unfortunately accustomed to. Robocalls are a pervasive annoyance that has become the top consumer complaint to the Federal Trade Commission (FTC). And despite robocalls being prohibited by law, Americans were bombarded by a record-breaking 4.4 billion robocalls in June 2019. That’s 145 million calls per day, 13 calls per person!


Figure 1: YouMail Robocall Index: https://robocallindex.com/

So, how do robocallers obtain phone numbers anyway? Most often, they acquire numbers from third-party data providers, who in turn acquired them through a variety of avenues that everyday users may not realize are collecting and selling their data. Some of these sources include:

  • Toll free (1-800) numbers that employ caller ID which can collect phone numbers
  • Entries into contests where users provided phone numbers in the process
  • Applications for credit
  • Contributions to charities where users provided phone numbers in the process

Methods of manipulating users into giving up personal information have evolved over the years as well. Robocallers can disguise their numbers to appear as local telephone numbers with neighboring area codes, tricking users into picking up calls from outside their personal contacts. The variety of robocallers posing as government agencies, municipal utility providers, or even hospital staff to scam users into providing personal information has grown to such an astonishing extent that lawmakers are now paying attention.


Figure 2: FTC Phone Scams: https://www.consumer.ftc.gov/articles/0076-phone-scams

In November 2018, the Federal Communications Commission (FCC) called on carriers to develop an industry-wide standard to screen and block robocalls. In particular, the FCC urged carriers to adopt the SHAKEN (Signature-based Handling of Asserted information using toKENs) and STIR (Secure Telephone Identity Revisited) frameworks by the end of 2019. The SHAKEN/STIR framework employs secure digital certificates to validate that a call is from the purported source and has not been spoofed. Each telephone service provider obtains a digital certificate from a certificate authority, which enables called parties to verify the accuracy of the calling number.
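Conceptually, the originating carrier signs an assertion about the call with its private key, and the terminating carrier verifies that signature against the carrier’s certificate before trusting the displayed caller ID. The sketch below illustrates the sign-and-verify idea with a generic ECDSA key pair; it is a simplification for illustration, not the actual PASSporT token and certificate handling defined by the SHAKEN/STIR standards.

```python
# Simplified sign/verify flow analogous to SHAKEN/STIR call attestation.
# Real deployments use signed PASSporT tokens and carrier certificates;
# this is only a generic ECDSA illustration with made-up call fields.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Originating carrier's key pair (the public key would be published via a certificate).
carrier_private_key = ec.generate_private_key(ec.SECP256R1())
carrier_public_key = carrier_private_key.public_key()

# Assertion about the call (fields are illustrative).
attestation = json.dumps(
    {"orig": "+15551234567", "dest": "+15557654321", "iat": 1562284800}
).encode()

# Originating side: sign the attestation.
signature = carrier_private_key.sign(attestation, ec.ECDSA(hashes.SHA256()))

# Terminating side: verify before trusting the caller ID.
try:
    carrier_public_key.verify(signature, attestation, ec.ECDSA(hashes.SHA256()))
    print("Caller ID attestation verified")
except InvalidSignature:
    print("Possible spoofed call")
```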

Furthermore, in January 2019, Senators Edward J. Markey and John Thune introduced the TRACED Act, which aims to require all telephone service providers, including those operating over the internet such as Google Voice or Skype, to adopt similar call authentication technologies.

Together, the collective drive by private industry and regulators will make it harder for the majority of robocallers to spam consumers at the touch of a button. Like spam emails, calls with suspicious or unverified origins can be traced and blocked en masse. And though these recent tactics are certainly a step in the right direction for consumer protection, some fear that historically underserved communities might not upgrade in time and risk being further isolated. Rural areas that often rely on older landlines will foreseeably struggle to adopt the new technology due to outdated equipment and the cost to implement it. Immigrant communities who make and receive calls to and from foreign countries might be subjected to higher levels of discrimination, as international calls cannot yet be fully authenticated. This means their calls may be more likely to be labeled as fraud, and they may face increased targeting by robocall operators who exploit this gap in the technology to scam an already vulnerable population.

As the world continues to evolve with newer technology, it’s important to think not only about who will benefit from these changes, but also about who will be left behind. In this case, as the FCC and private industry work together to protect consumers, they should also seek to mitigate the risk of scam and spam robocalls targeting vulnerable communities. One way to accomplish this is to work with other regulatory agencies, such as the Department of Housing and Urban Development, to create long-term, sustainable incentives for rural areas to modernize their infrastructure. Another is for private companies with international business interests to continue working closely with regulators to develop a global SHAKEN/STIR standard that protects an increasingly globalized world. After all, robocalls are hardly a uniquely American phenomenon. However, taking the lead in safeguarding the next generation can be a defining American trademark.

 

Bibliography

  • “How Do Robo-Callers and Telemarketers Have My Cell Number Anyway?” BBB, www.bbb.org/acadiana/news-events/news-releases/2017/04/how-do-robo-callers-and-telemarketers-have-my-cell-number-anyway/.
  • “How to Know It’s Really the IRS Calling or Knocking on Your Door.” Internal Revenue Service, www.irs.gov/newsroom/how-to-know-its-really-the-irs-calling-or-knocking-on-your-door
  • “Phone Scams.” Consumer Information, 3 May 2019, www.consumer.ftc.gov/articles/0076-phone-scams.
  • “Thune, Markey Reintroduce Bill to Crack Down on Illegal Robocall Scams.” Senator Ed Markey, 17 Jan. 2019, www.markey.senate.gov/news/press-releases/thune-markey-reintroduce-bill-to-crack-down-on-illegal-robocall-scams.
  • Vigdor, Neil. “Want the Robocalls to Stop? Congress Does, Too.” The New York Times, The New York Times, 20 June 2019, www.nytimes.com/2019/06/20/us/politics/stopping-robocalls.html.
  • “YouMail Robocall Index: June 2019 Nationwide Robocall Data.” Robocall Index, robocallindex.com/.

Audit organizations, trust and their relationship with ethical automated decision making
By Jay Venkata | July 5, 2019

The world runs on trust. Worldwide, billions of dollars are spent every year on developing and maintaining trust. Any transaction, whether it is supply chain, finance, or healthcare related, requires trust between people, businesses, and entities. As an individual consumer, you make decisions based on trust on an almost hourly basis, from trusting the safety of your meals to trusting the financial transactions done through your bank. Audit organizations and regulators, both private and public, are responsible for maintaining this trust in society. I work at one of the Big 4 global audit firms. At the core of what each of these audit companies does is giving assurance to businesses and governments. My company’s mission statement is actually ‘Solving complex problems and building trust in society’. But what does trust look like in this digital world?



Trust in the Digital World

In years past, audit organizations based their decisions primarily on financial ledgers, and the sources of decisions could be narrowed down to a handful of executives or managers. Manual, paper-based processes could only be tracked manually. However, the trend toward automating business processes and their associated accounting and strategic decisions is creating an interesting challenge for regulators. Work historically done by humans, such as deciding on credit card applications, can increasingly be automated. There is now a need for alternative methods to develop the same level of trust again. One solution to this issue is to focus more on independently auditing the underlying algorithms. Audit firms may need technical staff who can work alongside the functional experts to decode the algorithms and get to the root of any errors or biases that could affect decisions and outcomes. Hence there is a need for accounting colleges across the world to focus on these interdisciplinary skills that will make students more ready for their careers after graduation. Another challenge is that most businesses and governments do not seem very willing to publish their algorithms, the data used to train them, or the inferences made from the data.
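As a concrete illustration of what “getting to the root of biases” can mean in practice, an auditor might start by comparing automated approval rates across demographic groups, for example using the common four-fifths (80%) rule of thumb. The sketch below uses made-up data and is not a method prescribed by any particular audit firm.

```python
# Minimal disparate-impact check on automated credit decisions (illustrative data).
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()  # four-fifths rule comparison

print(approval_rates)
print(f"Disparate impact ratio: {ratio:.2f}"
      + (" (below the 0.80 rule of thumb, flag for review)" if ratio < 0.8 else ""))
```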



Auditing the algorithms

A longer-term solution that could be effective is to work alongside governments to create transparency and openness standards that are applicable to all organizations. These types of guardrails already exist for financial statements and reporting, which are managed closely by the SEC in the US. The GDPR currently requires the use of “appropriate mathematical or statistical procedures” to avoid or reduce the risk resulting from errors or inaccuracies. The French administration has also announced that algorithms developed for government use will be made publicly available, so that society at large can verify their correct application. There needs to be a similar push worldwide for rigorous standards on automated processes and decision making to create algorithmic accountability.



Blockchain improves trust in transactions through distributed ledgers

This trend toward improving and automating trust could happen naturally as we move toward technologies like the Internet of Things and blockchain, which will create end-to-end traceability for products and transactions in a cheap and ubiquitous manner. However, the case for auditing algorithms is clear. Audit firms and regulators need to stay one step ahead of the organizations they audit at all times, and that applies to the current scenario, where the stakes couldn’t be higher: ensuring the integrity of data flows and decision making.

Works Cited

  • Abraham, C., Sims, R. R., Daultrey, S., Buff, A., & Fealey, A. (2019, March 18). How Digital Trust Drives Culture Change. Retrieved July 7, 2019, from https://sloanreview.mit.edu/article/how-digital-trust-drives-culture-change/
  • O’Neil, C., & Schermer, B. (2018, July 30). Audit the algorithms that are ruling our lives. Retrieved July 7, 2019, from https://www.ft.com/content/879d96d6-93db-11e8-95f8-8640db9060a7
  • What is Blockchain Technology? (2018, September 11). Retrieved July 7, 2019, from https://www.cbinsights.com/research/what-is-blockchain-technology/
  • Likens, S., & Bramson-Boudreau, E. (2019, May 02). Blockchain Promises Trust, But Can It Deliver? Presented by PwC – MIT Technology Review. Retrieved July 7, 2019, from https://events.technologyreview.com/video/watch/pwc-blockchain-trust-likens/

The Role of Providers in Free Flowing data
By Hanna Rocks | July 5, 2019

In the past year, we have all experienced an inundation of emails with the subject line, “We’ve updated our privacy policies”. Recent actions by the Federal Trade Commission [1] (FTC) and the European Union (does the acronym ‘GDPR’ [2] ring any bells?) have prompted companies to make significant changes to the way they manage user information.

Despite all these changes to policies and headlines about what they hold, the majority of users continue to scroll to the bottom, check “I accept”, and move on without taking any time to consider what they are agreeing to. In fact, a study published in 2016 and updated in 2018 [3] found that 97% of users agreed to the privacy policy of a fictitious social media site—glossing over the clause requiring provision of the user’s first-born child as payment for the service.

Why are we so willing to accept any terms to gain access to an online service? The answer likely lies in the value that users receive in exchange for clicking “I accept”. A Deloitte survey [4] of over 8,500 consumers across six countries found that 79% were willing to share their personal data, so long as there was a clear benefit to the user.

This stance aligns with many legal and ethical opinions on privacy protections. One of the earliest regulatory frameworks on privacy harms, the Belmont Report [5], clearly states that the organization should weigh the potential for harm against the possible benefits to the individual. However, because the perceived benefit will vary from user to user, this value is difficult to define or estimate.

Instagram, for example, may define “benefit” as providing specific, personalized ad content to each user. This has also led to a heated debate about whether or not Instagram (or any other app on your phone) is “listening” to us [6]. I often wonder about this, but my worries have weakened over the years as I consume a growing number of products found while scrolling through my feed. The value I receive from these products is worth whatever information Instagram has collected and analyzed. I don’t know the details, so it is easy not to care.

Which brings us back to regulatory bodies like the FTC. The FTC is tasked with protecting ignorant or lazy consumers from corporations who are after unreasonable amounts of personal data. However, the question of what is deemed “reasonable” will continue to change as companies differentiate themselves by providing the ultimate personalized customer experience. More and more consumers are coming to expect tailored recommendations from the services they use—whether that is the “perfect” new pair of shoes discovered on Instagram or a carefully calculated rate from your insurance company.

As we continue down the path of highly customized goods and services, it is critical that businesses appoint individuals, or even teams, to provide oversight of what data the organization collects from its consumers and how that data is used. Doing so will benefit both the consumer and the provider by monitoring policies and comparing them to existing or emerging regulatory frameworks. Businesses would do well to adopt a proactive approach, such as the “opt in” requirement under GDPR [7], that clearly addresses how they use customer data, rather than expecting consumers to read thousands of words written in the dreaded “legal-ese”. If a business is willing to invest time and resources, it can provide ultimate customization with ultimate protection. In the end, a business that brands itself as a leader in the responsible use of consumer data will surely attract more customers—offering both a personalized experience and peace of mind.

References:
[1]: https://www.vox.com/2019/1/23/18193314/facebook-ftc-fine-investigation-explained-privacy-agreement
[2]: https://www.techrepublic.com/article/the-eu-general-data-protection-regulation-gdpr-the-smart-persons-guide/
[3]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757465
[4]: https://www2.deloitte.com/insights/us/en/industry/retail-distribution/sharing-personal-information-consumer-privacy-concerns.html
[5]: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xassess
[6]: https://www.vox.com/the-goods/2018/12/28/18158968/facebook-microphone-tapping-recording-instagram-ads
[7]: https://www.cooleygo.com/gdpr-do-i-need-consent-to-process-personal-data/

Search Engines and Reporting Illegal Activity
By Anonymous | July 5, 2019

Search engines are one of the most ubiquitous internet services and generate some of the largest databases in the world as a result. In 2012, Google handled over 1 trillion search queries [1]. However, a certain subset of those queries points to less benign activity and may provide important clues to help identify terrorists, child abusers, human traffickers, and illegal drug traders. Should search engines be obligated to report the potential for such illegal activity so it can be followed up on?

First we should examine how easy this would be from a technical standpoint. To do this, I went through the search logs of a major search engine to try and find a query I had input earlier.


Figure 1: Finding a user in search data: surprisingly easy.

Within a few minutes I had my nonsensical query (Column 4), the URLs I clicked (Column 6), my rough location information (Column 11), and my IP address (Column 21), which is considered personally identifying under GDPR rules [2]. If a search company were to target other queries like “how to make a bomb”, it could quickly collect the IP addresses of potential lawbreakers and cross-reference them with ISPs to figure out names and home addresses. The ease of this experiment suggests that the technical cost of implementing a filtering and reporting system would not be a blocking issue.
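To make the point concrete, the sketch below shows the kind of filtering described above. The tab-separated layout and column positions (query in column 4, IP address in column 21) mirror the log described in this post, but the file name, watchlist terms, and any sample data are hypothetical.

```python
# Hypothetical sketch: scan a tab-separated search log for flagged queries.
# Column positions follow the log described above (1-indexed: query=4, IP=21).
import csv

FLAGGED_TERMS = ["how to make a bomb"]  # illustrative watchlist, not a real one

def flag_queries(log_path: str):
    """Return (ip_address, query) pairs whose query contains a flagged term."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 21:
                continue  # skip malformed rows
            query, ip_address = row[3], row[20]  # 0-indexed columns 4 and 21
            if any(term in query.lower() for term in FLAGGED_TERMS):
                hits.append((ip_address, query))
    return hits

# Usage (hypothetical file): flag_queries("search_log.tsv")
```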

There is already some basis for the idea that search engines should be obligated to report suspected illegal activity, especially regarding child abuse. In the US, all states have mandatory reporter laws that require professionals who may be seen as trusted sources for children (such as teachers, health-care workers, and child care providers) to make a report if they suspect that a child has been abused or neglected [3]. A similar law requires companies to report child sexual abuse, which Google did in 2014 after finding explicit photos in a scan of a user’s Gmail account [4]. The law, importantly, does not require companies to be proactive and monitor or affirmatively search for illegal content, even though that would be easy for companies to implement [5].


Figure 2: Google already searches through some user data and is mandated to report child pornography it finds.

The main drawbacks to search engines entering the space of reporting illegal activity are the concerns over a user’s rights and expectations of privacy. In Gonzales v. Google Inc., the U.S. government subpoenaed Google and other search engine companies for queries and URLs for a study [6]. The court ruled that forcing Google to divulge query and URL data would diminish user trust in the company, because users have some expectation of privacy when using the platform. It is this expectation of privacy that keeps many users from migrating away from major search engines to smaller, privacy-conscious alternatives.

There are also issues with false positives, where users may search terms jokingly or out of curiosity and trigger automated alerts leading to unfounded accusations. This actually happened in 2013 when a counter-terrorism unit mistakenly searched a family’s home after the husband searched “pressure cooker bomb” and “backpacks” out of curiosity at work [7]. If search engines stepped into this space, law enforcement agencies may be faced with an overwhelming number of false positive cases which would waste resources and cause bad press.


Figure 3: Searching for pressure cookers and backpacks could trigger a false positive and get your house searched.

Finally, there is the argument that searches are protected as free speech. In United States v. Gilberto Valle, the US Second Circuit Court of Appeals reversed the ruling that Valle’s Google searches, which were related to fantasizing about violent crimes, themselves constituted a crime. The court reasoned that no action was taken, thus no actual crime was committed, and that Valle was free to express his fantasies [8]. This ruling was supported by a number of First Amendment and Internet law scholars and points to search queries being protected as free speech, making it more of a potential PR disaster for search engine companies to hand query data off to law enforcement or the government.

Privacy expectations and laws surrounding search engines are still hotly debated and contested today. The technologies that would allow search engines to find and report suspected cases of illegality are easy to implement, and there is some movement in the legal sphere to cover cases like child abuse, where it is clear that search engines are ethically obligated to act. On the other side, there are genuine privacy and trust concerns, false-positive issues, and legal battles over the coverage of free speech that will probably keep search engine companies sidelined. Most likely, these companies will wait for the legal contests to be settled before moving into any reporting of potential criminal activity unless explicitly forced to by law, so it is up to all of us to decide whether the tradeoff is worth it.

References

[1] https://archive.google.com/zeitgeist/2012/#the-world
[2] https://eugdprcompliant.com/personal-data/
[3] https://www.childwelfare.gov/pubPDFs/manda.pdf
[4] https://mashable.com/2014/08/04/google-gmail-child-porn-case/
[5] https://www.law.cornell.edu/uscode/text/18/2258A
[6] https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1689&context=btlj
[7] https://www.theguardian.com/commentisfree/2013/aug/01/government-tracking-google-searches
[8] https://www.eff.org/cases/united-states-v-gilberto-valle

HR Analytics – An ethical dilemma?
By Christoph Jentzsch | June 28, 2019

In the era of “Big Data Analytics”, the “War for Talent”, and “Demographic Change”, the keywords HR Analytics and People Analytics seem to be ubiquitous in the realm of HR departments. Many Human Resources departments are ramping up their skills in analytical technologies to exploit the golden nuggets of data they hold about their own workforce. But what does HR Analytics even mean?

Mick Collins, Global Vice President, Workforce Analytics & Planning Solution Strategy at SAP SuccessFactors, sets the context for HR Analytics as follows:

“The role of HR – through the management of an organization’s human capital assets – is to impact four principal outcomes: (a) generating revenue, (b) minimizing expenses, (c) mitigating risks, and (d) executing strategic plans. HR analytics is a methodology for creating insights on how investments in human capital assets contribute to the success of those four outcomes. This is done by applying statistical methods to integrated HR, talent management, financial, and operational data.” (Lalwani, 2019)

In summary, HR Analytics is a data-driven approach to HR management.

Figure 1: Data-Driven Decision Making in HR – Source: https://www.analyticsinhr.com/blog/what-is-hr-analytics/

So, what’s the big fuss about it? Well, the example of Marketing Analytics, which had a revolutionary impact on the field of marketing, suggests that HR Analytics will change the way HR departments operate tomorrow. A more data-driven approach enables HR to:

  • …make better decisions using data, instead of relying on a manager’s gut feeling
  • …move from an operational partner to a tactical, or even strategic partner (Vulpen, 2019)
  • …attract more talent, by improving the hiring processes and the employee experience
  • …continuously improve workforce planning through informed talent development (MicroStrategy Incorporated, 2019)

However, the increased availability of new findings and information, as well as the ongoing digitalization that unlocks new opportunities to understand and interpret that information, also raises new concerns. The most critical challenges are:

  • Having employees in HR functions with the right skillset to gather, manage, and report on the data
  • Confidence in data quality as well as cleansing and interpretation problems
  • Data privacy and compliance risks
  • Ethical and moral concerns about using the data

The latter two aspects especially warrant investigation, and some guidance is given below on how to overcome those challenges. It is important to understand, first, that corporate organizations collect data about their employees at a very detailed level. Theoretically, they could reconcile findings back down to the level of an individual employee. However, legal requirements do not always allow this, and with the implementation of GDPR, organizations are now forced to look at employee data and privacy in the same way they do for customers.

Secondly, it is crucial to understand that HR Analytics uses a range of statistical techniques that are incredibly valuable at the population level but can be problematic when used to make decisions about an individual. (Croswell, 2019)

Figure 2: HRForecast Recruiting Analytics Dashboard source: https://www.hrforecast.de/portfolio-item/smartinsights/

This is confirmed by Florian Fleischmann, CEO of HR Analytics provider HRForecast: “The real lever of HR Analytics is not taking place on an individual employee level; it is instead happening on a corporate macro level, when organizational processes, such as the hiring procedure or overarching talent programs, are being improved.” (Fleischmann, 2019). Mr. Fleischmann is right: managing people on an individual level is still a person-to-person relationship between employee and manager, which requires no Big Data algorithm. Take the worst-case scenario, job cuts: if low performers are to be identified, line managers simply have to be interviewed – there is no need for a Big Data solution.

Analytics at the individual level does not add value and can even create harm, as Mr. Fleischmann points out: “According to our experience the application of AI technology to predict for example, employee attrition rates on an individual basis can create more harm than benefit. It can cause a self-fulfilling prophecy, as the manager believes to know what team member is subject to leave and changes his behavior accordingly in a negative way”. (Fleischmann, 2019)

For that reason, HRForecast advocates two paradigms for the ethical use and application of HR Analytics:

  1. Information on an employee level is only provided to the individual employee and is not shared with anyone else. “This empowers the employee to stay performant as he or she can analyze for example his or her own skill set against a benchmark of skills that are required in the future”, confirms Fleischmann.
  2. Information is shared with management only at an aggregated level; a minimal sketch of this kind of small-group suppression follows the list. The concept of “derived privacy” applies in this context, as it allows enough insight to draw conclusions at a larger scale while protecting the individual employee. Given the legal regulations, data at that level needs to be fully anonymized, and groups smaller than 5 employees are excluded from any analysis. Fleischmann adds: “The implementation of GDPR did not affect HRForecast, as we applied those standards already pre-GDPR. Our company stands to a high ethical code of conduct, which is a key element if you want to be a successful player in the field of HR Analytics.”
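As an illustration of the second paradigm, the sketch below aggregates a metric by department and suppresses any group below the five-employee threshold mentioned above. The data, column names, and metric are invented for illustration; this is not HRForecast’s actual implementation.

```python
# Minimal sketch of reporting an HR metric only at an aggregated level,
# suppressing any group with fewer than 5 employees. Data and columns are invented.
import pandas as pd

employees = pd.DataFrame({
    "department":     ["Sales"] * 6 + ["Finance"] * 3,
    "attrition_risk": [0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.8, 0.6, 0.7],
})

MIN_GROUP_SIZE = 5

report = (
    employees.groupby("department")["attrition_risk"]
    .agg(headcount="count", avg_risk="mean")
    .reset_index()
)
# Suppress small groups so individuals cannot be singled out.
report = report[report["headcount"] >= MIN_GROUP_SIZE]
print(report)  # only the Sales department (6 employees) survives suppression
```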

In conclusion, the application of Big Data Analytics or AI in the context of Human Resources can create a huge leap in organizational transparency. However, this newly won information can pose major privacy risks for employees if not treated in a reasonable fashion. To mitigate the risk of abusing the increased level of transparency, an ethical code of conduct like the one advocated by third-party expert HRForecast needs to be applied in modern organizations. Thus, Big Data in HR can lead to an ethical dilemma, but it does not have to.

Bibliography

  • Croswell, A. (2019, June 25). Why we must rethink ethics in HR analytics. Retrieved from Why we must rethink ethics in HR analytics: https://www.cultureamp.com/blog/david-green-is-right-we-must-rethink-ethics-in-hr
  • Fleischmann, F. (2019, June 25). CEO HRForecast. (C. Jentzsch, Interviewer)
  • Lalwani, P. (2019, April 29). What Is HR Analytics? Definition, Importance, Key Metrics, Data Requirements, and Implementation. Retrieved from What Is HR Analytics? Definition, Importance, Key Metrics, Data Requirements, and Implementation: https://www.hrtechnologist.com/articles/hr-analytics/what-is-hr-analytics/
  • MicroStrategy Incorporated. (2019). HR Analytics – Everything You Need to Know. Retrieved from HR Analytics – Everything You Need to Know: https://www.microstrategy.com/us/resources/introductory-guides/hr-analytics-everything-you-need-to-know
  • Vulpen, E. v. (2019). HR Analytics. Retrieved from What is HR Analytics?: https://www.analyticsinhr.com/blog/what-is-hr-analytics/

Privacy Movements within the Tech Industry
By Jill Rosok | June 24, 2019

An increasing number of people have become fed up with major tech companies and are choosing to divest from companies that violate their ethical standards. There’s been #deleteuber, #deletefacebook and other similar boycotts of big tech companies that violated consumer trust.

In particular, five companies have an outsized influence on the technology industry and the economy in general: Amazon, Apple, Facebook, Google (now a unit of parent company Alphabet), and Microsoft. Among numerous scandals, Facebook has insufficiently protected user data, leading to Russian interference and the Cambridge Analytica controversy. Amazon and Apple have been chastised for unsafe working conditions in their factories. Google contracts with the military and collects a massive amount of data on its users. Microsoft has been repeatedly involved in antitrust suits. Those who have attempted to eliminate the big five tech companies from their lives have found it nearly impossible. It is one thing to delete your Facebook and Instagram accounts and stop ordering packages from Amazon. However, eliminating the big five from your life is much more complicated than that. The vast majority of smartphones have hardware and/or software built by Apple and Google, and Amazon’s services run the backend of a huge number of websites, meaning that stepping away from these companies would essentially mean giving up the internet.

For a limited few, it might be possible to simply log off and never come back, but most people rely on tech companies in some capacity to provide them basic access to work, connection to friends and family, and the internet in general. As the big five acquire more and more services that encompass the entirety of people’s lives it is extremely difficult for an individual to participate in a meaningful boycott of all five companies.

In light of the dominance of these five companies, to what extent is the government responsible for some kind of intervention? And if the government were to intervene, what might that look like? Antitrust legislation is intended to protect the consumer from monopoly power. Historically, the government’s focus has been ensuring that companies are not behaving in ways that lead consumers to pay higher prices for goods and services. However, this doesn’t protect users where no cash is exchanged, as in the case of Facebook. It’s a great example of the classic adage: if you’re not paying for a service, you are the product. It also does not hold up in circumstances where venture backing or other product lines enable companies to artificially deflate prices below cost for years until all other competitors are wiped off the map. Senator and presidential candidate Elizabeth Warren recently proposed breaking up big tech. While her piece was received more as a symbolic statement than as a fully formed plan to regulate the tech industry, aspects of it appear to have resonated strongly with the general public. In particular, the idea that mergers and acquisitions by large companies should undergo much deeper scrutiny, and perhaps be banned entirely, was well received by analysts.

As with most complex problems in life, there are no easy solutions that simultaneously protect consumers and maximize technological innovation. However, it is vital to avoid becoming paralyzed by the scale of the problem. Rather, as individuals, we must remain informed and put pressure on our political leaders to enact meaningful legislation to ensure the tech industry does not violate the basic rights of consumers.