Privacy Reckoning Comes For Healthcare
By Anonymous | March 3, 2019

The health insurance industry (“payors”), compared to other industries, is relatively late to the game in utilizing data science and advanced analytics in its core business. While actuarial science has long been at the heart of pricing and risk management in insurance, not only are actuarial methods years behind the latest advances in applied statistics and data science, but the use of these advanced analytical tools has been limited largely to underwriting and risk management.

But times are a-changing. Many leading payors are investing in data science capabilities in applications ranging from the traditional stats-heavy domain of underwriting to a range of other enterprise functions, including marketing, care management, member engagement, and beyond. With this larger foray into data science have come attendant concerns about data privacy. ProPublica and NPR teamed up last year to publish the results of an investigation into privacy concerns related to the booming industry of using aggregated personal data in healthcare applications (link); while sometimes speculative and short on details, the report raises skin-crawling possibilities of how this can go horribly wrong. Given the sensitivity of healthcare generally and the alarming scope of data collection already underway, it’s high time for the healthcare industry to take a stand on how it intends to use this data and to confront the privacy issues that are top of mind for consumers. Let’s explore a few issues in particular.

Data usage: “Can they do that?”

One issue raised in the article, and one that affects any person with a health insurance plan, is how personal data will actually be used. There are a number of protections in place that prevent some of the more egregious imagined uses of personal data, the most important being that insurance companies cannot price-discriminate on individual plans (though insurers can charge different prices for different plan tiers in different geographies). Beyond this, however, one could imagine other uses that might raise concerns about expectations of data privacy, including: using personal data in pricing group plans (insurance plans fully underwritten by the payor and offered to employers with <500 employees); outreach to individuals that may inadvertently reveal personal medical information to others (consider the infamous Target incident, in which a father learned of his daughter’s then-unannounced pregnancy through pregnancy-related mailers sent by Target); and, in a political environment where the laws governing healthcare pricing are in flux, individualized pricing that takes into account data collected from social media. Data usage is something payors need to be transparent about with consumers if they hope to earn and maintain the already-mercurial trust of their members…and ultimately voters.

Data provenance: “Do I really subscribe to ‘Guns & Ammo’?”

Payors are demonstrably making significant investments in personal data, sourced from a cottage industry of providers that aggregate data using a variety of proprietary methods. Given the potential uses laid out above, consider the following: what if major decisions about the healthcare offered to consumers are based on data that is factually incorrect? Data aggregation firms sometimes resort to imputing values for people with missing data points, so that if all my neighbors subscribe to Guns & Ammo magazine, for instance, a firm may assume I am also a subscriber. Setting aside what my hypothetical Guns & Ammo subscription might be taken to mean, what is the impact of erroneous data on important healthcare decisions? How do we protect consumers from becoming victims of decisions based on erroneous data that is out of their control? A standard is required here to ensure decisions are not made on the basis of inaccurate data.
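To make the imputation concern concrete, here is a minimal sketch, in Python, of how a neighborhood-based imputation like the one described above might work. The data, field names, and method are invented for illustration; no actual data broker’s approach is implied.

```python
# Hypothetical sketch of neighborhood-based imputation. All data and field
# names are invented; no real vendor's method is implied.
from collections import Counter

households = [
    {"zip": "94110", "guns_and_ammo_subscriber": True},
    {"zip": "94110", "guns_and_ammo_subscriber": True},
    {"zip": "94110", "guns_and_ammo_subscriber": False},
    {"zip": "94110", "guns_and_ammo_subscriber": None},   # unknown: me
]

def impute_by_neighborhood(records, key):
    """Fill missing values with the most common value among same-ZIP records."""
    for rec in records:
        if rec[key] is None:
            neighbors = [r[key] for r in records
                         if r["zip"] == rec["zip"] and r[key] is not None]
            if neighbors:
                rec[key] = Counter(neighbors).most_common(1)[0][0]
    return records

impute_by_neighborhood(households, "guns_and_ammo_subscriber")
print(households[-1])   # {'zip': '94110', 'guns_and_ammo_subscriber': True}
```

Once a guessed value like this lands in a downstream pricing or outreach decision, it is indistinguishable from a reported fact, which is exactly the accuracy problem at issue.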

Conclusion: Miles to go before we sleep on this issue

ProPublica and NPR merely scratched the surface of potential data privacy issues that can arise from questionable data usage, data inaccuracy, and other issues not addressed in the article. As the healthcare industry continues to invest in growing its data science capabilities — which, by the way, have the potential to help millions of people — it will be critical for payors to take a clear stand by articulating a data privacy policy with, at the very least, well-understood standards of data usage and data accuracy.

—————————

IMAGE SOURCES: both are examples of what a ‘personal dossier’ of an individual’s health risk might look like, including personal data. Both come from the main ProPublica article mentioned above (“Health Insurers Are Vacuuming Up Details About You – And It Could Raise Your Rates”, by Marshall Allen, July 17, 2018), found here: https://www.propublica.org/article/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates

Both images are credited to Justin Volz, special to ProPublica

Contextual Violations of Privacy
By Anonymous | March 3, 2019

Facebook’s data processing practices are once again in the headlines (shocker, right?). One recent outrage surrounds the way data from unrelated mobile applications is shared with the social media platform in order to improve the efficacy of ad targeting on Facebook. The practice has raised serious questions about end-user privacy harm and has prompted the New York Department of Financial Services to request documents from Facebook. In this post we will review some of the evidence concerning third-party applications’ data sharing with Facebook, and then discuss a useful lens for evaluating the perceived privacy harm. Perhaps we will also offer some insight into alternative norms by which we might build a web that is less of a commercial, surveillance-oriented tool for technology platforms.

The Wall Street Journal recently investigated 70 of the top Apple iOS 11 apps and found that 11 of them (16%) shared sensitive, user-submitted data with Facebook in order to enhance the ad-targeting effectiveness of Facebook’s platform. The health and fitness data provided by the culprit apps included very intimate details such as ovulation tracking, sexual activity logged as “exercise”, alcohol consumption, heart rate, and other sensitive information. These popular apps use a Facebook feature called “App Events”, which feeds Facebook’s ad-targeting tools. In essence, the feature enables companies to track users across platforms and improve the effectiveness of their ad targeting.

A separate, earlier study, conducted by Privacy International on devices running Android 8.1 (Oreo), provides a more technical discussion and details of the data sharing. In tests of 34 common apps, it found that 61% automatically transferred data to Facebook the moment a user opened the application, regardless of whether the user had a Facebook account. The data includes the specific application accessed, events such as the opening and closing of the application, device-specific information, the user’s suspected location based on language and time zone settings, and the unique Google advertising ID (AAID) provided by the Google Play Store. Some applications went further: the travel app Kayak, for example, sent detailed search behavior of end users to Facebook.
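For illustration only, here is a mock-up in Python of the kind of record the Privacy International report describes apps transmitting on launch. The field names, structure, and values are invented for this sketch and do not reflect Facebook’s actual SDK payload format.

```python
# Illustrative mock of an "app launch" record of the sort described in the
# Privacy International report. Every field name and value here is invented.
import json
from datetime import datetime, timezone

mock_app_event = {
    "app_package": "com.kayak.android",        # which app the user opened
    "event": "app_launch",                      # open/close lifecycle event
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # example AAID value
    "device": {"model": "Pixel 2", "os": "Android 8.1"},
    "locale": "en_US",                          # language setting hints at location
    "timezone": "America/Los_Angeles",          # time zone narrows it further
}

# A stable, cross-app identifier like the AAID is what allows records from
# otherwise unrelated apps to be joined into a single behavioral profile.
print(json.dumps(mock_app_event, indent=2))
```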

In response to the Wall Street Journal report, a Facebook spokesperson commented that it is common for developers to share information with a wide range of platforms for advertising and analytics. To be clear, the report focused on how other apps use people’s information to create Facebook ads. If it is common practice to share information across platforms, which on the surface appears to be true (although the way targeted marketing and data exchanges work is not entirely clear), then why are people so upset? Moreover, why did the report published by the Wall Street Journal spark regulatory action while the reports from Privacy International were not as polarizing?

Importance of Context

Helen Nissenbaum, an NYU researcher, criticizes the current approach to online privacy, which is dominated by discussion of transparency and choice. One central challenge to the whole paradigm is what Nissenbaum calls the “transparency paradox”: providing simple, digestible, easy-to-comprehend privacy policies is, with few exceptions, directly at odds with providing a detailed understanding of how data are really handled in practice. Instead, she argues for an approach that leverages contextual integrity to define the ways in which data and information ought to be handled. For example, if you operate an online bank, then the norms governing how information is used and handled in a banking context ought to apply whether the interaction is online or in person.

Now apply Nissenbaum’s approach to the specific topic of health applications sharing data: when someone annotates her menstrual cycle on her personal device, would she reasonably expect that information to be accessed and used by a social media platform (e.g., Facebook)? Moreover, would she reasonably expect her travel plans to Costa Rica to be algorithmically combined with her menstrual cycle information to determine whether she would be more or less inclined to purchase trip insurance? What if that information were then used to charge her more for the trip insurance? The number of combinations and permutations of this scenario is constrained only by one’s imagination.

Arguably, many of us would be uncomfortable with this contextual violation. Sharing flight information with Facebook, by contrast, does not provoke the same level of outrage as sharing health data, because the norms that govern health data privilege autonomy and privacy far more than those governing other commercial activities like airline travel. And while greater transparency would have been a meaningful step toward minimizing the public outrage in the health example, it is still not sufficient to remove the privacy harm that was, or could be, experienced.

As Nissenbaum has proposed, perhaps it is time we rethink the norms of how data are governed and ask whether informed consent, as practiced on today’s internet, is really a sufficient approach to protecting individual privacy. We can’t agree on much in America today, but keeping our medical histories safe from advertisers feels like one area where a majority of us might find common ground.

A Case Study on the Evolution and Effectiveness of Data Privacy Advocacy and Litigation with Google
By Jack Workman | March 3, 2019

2018 was an interesting year for data privacy. Multiple data breaches, the Facebook Cambridge Analytica scandal, and the European Union’s General Data Protection Regulation (GDPR) taking effect mark just a few of the many headlines. Of course, data privacy is not a new concept, and it is gaining prominence as more online services collect and share our personal information. Unfortunately, as 2018 showed, this gathered personal information is not always safe, which is why governments are introducing and exploring new policies and regulations like GDPR to protect our online data. Some consumers might be surprised to learn that this is not the first time governments have attempted to tackle data privacy. GDPR actually replaced an earlier EU data privacy initiative, the Data Protection Directive of 1995. In the US, California’s Online Privacy Protection Act (CalOPPA) of 2003 governs many actions involving privacy and is slated to be superseded by the California Consumer Privacy Act (CCPA) in 2020. Knowing this, you might be wondering: what’s changed? Why do these earlier policies need replacing? And are these policies actually effective in setting limits on data privacy practices? To answer these questions, we turn to the history of one of the internet’s most well-known superstars: Google.

Google: Two Decades of Data Privacy History

Google’s presence and contributions in today’s ultra-connected world cannot be overstated. It owns the most-used search engine, the most-used internet browser, and the most popular smartphone operating system. Perhaps more than any other company, Google has experienced, and been at the forefront of, the evolution of the internet’s data privacy debates.

As such, it is a perfect subject for a case study to answer our questions. Even better, Google publishes an archive of all of its previous privacy policy revisions, with highlights of what changed. Why are privacy policies important? Because a privacy policy is a document a company is legally required to publish to explain how it collects and shares personal information. If a company changes its approach to using personal information, that change should be reflected in a privacy policy update. By reviewing the changes between Google’s privacy policies, we can assess how Google responded to the major data privacy events of the last two decades of advocacy and policy, and what impact those events had on the company.
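As a rough sketch of the approach, and assuming the archived policy texts have been saved locally (the file names below are placeholders), a simple line diff is enough to surface what changed between two revisions:

```python
# Minimal sketch: diff two locally saved revisions of a privacy policy to see
# what was added or removed. File names are placeholders for texts pulled
# from Google's policy archive.
import difflib

with open("google_privacy_policy_1999-06.txt") as f:
    old_policy = f.read().splitlines()
with open("google_privacy_policy_2004-07.txt") as f:
    new_policy = f.read().splitlines()

# Lines prefixed with "+" were added in the later revision; "-" were removed.
for line in difflib.unified_diff(old_policy, new_policy,
                                 fromfile="1999-06", tofile="2004-07",
                                 lineterm=""):
    print(line)
```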

2004: The Arrival of CalOPPA

Google’s first privacy policy, published in June of 1999, is a simple affair: only 3 sections and 617 words. The policy remained mostly the same until July 1, 2004, the date CalOPPA went into effect, when Google added a full section on “Data Collection” and considerably more detail on how it shared your information. Both additions were required under the new regulations set forth by CalOPPA and can be considered positive steps toward more transparent data practices.

2010: Concerns Over Government Data Requests

A 2010 update brings the first mention of the Google Dashboard. The Dashboard, launched amid rising media attention on reports that Google shared user data with governments upon request, is a utility that lets users view the data Google has collected about them. This massive increase in transparency can be considered a big win for data privacy advocates.

2012: A New Privacy Policy and Renewed Scrutiny

March 2012 brings Google’s biggest policy change yet. In a sweeping move, Google overhauled its policy to give itself the freedom to share user data across all of its services. At least, all except for ads: “We will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent”. The move drew negative attention from international media and fines from government regulators.

2016: The Ad Wall Falls

With a simple, one-line change in its privacy policy, Google drops the barrier preventing it from using data from all of its services to better target its advertisements. This move shows that, despite previous negative attention, Google is not afraid of expanding its use of our personal information.

2018: The Arrival of GDPR

It is still far too soon to assess the impact of GDPR, but, if the impact on Google’s privacy policy is any indicator, then it represents a massive change. With the addition of videos, additional resources, and clearer language, it seems as if Google is taking these new regulations very seriously.

Conclusion

Comparing Google’s first privacy policy to its most recent depicts a company that has become more aware of, and more interested in communicating, its data practices. As demonstrated, this growth was driven by media scrutiny and governmental legislation along the way. However, while the increased transparency is appreciated, that same scrutiny and legislation have not prevented Google from expanding its use and sharing of our personal information. This raises a new question that will only be answered with time: will GDPR and the pending US regulations actually place real limits on the use of, and protections for, our personal information, or will they just continue to increase transparency?

Operation Neptune Spear
By Chris Sanchez | March 3, 2019

Almost eight years ago, at 11:35pm Eastern Time on May 1st, 2011, then-President Barack Obama announced Operation NEPTUNE SPEAR to the world:

“…the United States has conducted an operation that killed Osama bin Laden, the leader of al-Qaeda, and a terrorist who’s responsible for the murder of thousands of innocent men, women, and children.”

Neptune Spear Command Center

At the time, the American public was aware that the US was engaged in combat operations in Afghanistan, but the whereabouts of Osama bin Laden—including whether he was alive or dead—were unknown. The announcement by President Obama (which, by the way, interrupted my viewing of America’s Funniest Home Videos) confirmed to the American public that Osama bin Laden:

  • Survived the US invasion of Afghanistan in 2001.
  • Had been hiding in Pakistan for several years.
  • Was killed in the raid by a highly trained (but undisclosed) US military unit.

President Obama’s announcement and subsequent reporting provided additional details about the raid and the decisions leading up to it, but the primary substance of the event can be neatly summarized in the three bullet points above. Yet much to my shock and dismay, over the following days I watched as news channels reported leaked details of the event, including classified information such as the identity of the military unit responsible for the raid, its call signs, identifying features, and deployment rotation cycle. None of this disclosed information materially altered the narrative of what had happened or provided any particularly useful insight into this classified military operation.

Secrecy and representative democracy have long had a tumultuous relationship, and it is not likely to improve significantly in our Age of Information (On-Demand): there will always be a trade-off between government transparency and the desire to keep certain pieces of information hidden from the public in the name of national security, including economic, diplomatic, and physical security. And though it often takes major headline events—the Pentagon Papers (1971), WikiLeaks (2006), Edward Snowden (2013)—to jar the public consciousness, the resultant public discussion surrounding these events often finds that the balance between transparency and secrecy is neither well monitored nor well understood by those who are elected or appointed to safeguard both the public trust and the public’s overall security.

Take, for instance, the Terrorist Screening Database (TSDB, commonly known as the “terrorist watchlist”). The TSDB is managed by the Terrorist Screening Center, a multi-agency organization created in 2003 by presidential directive in response to the lack of intelligence sharing across governmental agencies prior to the September 11 terrorist attacks. People—both US citizens and foreigners—who are known or suspected of having terrorist organization affiliations are placed into the TSDB, along with unique personal identifiers, including, in some cases, biometric information. This central repository of information is then exported across federal agencies (the Department of State, the Department of Homeland Security, the Department of Defense, etc.) to aid in terrorist identification across passive and active channels.

TSDB Nomination Regimen

In the aftermath of the 9/11 attacks and subsequent domestic terror incidents, one would be hard-pressed to argue that the TSDB is not a useful and necessary information-sharing tool for US law enforcement and other agencies responsible for domestic security. But as in other instances of the government claiming the necessity of secrecy in the name of national security, there are indications that the secrecy/transparency balance is tilted in favor of unnecessary secrecy. A 2014 report from The Intercept, an award-winning news organization, presented evidence that 280,000 people in the TSDB (almost half the total number at the time) had no known terrorist group affiliation. How and why were these unaffiliated people placed into this federal database? The consequences of being placed in the TSDB are not trivial. Depending on the circumstances, individuals in the TSDB can find themselves on the “no-fly list”, have visas denied, be subjected to enhanced screenings at various checkpoints, and have their personal information (including biometric information) exposed across multiple organizations.

With an average of over 1,600 daily nominations to the TSDB, I am hard-pressed to believe that due diligence is conducted on every one of those names, despite what the Federal Bureau of Investigation claims in the FAQ section of its Terrorist Screening Center website regarding the thoroughness of the TSDB nomination process. Furthermore, once nominated, individuals find it very cumbersome to correct or remove their records in the TSDB, in spite of a formal appeals procedure mandated by the Intelligence Reform and Terrorism Prevention Act of 2004. The Office of the Inspector General at the Department of Justice has criticized the maintainers of the TSDB for “…frequent errors and being slow to respond to complaints”. A 2007 Inspector General report found a 38% error rate in a 105-name sample from the TSDB.

As long as we live in a representative democracy that values individual privacy, free and open discussion of policy, and the applicability of Constitutional principles to all US citizens, there will always be “friction” at the nexus of government responsibility, public trust in governmental institutions, and secrecy. Trust in US governmental institutions has slowly eroded over time, due in large part to public access to previously hidden information that turned out to contradict what people had been told or led to believe. Experience has shown that publicly elected representatives are often not enough of a check on the power of government agencies to strike an appropriate balance between secrecy and transparency. Fortunately, though not perfect in their efforts to right perceived wrongs, public advocacy organizations, academic institutions, investigative journalists, constitutional lawyers, and concerned citizens have made much progress at this nexus.

In my experience, which includes being on the front lines of the War on Terror from 2007-2013, the men and women who make up our “government institutions”, while imperfect, generally do have the best interests of the nation as a whole in mind when carrying out their responsibilities. Given the limitations of human decision-making in times of both crisis and tranquility, there is a tendency to err on the side of secrecy in the name of security. Taken to extremes, however, this mentality can result in significant abuses of power ranging from moderate invasions of privacy to severe abuses of personal freedoms. To compound the situation, the public erosion of trust in government creates a certain level of suspicion behind every governmental action that is not completely “above board”, even when there are very good reasons for non-public disclosure of information (such as the operational details described in the Operation NEPTUNE SPEAR example cited at the beginning of this article). At the end of the day, the government will take the measures it deems necessary to secure the safety of its citizenry, even if such actions come at the expense of the rights of minority groups or those who do not find themselves in political power. I think it’s our job as vigilant citizens to ensure that the balance of power is restored once the real or perceived crisis has passed.

How transparent does a government need to be? In a representative democracy it needs to be as transparent as possible without compromising public safety and security. How the US government and its citizens decide to strike that balance over the coming generations will be an interesting discussion indeed.

Primary Sources
1. https://en.wikisource.org/wiki/Remarks_by_the_President_on_Osama_bin_Laden
2. https://fas.org/sgp/crs/terror/R44678.pdf
3. https://theintercept.com/2014/08/05/watch-commander/
4. https://www.fbi.gov/file-repository/terrorist-screening-center-frequently-asked-questions.pdf/view