Ethical Implications of Generative AI

By Gabriel Hudson | April 1, 2019

Generative data models are rapidly growing in popularity and sophistication in the world of artificial intelligence (AI). Rather than using existing data to classify an individual or predict some aspect of a dataset, these models actually generate new content. Recent developments in generative data modeling have begun to blur the lines not only between real and fake, but also between machine- and human-generated content, creating a need to examine the ethical issues that arise as these technologies evolve.

Bots
Bots are an older technology that has already been applied across a wide range of functions, such as automated customer service and targeted personal advertising. Bots are generative (almost exclusively generating language), but historically they have been narrow in function and limited to brief interactions on a specified topic. In May 2018, Google debuted a bot system called Duplex that was able to successfully "fool" a significant number of test subjects while carrying out everyday tasks such as booking restaurant reservations and making hair salon appointments (link). This, combined with the ubiquity of digital assistants, sparked a resurgence in bot development.

Deepfake
In this context, deepfake is a generalized term used to describe very realistic media (such as images, videos, music, and speech) created with an AI technology known as a Generative Adversarial Network (GAN). GANs were originally introduced in 2014 but came into prominence when a new training method was published in 2018. GANs are the technology behind seemingly innocuous generated media such as the first piece of AI-generated art ever sold (link):

as well as a much more harmful set of false pornographic videos created using celebrities' faces (link):

The key technologies in this area were fully released to the public upon their completion.

OpenAI's GPT-2
In February 2019, OpenAI (a non-profit AI research organization co-founded by Elon Musk) released a report claiming a significant technological breakthrough in generating human-sounding text, along with promising sample results (link). OpenAI, however, against longstanding trends in the field and its own history, chose not to release the full model, citing the potential for misuse on a large scale. Similar breakthroughs in generative technology for other media, such as images, have been released to the public. All of the images in the subsequent frame were generated with technology developed by Nvidia.

In limiting access to a new technology, OpenAI brought to the forefront a discussion about how the rapid evolution of generative models should be handled. Now that nearly indistinguishable "false" content can be generated in large volume with ease, it is important to consider who is tasked with deciding on and maintaining the integrity of online content. In the near future, this discussion must extend to the responsibilities of both consumers and distributors of data, and to how their "right" to know fact from fiction and human from machine may be changing.

ESG Investing and Data Privacy

By Nate Velarde | March 31, 2019

Much of the focus on how to better protect individuals' data privacy revolves around legal remedies and more stringent regulatory requirements. Market-based solutions are either not discussed or are seen as unrealistic, ineffective, or impractical. However, the "market," in the form of "responsible" or "sustainable" investors, is imposing market discipline on companies with insufficient data privacy safeguards through lower share prices and by redirecting investment capital to companies with lower data privacy risks. Responsible investing as a market force is poised to grow dramatically: BlackRock, the world's largest asset manager, forecasts that responsible investing strategies will comprise 21% of total fund assets by 2028, up from only 3% today.

Responsible investing involves the integration of environmental, social, and governance ("ESG") factors into investment processes and decision-making. Many investors recognize that ESG information is vital to understanding a company's business model, strategy, and management quality. Several academic studies have shown that good corporate sustainability performance is associated with good financial results and superior investment returns. The best-known ESG factors with financial relevance are those related to climate change, because climate change is no longer a hypothetical threat but a real one, with multi-billion-dollar consequences for investment portfolios.

Why Do ESG Investors Care About Data Privacy?

ESG investors are becoming increasingly focused on data privacy issues. Under the ESG framework, data privacy is considered a human rights issue, falling under the "S" of ESG. Privacy is a fundamental human right according to international norms established by the United Nations and the US and EU constitutions, but it is increasingly at odds with the business models of technology companies. As these companies have become more reliant on personal data collection, processing, and distribution, they have faced increased scrutiny from users and regulators, heightening reputational, litigation, and regulatory risks.

Data has been dubbed the "new oil," the commodity that powers the digital economy. But, as investors are finding, scandals caused by privacy breaches can be just as damaging to tech behemoths as oil spills are to fossil fuel companies. Facebook-Cambridge Analytica was the tech industry's Exxon Valdez moment with regard to data privacy: $120 billion was wiped off Facebook's market value in the aftermath of the scandal. Many of the sellers were ESG investors who sold the stock because of what they perceived as Facebook's poor data stewardship.

For ESG investors, data privacy risk has become a crucial metric in assessing the companies in which they invest. ESG funds are pushing companies to be more transparent in their data-handling processes (collection, use and protection) and privacy safeguards with shareholders. ESG investors want companies to be proactive and self-regulate rather than wait for government involvement, which often tends to be overbearing and ultimately, more damaging to long-term profitability.

How ESG Investors Advocate for Data Privacy

ESG investors have three levers for advocating stronger privacy safeguards: one carrot and two sticks. The first is dialogue with senior management. As shareholders and/or potential shareholders, ESG investors are given the opportunity to meet regularly with the CEO, CFO, and other key executives. ESG investors use this management face time to discuss business opportunities and risks, of which privacy is top of mind. They can highlight deficiencies in privacy policies (relative to what they see as industry best practice) and advocate for increased management and board oversight, spending on privacy and security audits and staff training, and a shift in executives' mindset toward designing privacy into their products and services. The key message ESG investors convey to tech executives is that companies that are better at managing privacy risks have a lower probability of suffering incidents that meaningfully impact their share price. Any direct incremental expense associated with privacy risk mitigation is minuscule (in dollar terms) compared with the benefit of the higher share price valuation associated with lower risk.

As demonstrated by the Facebook-Cambridge Analytica share price sell-off in mid-2018, ESG investors' second lever is to vote with their feet and sell their shares if companies fall short of data privacy expectations. Large share price declines are never pleasant, but they are often temporary. As long as business model profitability is not permanently impaired, the share price will in most cases eventually recover. Management may then not feel enough pain to see through the hard work of implementing the technical and cultural changes required to adequately protect their users' data. This is when ESG investors' third lever can be deployed. Acting in concert with other shareholders, ESG investors can engage in a proxy fight and vote to replace the company's management and/or board with one more focused on data privacy concerns. The mere threat of a proxy fight has proved to be a powerful catalyst for change at companies across many industries. While this has yet to happen specifically over data privacy, given the growing market power of ESG investors and their focus on privacy issues, that day is likely to come sooner rather than later.

Conclusion

Data privacy researchers and advocates should establish relationships with ESG investors, ESG research firms (such as Sustainalytics), and influential proxy voting advisory firms (Institutional Shareholder Services and Glass Lewis) to highlight concerns, make recommendations, and shape the overall data privacy conversation at publicly traded technology companies. Data privacy advocacy through ESG investors is a more direct, and likely much faster, route to positive change (albeit incremental) than litigation or regulation.

The Privacy Tradeoff

By John Pette | March 31, 2019

Privacy is often referenced as an all-or-nothing proposition: either one has it or one does not. In the realm of data, though, privacy exists on a continuum. It is a tradeoff between the benefits of having data readily available and the protection of people's privacy. There is tremendous gray area in this discussion, but some things are clear. Few would argue that all Social Security numbers should be public. Things like people's names and addresses are less clear. It is easy to argue that these data have always been publicly available in America via the White Pages. This is not a valid argument, because it ignores context. While that information was certainly available, the internet was not. Name, phone, and address records were not collected in one location; they existed only at the local level, and were not digitized. As such, there were limits to the danger of dissemination, and there was only so much a bad actor could do with the information. In the modern world, anyone can use these basic data elements to commit fraud from anywhere in the world. The context has changed, and the need to protect information has changed with it.

Of course, to what extent data should be protected is also a gray area. Technology and, arguably, society benefit greatly from data availability. People want Waze to work reliably. Many of those same people probably do not want Google to track their locations. It is easy to go too far in either direction. These sorts of situations should all have privacy assessments to evaluate the benefits and risks.

The privacy tradeoff is particularly tricky in government, which has the responsibility for protecting its citizens but also an obligation of transparency. In studying public crime data from all U.S. municipalities with populations of more than 100,000, I uncovered enormous differences in privacy practices. Some cities made full police reports publicly available to any anonymous user, exposing the private details of anyone involved in an incident. Others locked down all data under a blanket statement like, "All data are sensitive. If you want access to a report, file a FOIA request in person." In the latter case, the data are certainly protected, but the police departments provide no data of value to their citizens. At the risk of making a fallacious "slippery slope" argument, I fear the expansion of government use of privacy as a catch-all excuse for hiding information and eliminating transparency. The control of information is a key element of any authoritarian regime, and it is easy to reach that point without the public noticing.

The Freedom of Information Act (FOIA) is intended to provide the American public transparency in government information. It is a flawed system with good intentions. Having worked in an office responsible for FOIA responses for one government bureau, I have seen both sides of FOIA in action. When people discuss their FOIA requests publicly, it is generally in the form of complaints, and usually in one of two contexts:

  1. “They are incompetent.”
  2. “They’re hiding something.”

Most of the time, no one is intentionally hiding anything, though that makes for the most convenient conspiracy theories. In reality, there is an enormous volume of FOIA requests. Records are not kept in any central database, so each response requires every involved employee to dig through their email, on top of regular jobs that are already full-time affairs. Then each response goes through multiple legal reviews to redact the private data of U.S. citizens. Eventually, this all gets packaged, approved, and delivered to the requestor. It is far from a perfect system. However, it does, to a sufficient degree, serve its original intent. As long as FOIA is in place and respected, I do not see the information-control aspect of government devolving into authoritarianism.

What is the proper balance? This is the ultimate question in the privacy tradeoff. Privacy risk should be assessed with every new technology or application that could contain threats of exposure, and the benefits should always outweigh those risks to the public. If companies provide transparency in their privacy policies and mechanisms for privacy data removal, the benefits and risks should coexist harmoniously.

The Bias is Real

By Collin Reinking | March 31, 2019

In 2017, after a very public controversy in which a Google employee wrote about Google's "ideological echo chamber," 57.1% of respondents to a poll conducted by Digital Examiner said they believed search results were "biased" in some way. In the years since, Google and other big tech companies have increasingly found themselves at the center of public debate about whether their products are biased.

Of course they are biased.

This is nothing new.

That doesn’t mean it can’t be a problem.

Search is Biased
In its purest form, a search engine filters the massive corpus of media hosted on the World Wide Web down to just the selections that relate to our desired topic. Whatever bias that corpus has, the search engine will reflect. Search engines are biased because the Internet is made by people, and people are biased.

This aspect of bias rose to the international spotlight in 2016 when a teenager from Virginia posted a video showing how Google's image search results for "three white teenagers" differed from the results for "three black teenagers". The results for "three white teenagers" were dominated by stock photos of three smiling teens, while the results for "three black teenagers" were dominated by mugshots (performing the same searches today mostly returns images from articles referencing the controversy).

In its response to the controversy, Google asserted that its image search results are driven by which images appear next to which text on the Internet. In other words, Google was only reflecting the corpus its search engine was searching over. Google didn't create the bias; the Internet did.
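Google's explanation is easy to reproduce in miniature. Below is a toy sketch, with an invented four-document "corpus", showing how an entirely neutral keyword filter passes the corpus's skew straight through to its results. None of this resembles Google's actual ranking code; it only illustrates the principle.

```python
# Toy illustration: a keyword "search engine" over a skewed corpus.
# The corpus is invented for illustration only.
corpus = [
    "three white teenagers smiling stock photo",
    "three white teenagers at the beach stock photo",
    "three black teenagers mugshot local news",
    "three black teenagers mugshot arrest report",
]

def search(query, docs):
    """Return every document containing all query terms, in corpus order."""
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower() for t in terms)]

# The engine applies the same neutral rule to both queries...
white = search("three white teenagers", corpus)
black = search("three black teenagers", corpus)

# ...yet one result set is all stock photos and the other all mugshots,
# because the underlying corpus differs, not the algorithm.
print(white)
print(black)
```

The filter contains no notion of race at all; the disparity in its output comes entirely from the documents it was given.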

This is Not New
Before the Internet there were libraries. Before search engines there were card catalogs, many of which relied on the Dewey Decimal Classification system. Melvil Dewey was a serial sexual harasser whose classification system reflected the racism and homophobia, along with other biases, that were common in the dominant culture at the time of its invention in 1876. If you had searched the Google of 1919 for information about homosexuality, you would have landed in the section for abnormal psychology, or similar. Of the 100 numbers in the system dedicated to religion, 90 of them covered Christianity. Google didn’t invent search bias.

Pick your Bias
Do we want Google, or any other company, to filter or distort our view of the corpus? This is the first question we must ask ourselves when considering the conversation around "fixing" bias in search. Some instances clearly call for action, such as Google's early image-labeling efforts failing to adequately distinguish images of African Americans from images of gorillas. Other questions, like how to handle content that some might consider political propaganda or hate speech, are murkier and would require Google to serve as an arbiter of truth and social norms.

But We Know The Wrong Answer When We See It
Google is currently working to build an intentionally censored search engine, Dragonfly, to allow itself to enter the Chinese market. This project, which is shrouded in more secrecy than usual (even for a tech company), is the wrong answer. Google developing a robust platform for managing censorship is basically pouring the slippery onto the slope. With the current political climate, both here in the United States and around the globe, it is not hard to imagine actors of all political stripes looking to exert more control over the flow of information. Developing an interface for bias to exert that control is not a solution; it's a problem.

A New Danger To Our Online Photos

By Anonymous | March 29, 2019

This is the age of photo sharing.

We humans have replaced some of our socialization needs by posting our captured moments online. Those treasured pictures on Instagram and Facebook fulfill many psychological and emotional needs, from keeping in touch with our families, reinforcing our egos, and collecting our memories, to keeping up with the Joneses.

You knew what you were doing when you posted your Lamborghini to your FB group. Photo credit to @Alessia Cross

We do this even when the dangers of posting photos at times appear to outweigh the benefits. Our pictures can be held for ransom by digital kidnappers, used in catfishing scams, used to power fake gofundme campaigns or be gathered up by registered sex offenders. Our photos could expose us to real world perils such as higher insurance premiums, real life stalking (using location metadata) and blackmail. This doesn’t even include activities which aren’t criminal but still expose us to harm – like our photos being used against us in job interviews, being taken out of context or being used to embarrass us years later. As they say, the internet never forgets.

As if all this wasn't enough, our private photos are now being used by companies to train their algorithms. According to this article in Fortune, IBM released a collection of nearly a million photos that were scraped from Flickr and then annotated to describe the subjects' appearance. IBM touted the collection as a way to help eliminate bias in facial recognition. The pictures were used without consent from the photographers or subjects; IBM relied on "Creative Commons" licenses to use them without paying licensing fees.

IBM has issued the following statement:

IBM has been committed to building responsible, fair and trusted technologies for more than a century and believes it is critical to strive for fairness and accuracy in facial recognition. We take the privacy of individuals very seriously and have taken great care to comply with privacy principles, including limiting the Diversity in Faces dataset to publicly available image annotations and limiting the access of the dataset to verified researchers. Individuals can opt-out of this dataset.

Opting out, however, is easier said than done. Removing any images requires photographers to email IBM links to the images they would like removed, which is difficult since IBM has not revealed the usernames of the users whose photos it pulled.

Given all the dangers our photos are already exposed to, it might be easy to dismiss this. Is a company training models on your pictures really more concerning than, say, what your creepy uncle is doing with downloaded pictures of your kids?

Well, it depends.

The scary part of our pictures being used to train machines is how much we don't know. We don't know which companies are doing it, and we don't know what they are doing it for. They could be doing it for a whole spectrum of purposes, from the beneficial (making camera autofocus algorithms smarter) to the innocuous (detecting whether someone is smiling) to the possibly iffy (detecting whether someone is intoxicated) to the ethically dubious (detecting someone's race or sexual orientation) to the downright dangerous (teaching Terminators to hunt humans).

It’s all fun and games until your computer tries to kill you. Photo by @bwise

Not knowing means we don't get to choose. Our online photos are currently treated as a public good and used for any conceivable purpose, even purposes we would not support and that may be harmful to us. Could your Pride Parade photos be used to train detection of sexual orientation? Could insurance companies use your photos to train detection of participation in risky activities? Could T-1000s use John Connor's photos to figure out what Sarah Connor would look like? Maybe these are extreme examples, but it is not much of a leap to think there might be companies developing models that you would find objectionable. And now your photos could be helping them.

All of this is completely legal, of course, though it goes against the principles laid out in the Belmont Report: it lacks consent (Respect for Persons), it provides no real advantage to the photographers or subjects (Beneficence), and the benefits all go to the companies exploiting our photos while we absorb all of the costs (Justice).

With online photo sharing, a Pandora's box has been opened, and there is no going back. As much as your local Walgreens Photo Center might wish otherwise, wallet-sized photos and printed glossies are things of the past. Online photos are here to stay, so we have to do better.

Maybe we can start with not helping Skynet.

Hasta la vista, baby.

Sources:

Millions of Flickr Photos Were Scraped to Train Facial Recognition Software, Emily Price, Fortune March 12, 2019, http://fortune.com/2019/03/12/millions-of-flickr-photos-were-scraped-to-train-facial-recognition-software/

Apple’s Privacy Commercial: A Deconstruction

By Danny Strockis | March 29, 2019

On March 14, Apple released their most recent advertisement, 'Privacy on iPhone – Private Side'. Reflecting classic Apple style, it's a powerful piece of subliminal advertising that is both timely and emotional. In 54 short seconds, Apple comments on a variety of privacy matters and not-so-subtly positions their company and products as the antidote to the surveillance economy run by the likes of Google and Facebook.

Privacy is a notoriously difficult concept to define; there’s not a single way to capture its full essence. So let’s have a closer look at Apple’s commercial and break down the privacy messages within. I’ll also touch on implications of the commercial as a whole.

0:03 – Keep out

Apple starts out by touching on the privacy of the home. A home is a hugely valued center of privacy, where people feel most comfortable. When the privacy of a home is violated, harsh reactions often follow. Examples of recent home privacy violations include the introduction of always-on personal assistants like Google Home, Amazon’s ability to deliver packages inside your home, and Google’s wifi-sniffing self-driving cars.

This scene simultaneously addresses what lawyers Warren and Brandeis in 1890 called "the right to be let alone". When we feel that our solitude has been violated against our wishes, we often claim a privacy violation has taken place. Privacy expert Daniel Solove would identify these as violations of "Surveillance" or "Intrusion".

0:08 – The eavesdropping waitress

I personally identify with this next scene; just last week I found myself pausing a conversation with my brother to let a waitress refill my water. It’s a unique way for Apple to comment on the importance of privacy in conversations, even when those conversations take place in a public forum. We often say or post something in a forum that is public, and yet maintain a certain expectation around the privacy of our words. Many online examples exist of intrusion on conversations – for instance, Facebook reading texts to serve advertisements.

0:19 – Urinals

The most laugh-out-loud scene playfully acknowledges our desire for privacy of our physical selves and bodies. It's an often under-appreciated part of privacy in technology, but with the advent of selfies, fitness trackers, and always-on video cameras, protection of people's physical selves is an increasingly relevant subject. Solove calls privacy violations of this nature "Exposure".

0:23 – Paper shredder

In perhaps the most direct scene, Apple succinctly addresses many important topics in data privacy. The credit application being shredded contains many pieces of highly sensitive personally identifiable information, which in the wrong hands could be used for identity theft and many other privacy violations. The scene plainly depicts our desire to keep our information out of the wrong hands and to exercise some control over our data.

Importantly, this scene also personifies our desire to have our information destroyed once it is no longer needed. While many online businesses have made a habit of collecting historical information for eternal storage, policies in recent years have begun to enforce maximum retention periods and the right to be forgotten.

0:25 – Window locks

A topic closely associated with privacy, especially online, is security. When a company's databases are breached and our personal information is leaked to hackers, we feel our privacy has been violated in what Solove would call "Insecurity" or "Disclosure". In the offline world, we take great strides to ensure our security, like locking our windows or installing a home security system. In the online world, security is often far more out of our control; we are only as safe as the weakest security practices of the websites we visit.

0:31 – Traffic makeup

Interestingly, the final and longest segment of the commercial is perhaps the least obvious (but maybe that's because I'm a male). I believe the image of a woman being watched while she applies makeup is primarily intended to show that we don't want creepy observers in our lives. But I like to think Apple is commenting on something else here: our desire to control how we are perceived in life.

Solove says that “people want to manipulate the world around them by selective disclosure of facts about themselves… they want more power to conceal information that others might use to their disadvantage.” A person’s right to control their physical appearance might be the most basic form of this desire. When the unwelcome driver in the next lane watches the businesswoman apply makeup, he steals from her the ability to control her appearance to the world.

Apple and Privacy

Apple has been heralded as a privacy- and security-conscious company, an industry leader amongst a sea of companies with lackluster views of consumer privacy. A flagship example of Apple's commitment to privacy is its early adoption of end-to-end encryption in iMessage, which protects the privacy of your written conversations on the Apple platform. Apple has also made a point of highlighting that, even though it could, it has chosen not to collect information on its customers and use it for advertising or secondary uses. Apple likes to say its "customer is not its product."

Even still, Apple hasn't been immune to privacy problems of its own. A 2014 breach of iCloud celebrity accounts made headlines. More recently, a FaceTime bug allowed callers to view the recipient's video feed before the recipient accepted the call. Loose rules about third-party application access to user information have also come under scrutiny.

Reception for Apple's privacy commercial has been largely positive, though some have highlighted Apple's imperfect privacy record. But I believe the more significant event here is the promotion of privacy into the forefront of consumer advertising. Apple has long been an innovator in creative advertising, and the fact that it has promoted privacy so heavily suggests Apple believes privacy has reached a tipping point and become something customers look for in purchasing decisions. This goes against previous research, which has shown that consumers de-prioritize privacy in favor of other factors until all other factors are equal; privacy historically takes a stark back seat to price, convenience, appeal, and functionality.

Apple seems to think privacy concern is at an all-time high and represents a business opportunity for the company. Google search trends for the "privacy" topic would say otherwise:

Only time will tell if privacy has become enough of an issue to drive a change in Apple’s bottom line. But for their part, Apple has once again done a masterful job of distilling a highly complex range of emotions into a beautiful and powerful piece of art.

Health Insurance and Our Data

By Ben Thompson | March 29, 2019

Health insurance is a part of life. It is something most of us need and have to purchase each year, whether we like it or not. If you're lucky, this involves picking a plan offered by an employer who takes on some of the financial burden, which is helpful considering that plan prices continue to rise. Otherwise, you're left to either choose a full-price plan from the marketplace (at least $1,000/month for a family of 3) or risk going uninsured, paying out of pocket for any medical expenses and rolling the dice that a catastrophe won't leave you in major debt.

When you sign up for a health insurance plan, you probably assume the insurance company has some of your basic data, like the data you shared when you signed up or some medical records. You might be surprised to find out that insurance companies collect far more than basic demographic information. These companies are collecting, or purchasing, all sorts of data about people, including information on income, hobbies, social media posts, recent purchases, types of cars owned, and television viewing habits. This should be concerning. Consider that health insurance companies make more money by insuring healthy people than unhealthy ones. What if they begin to use data to predict who is healthier and make coverage decisions based on those predictions? For example, LexisNexis collects data on people, uses hundreds of non-medical personal attributes like those previously mentioned to estimate the cost of insuring a person, and sells this information to insurance companies and actuaries. LexisNexis says this information is not used to set plan prices, but there are no laws prohibiting it.
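To make the concern concrete, here is a deliberately simplified, entirely hypothetical sketch of the kind of scoring model a data broker could build from non-medical attributes. Every attribute, weight, and profile below is invented for illustration; nothing here describes LexisNexis's actual methodology.

```python
# Hypothetical sketch: estimating "insurability cost" from purchased,
# non-medical attributes. All fields and weights are invented.
WEIGHTS = {
    "income_low": 1.5,                # proxy signals, not medical facts
    "owns_motorcycle": 2.0,
    "frequent_fast_food_buyer": 1.0,
    "gym_membership": -1.5,           # negative weight lowers the score
    "heavy_tv_viewer": 0.5,
}

def risk_score(profile):
    """Sum the weights of the attributes present (truthy) in a profile."""
    return sum(w for attr, w in WEIGHTS.items() if profile.get(attr))

# A profile assembled entirely from purchase and lifestyle data.
person = {
    "income_low": True,
    "owns_motorcycle": True,
    "gym_membership": False,
}

score = risk_score(person)
print(score)  # 3.5 -- a number an insurer could buy, with no medical input at all
```

The point of the sketch is that none of these inputs come from a medical record, so none of them fall under health-record regulation, yet the output is a number that can shape how a person is treated.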

Currently, HIPAA and the Genetic Information Nondiscrimination Act regulate only how health records are used, not other data, and even the regulations on health records are fairly lax. For example, the Genetic Information Nondiscrimination Act does not apply to life insurance. This means a life insurer can use your genetic data, if you've had a genetic test like 23andMe or Ancestry.com, to alter your policy. If you refuse to share the requested data after having a test done, the insurer can legally terminate your policy. More generally, there are no laws restricting what non-health data insurance companies can collect about you or how they use it. It is all fair game.

When you use the internet, you don't assume your actions could influence your ability to get fair health coverage. You don't anticipate that data brokers are tracking your every action, attempting to infer as much as possible about you, often getting it wrong, and selling it all to insurance companies. There is ample evidence that the data brokers compile is frequently incorrect. We need to start demanding policies that regulate what data insurance companies can collect and how they can use it.

With the passage of the GDPR in the EU, an EU citizen can legally request to view all of the data an insurer has on them, request that it be deleted from the insurer’s databases, and make corrections to it. It is time for the U.S. to implement similar regulations. These straightforward rights would move the U.S. a long way toward ensuring that everyone has access to fair health coverage and control over their personal data.

Sources:
Allen, Marshall. “Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates.” Propublica, Propublica, 17 Jul. 2018, https://www.propublica.org/article/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates

Andrews, Michelle. “Genetic Tests Can Hurt Your Chances Of Getting Some Types Of Insurance.” NPR, NPR, 7 Aug. 2018, https://www.npr.org/sections/health-shots/2018/08/07/636026264/genetic-tests-can-hurt-your-chances-of-getting-some-types-of-insurance

Leetaru, Kalev. “The Data Brokers So Powerful Even Facebook Bought Their Data – But They Got Me Wildly Wrong.” Forbes, Forbes, 5 Apr. 2018, https://www.forbes.com/sites/kalevleetaru/2018/04/05/the-data-brokers-so-powerful-even-facebook-bought-their-data-but-they-got-me-wildly-wrong/#69dd2f843107

Miller, Caitlyn Renee. “I Bought a Report on Everything That’s Known About Me Online.” The Atlantic, The Atlantic, 6 Jun. 2017, https://www.theatlantic.com/technology/archive/2017/06/online-data-brokers/529281/

Morrissey, Brian et al. “The GDPR and key challenges faced by the Insurance industry.” KPMG, KPMG, Feb. 2018, https://assets.kpmg/content/dam/kpmg/ie/pdf/2018/03/ie-gdpr-for-insurance-industry.pdf

Probasco, Jim. “Why Do Healthcare Costs Keep Rising.” Investopedia, Investopedia, 29 Oct. 2018, https://www.investopedia.com/insurance/why-do-healthcare-costs-keep-rising/

Song, Kelly. “4 Risks consumers need to know about DNA testing kit results and buying life insurance.” CNBC, CNBC, 4 Aug. 2018, https://www.cnbc.com/2018/08/04/4–risks-consumer-face-with-dna-testing-and-buying-life-insurance.html

Cross-Border Data Transfers & Privacy

Cross-Border Data Transfers & Privacy
By Anne Yu | March 22, 2019

Cross-border Data Transfers (CBDTs):
Personal data collected in one location are transferred to a third country or an international organization.

Nowadays the global economy depends on CBDTs. Modern governments recognize the

importance of moving data freely to wherever those data are needed…the economic and social benefits of protecting the personal information of users of digital trade.

But there is also a trend toward countries restricting data exchange and requiring data localization.

CBDTs restrictions fall into two general categories:

1. Privacy Regulations: the transfer process is subject to compliance with a set of conditions, including conditions for onward transfer. Once the conditions are met, transfers are allowed. These regulations typically cover a variety of matters [1]; if they are overlooked or jeopardized, operators face legal and civil liability. Typical conditions include:

    • Data subject consent
    • Data anonymization
    • Breach notification
    • Appointment of data protection officers
        • One government agency with enforcement authority
        • One third-party accountability agent

2. Data Localization: a ban on transferring data out of the country, or a requirement that organizations build or use local infrastructure and servers.
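As a minimal illustration of one of the conditions above, data anonymization (more precisely, pseudonymization) can be sketched as replacing direct identifiers with keyed hashes before a record leaves the country. The key, field names, and record below are hypothetical, and a real compliance program involves far more than this:

```python
import hashlib
import hmac

# Hypothetical secret held only by the exporting organization;
# in practice it would live in a key-management system.
SECRET_KEY = b"example-key-do-not-use-in-production"

def pseudonymize(value):
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Keying the hash prevents outsiders from reversing pseudonyms
    with a dictionary attack, while keeping them deterministic so
    records can still be joined after transfer.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_transfer(record, direct_identifiers):
    """Return a copy of the record with direct identifiers pseudonymized."""
    out = dict(record)
    for field in direct_identifiers:
        if field in out:
            out[field] = pseudonymize(str(out[field]))
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}
safe = prepare_for_transfer(record, ["name", "email"])
```

Note that pseudonymized data is generally still regulated as personal data under the GDPR, since the key holder can re-identify it; this sketch only shows the mechanics of one technical safeguard.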

Cross-Border Privacy Regulations

CBDT rules and policies vary depending on where the data come from and where they go. The type of data also matters. For example,

  • The EU GDPR generally prohibits CBDTs of personal data outside EU territory, unless the transfer is to a third country for which a set of conditions is fulfilled.
  • The USMCA promotes cross-border data flows under less strict conditions.
  • The APEC CBPRs established a principles-based model for national privacy laws, recognizing the importance of effective privacy protections that avoid barriers to information flows. Each APEC member was encouraged to implement domestic privacy laws based on the principles in this framework, which is less strict.
  • Other countries, like China and South Korea, may have even stricter rules.

When Do CBDTs Happen?

Surprisingly, CBDTs occur in many everyday situations. For example,

  • Corporate emails and customer support communications.
  • Data analysis to optimize global logistics.
  • Outsourced services.
  • HR for global workforces.
  • Global research collaborations.
  • Using the Internet to query, post, or update information located overseas.

We will discuss some of these situations in the case studies below.

There are other unexpected situations, for example, accidental or intentional data breaches.

Some Case Studies

Overall, one can query dlapiperdataprotection.com for a better understanding. The service can also compare the laws of two countries side by side.

European Union (EU) Countries: General Data Protection Regulation (GDPR)

The GDPR contains a set of rules protecting the personal data of all EU residents and visitors. It also strictly regulates transfers of data across borders, with significant fines and penalties for non-compliant data controllers and processors. The regulation, which took effect in 2018, focuses especially on EU citizens’ data. For more details, see GDPR Articles 44–49, which lay out the conditions under which data can be transferred beyond the EU/EEA.

Canada and Mexico (USMCA)

In 2018 the U.S., Mexico, and Canada announced a new trade agreement, the United States-Mexico-Canada Agreement (USMCA). Its digital trade chapter builds on the APEC Cross-Border Privacy Rules (APEC CBPRs) and aims to

> “adopt or maintain a legal framework that provides for the protection of the personal information of the users.”

USMCA formally recognizes the “APEC CBPRs” within their respective legal systems. [2]

Asia-Pacific (APEC CBPRs)

The APEC CBPRs system was developed by the 21 APEC member economies as a cross-border transfer mechanism and comprehensive privacy program for private-sector organizations, enabling the accountable free flow of data across the APEC region. The system has now been formally joined by the United States, Canada, Japan, and Mexico.

Comparison of CBPRs and GDPR

The rest of the world

While most countries follow either the GDPR or the APEC CBPRs, some impose their own systems that regulate CBDTs more strictly.

Russia

Russia enacted data protection (privacy) laws that permit CBDTs as long as the operator ensures that the recipient state provides adequate protection of personal data.

China
In 2017, the Cybersecurity Law of the People’s Republic of China (CSL) was published, including policies involving cross-border data transfer. It is much stricter [3]. In general,

it requires Critical Information Infrastructure (CII) to localize data within the territory of China and all network operators to conduct security assessments prior to the data export.

South Korea
South Korea is stricter as well, with new constraints on CBDTs. In March 2019, amendments to the IT Networks Act took effect. Most importantly, they require appointing a local agent responsible for Korean data privacy compliance for cross-border transfers [4].

Data Localization
At the other extreme, banning data transfers altogether is called Data Localization (or Data Residency):

a requirement that data about a nation’s citizens or residents be collected, processed, and stored inside the country. Demand for localization increased after ex-CIA employee Edward Snowden leaked highly classified NSA information in 2013. Governments in Europe and across the world are also starting to recognize the perils of unrestricted data flows, and the emerging trend is to require that data be processed locally before serving higher-level applications. Germany and France were the first to approve data localization laws, followed by the EU in 2017.

Data Types Matter as well
Each country may have its own laws for different types of data. For example, Australia regulates its health records, Canada restricts personal data held by public service providers, and China restricts more, including personal, business, and financial data [5].

Reference
[1] Top 10 operational impacts of the GDPR: Part 4 – Cross-border data transfers

[2] APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

[3] China: New Challenges Ahead: How To Comply With Cross-Border Data Transfer Regulation In China

[4] Korean data law amendments pose new constraints for cross-border online services and data flows

[5] Data localization

      • [GDPR]: General Data Protection Regulation
      • [CBDTs]: Cross-border Data Transfers
      • [EU]: European Union
      • [APEC]: Asia-Pacific Economic Cooperation
      • [APEC CBPRs]: APEC Cross-Border Privacy Rules
      • [USMCA]: United States-Mexico-Canada Agreement

Enforcing Antitrust Laws in Tech

Enforcing Antitrust Laws in Tech
By Anonymous | March 14, 2019

Much of the contemporary focus on antitrust regulation in the technology sector was first sparked by the Justice Department’s case against Microsoft in 2000. The case raised questions about whether our legal framework was strong enough to oversee the quickly growing and evolving tech industry. Because technology companies are so different from the companies involved in the first antitrust cases of the early 20th century, some measures and approaches have become inadequate for their proper regulation. In the long run, this may have concerning implications for consumers and market participants. So, how are large tech companies different from the large monopolies of the past?

Intellectual Property

The first major difference is that tech companies’ most valuable assets are frequently intellectual property rather than physical property. Given its intangible nature, intellectual property can be difficult to value. Regulators then have a hard time comparing one firm to another or calculating the market metrics that inform decisions about pursuing a firm for antitrust violations. In the eyes of regulators, however, intellectual property is just another form of property.

Network Effects

Additionally, many technology companies depend on network effects for their services to be viable. This means that competitors of a certain service are competing for the entirety of the market instead of just a share of it. In other words, a traditional firm may sell a product and a consumer will buy that product without considering what other consumers do. However, in the example of a social media company, a potential user will consider signing up only if their friends are also using the service.

Dynamic Markets

Identifying substitute products can also be a complex proposition for antitrust regulators in the tech sector. Many distinct services can appear to fulfill different needs in a market but, in a matter of one update, can suddenly be competing directly. So, while it is easy to discern that Coke and Pepsi are substitute products and competitors, services like Instagram and Snapchat are not so clear-cut. These two social media companies started out offering different services, and their offerings have only recently begun to overlap. When studying antitrust cases in the technology sector, regulators must make careful case-by-case analyses and tease out the details of each market.

How to measure effects

Lastly, judges have evaluated recent antitrust claims by considering whether a company’s practices have raised consumer prices. This means some tech companies have been permitted to grow unchecked by regulators because many of the services they offer are free to consumers.

Implications for individuals and market participants

As companies become large technology conglomerates, they increase their ability to collect larger amounts of data from different aspects of our lives. A company like Alphabet, for example, has information from a user’s email, location data from their cellular phone, and browsing data collected through Chrome. This increased data collection leaves consumers exposed to potential breaches by the companies, and to violations of privacy.

Consolidation has also made it difficult for smaller startups to enter the market and innovate by creating new services. Often, the choice these smaller firms face is to either accept acquisition by a much larger firm or have their service cloned by the larger firm with more resources. In markets that firms themselves have created, like advertising platforms, these large companies might be inclined to give unfair special pricing to certain businesses. In addition, vertically integrated tech companies may prioritize their own products within other products.

There is still much research left to be done on the effects of large tech companies on consumers, businesses and the markets in which they participate. What is clear is that antitrust regulators need a different approach to overseeing the industry. As we continue to have conversations about the ideal way to regulate large technology companies, we should consider different metrics than we have used in the past to measure their potential negative effects.

Works Cited

  • Baker, Jonathan B. “Can Antitrust Keep Up?: Competition policy in high-tech markets” The Brookings Institution, https://www.brookings.edu/articles/can-antitrust-keep-up-competition-policy-in-high-tech-markets, (December 1, 2001).
  • Finley, Klint. “Legal scholar Tim Wu says the US must enforce antitrust laws”, Wired Magazine, https://www.wired.com/story/tim-wu-says-us-must-enforce-antitrust-laws, (March 11, 2019).
  • Hart, David M. “Antitrust and Technological Innovation.” Issues in Science and Technology 15, no. 2 (Winter 1999).
  • “2018 Antitrust and Competition Conference.” Stigler Center for the Economy and the State, https://research.chicagobooth.edu/stigler/events/single-events/antitrust-competition-conference-digital-platforms-concentration, (April 19-20, 2018).

Image source
Image 1: LA Times
Infographic: https://howmuch.net/articles/100-years-of-americas-top-10-companies

Trust me, I’m a Company

Trust me, I’m a Company
By Mumin Khan | March 12, 2019

With great economic prosperity comes great consequences. Americans hold the tenets of capitalism and growth closely, as a fundamental part of their identity. The rewards America reaps from its focus on profits are undeniable: America is the single largest economy in the world by a number of metrics. Americans enjoy some of the best wages in the world, have access to the best post-secondary education, and enjoy a reasonably high quality of life. Simply earning over $32,400 per year puts you in the top 1% globally; the median income for U.S. households in 2017 was $61,372. But the same profit addiction that has taken Americans to such highs has also brought them new lows. The legislative climate in the United States has all but guaranteed that corporations can play fast and loose with the lives of consumers and face little to no consequence when things go south. The larger the corporation, the more it can get away with.

On September 7th, 2017, Equifax, one of the largest American credit bureaus, which collects information on an estimated 820+ million people and 91+ million businesses, disclosed that it had been hacked several months before. Over 143 million people had sensitive information, including names, addresses, dates of birth, Social Security numbers, and driver’s license numbers, stolen from Equifax over a period of 76 days. Obtaining some or all of this information would allow a malicious actor to assume someone’s identity for financial gain and wreak havoc on their life.


Pictured above, Revenue of the largest credit bureaus in millions of dollars.

The method of intrusion was a known vulnerability in Apache Struts, a web technology that powered Equifax’s dispute portal. This vulnerability, dubbed CVE-2017-5638, was identified in Equifax’s systems by the United States Computer Emergency Readiness Team in March 2017 and disclosed to the company. Internally, Equifax circulated the information using an email list of system administrators. Unfortunately, the list was out of date, and certain key administrators never got the notice to update Struts. To make matters worse, an expired certificate allowed the hackers to bypass automatic malicious-activity detection software throughout the 76-day breach. Once inside, the hackers found that individual databases were not isolated from one another, which allowed them to access more personal information. In the process, they gained access to a database of unencrypted credentials, which then allowed them to query even more user information. More information can be found in the GAO report: Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach.
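For contrast, the standard defense against the unencrypted-credentials failure described above is to store only salted, deliberately slow hashes of passwords, so a stolen database cannot be queried for reusable credentials. This sketch uses Python’s standard library; the function names and iteration count are illustrative, not drawn from Equifax’s actual systems:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, deliberately slow hash (PBKDF2-HMAC-SHA256).

    A unique random salt per credential defeats precomputed
    (rainbow-table) attacks; 100,000 iterations slows brute force.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

Had credentials been stored this way, the attackers’ stolen database would have yielded only salts and digests rather than passwords they could immediately reuse to query more data.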

Given the facts of the hack, it’s hard to view Equifax as a victim alongside the 143 million people who had their information stolen. Rather, its systematic failure to protect the private data of people who often have no say in what Equifax collects about them makes it an accomplice to the hack. Yet nearly two years after the hack, no charges have been filed against a single Equifax employee. No fines have been levied on the corporation. No legislative action has been taken to audit and monitor Equifax in the future. In fact, the opposite happened: Congress passed Senate Bill 2155, which shielded Equifax from class-action lawsuits.

Senate Banking Committee member Sen. Mike Crapo, R-Idaho, questions Wells Fargo Chief Executive Officer John Stumpf on Capitol Hill in Washington, Tuesday, Sept. 20, 2016, during the committee’s hearing. (AP Photo/Susan Walsh)

Pictured above, Senator Mike Crapo, sponsor of S. 2155

Equifax abdicated its responsibility to guard the data that it collects on people. Why aren’t there regulatory requirements on private companies that collect extremely sensitive personal information on American citizens? Where are our institutions that hold these organizations accountable? Someone will always pay for data breaches like this one. As of now, only the American consumer has paid. Until we start guaranteeing each American’s right to the protection of their data, these types of incidents will continue to happen.