
Is the GDPR’s Bark Bigger than its Bite?
by Zach Day on 10/21/2018

The landmark EU regulation, formally called the General Data Protection Regulation or GDPR, took effect on May 25, 2018. Among other protections, GDPR grants “data subjects” a bundle of new rights and places increased obligations on companies that collect and use their data. Firms were given two years’ notice to implement the changes that would bring them into compliance by May 2018.


Image Credit: https://www.itrw.net/2018/03/22/what-you-need-to-know-about-general-data-protection-regulation-gdpr/

I don’t think I’m reaching too far to make this claim: some for-profit enterprises won’t do the right thing just because it’s the right thing, especially when the right thing is costly. Do the EU member countries’ respective Data Protection Authorities, also called DPAs, have enforcement tools that are powerful enough to motivate firms to invest in the systems and processes required for compliance?

Let’s compare the two primary enforcement consequences: monetary fines and bad press coverage.

Monetary Fines

When the UK Information Commissioner’s Office released its findings on Facebook’s role in the Cambridge Analytica scandal, the fine was capped at 500,000 pounds, or about $661,000. This is because Facebook’s transgressions occurred before GDPR took effect and were therefore subject to the UK Data Protection Act of 1998, the UK’s GDPR precursor, which specifies a maximum administrative fine of 500,000 pounds. How painful do you think a sub-million-dollar fine is for a company that generated $40B of revenue in 2017?

GDPR vastly increases the potential monetary fine to a maximum of €20M or 4% of the company’s global annual turnover, whichever is greater. For Facebook, this would have amounted to a fine of roughly $1.6B. That’s more like it.
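As a back-of-the-envelope illustration, here is a minimal sketch of that calculation in Python. The €20M floor and 4% rate come from the regulation as described above; the Facebook figure uses the ~$40B 2017 revenue cited earlier, and for simplicity the sketch ignores the euro/dollar conversion:

```python
def gdpr_max_fine(annual_turnover: float, floor: float = 20_000_000) -> float:
    """Maximum GDPR administrative fine: the greater of a fixed floor
    (EUR 20 million) or 4% of global annual turnover."""
    return max(floor, 0.04 * annual_turnover)

# Facebook's 2017 revenue was roughly $40B; 4% of that is about $1.6B,
# dwarfing the 500,000-pound cap under the old UK Data Protection Act.
print(f"${gdpr_max_fine(40_000_000_000) / 1e9:.1f}B")  # $1.6B
```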

But how effectively can EU countries enforce the GDPR? GDPR enforcement occurs at the national level, with each member country possessing its own Data Protection Authority. Each nation’s DPA has full enforcement discretion. Because of this, there will inevitably be variation in enforcement trends from country to country. Countries like Germany, with a strong cultural value of protecting individual privacy, may enforce the GDPR with far more gusto than a country like Malta or Cyprus.

Monetary fines are not going to be the go-to tool for every enforcement case brought under the GDPR. DPAs have vast investigative powers, such as carrying out audits, obtaining access to stored personal data, and accessing the facilities of data controllers and processors, as well as corrective powers like issuing warnings, reprimands, orders, and bans on processing. These methods will likely be used with much more frequency. The first few cases will be anomalies, though, since (a) media outlets are champing at the bit to report on the first enforcement actions taken under the GDPR and (b) DPAs will be trying to send a message.

PR Damage

What do you think stung Facebook more: a $661,000 fine, or the front page of every international media outlet running the story for hundreds of millions of readers to see (imagine for how many this was the last straw that caused them to deactivate their Facebook accounts)? I would argue that the most powerful tool in the GDPR regulator’s toolbox is the bad press associated with a violation brought against a company, especially in the early years of the regulation while the topic is still fraught.

Mark Zuckerberg testifying before a joint hearing of the Senate Judiciary and Senate Commerce Committees, April 10, 2018. Image Credit: https://variety.com/2018/digital/news/zuckerberg-congress-testimony-1202749461/

A report published in July by TrustArc, estimating GDPR compliance rates across the US, UK, and EU, noted that 57% of firms are motivated to comply with GDPR by ‘customer satisfaction’, whereas only 39% are motivated by fines. Of course, a small business with 100 employees in a suburb of London is chiefly concerned with a potential €20 million fine: it would simply be out of business. On the other hand, large Silicon Valley-based tech firms, with armies of experienced attorneys (Facebook’s attorneys have plenty of litigation experience in this area by now), have much more to lose from more bad press than from a fine of any amount allowed under GDPR.

Path Forward

Firms are going to pursue any path that leads to maximum revenue growth and profitability, even if it means operating in ethical/legal grey areas. If GDPR regulators plan to effectively motivate compliance, they need to focus on the most sensitive pressure points. For some companies, it’s the threat of monetary penalty. For the tech behemoths, it’s the threat of another negative front-page headline. Regulators will be at a strategic disadvantage if they don’t acknowledge this fact and master their PR strategies.

Ethical Issues of the Healthcare Internet of Things
By Osmar Coronel  | October 21, 2018


Tracking vital signs on the fly

Very likely you are already using an Internet of Things (IoT) product for your healthcare.


Our new connected world

IoT devices are small computing objects that constantly collect information and send it to the cloud, or that switch something on or off automatically. This post focuses on the ethical risks of IoT devices applied to improving your health.

IoT devices are predicted to proliferate in the healthcare industry because of the many benefits they provide. However, the Fair Information Practice Principles (FIPPs) might not be able to protect consumers against all the new ethical risks associated with upcoming healthcare IoT applications.

IoT devices have several applications in healthcare. For instance, blood-pressure cuffs connect to mobile apps that track and monitor blood pressure, and insulin pumps do the same for insulin delivery. The power of this technology allows people to take control of their health and to be more engaged with it: a patient with a connected insulin pump has more control over their blood glucose levels, and therefore over their diabetes. Healthcare IoT devices can monitor and collect information such as heart rate and skin temperature, and the data captured from the consumer can be transmitted, stored, and analyzed. This creates opportunities for research.

The use of IoT in healthcare is expanding. According to a MarketWatch article, the healthcare IoT market is expected to be worth $158 billion by 2022. The 2018 Consumer Electronics Show (CES) showcased several companies’ IoT products created to diagnose, monitor, and treat illnesses.

Under the lens of the Federal Trade Commission (“FTC”), the FIPPs focus on notice, access, accuracy, data minimization, security, and accountability. The most relevant recommendations for IoT are security, data minimization, notice, and choice.

Following the FIPPs’ security recommendation, companies should implement “security by design” from the very beginning. They should also train their employees and retain service providers that are able to maintain security in their services.


New Healthcare

Another risk of healthcare IoT is that devices collect large amounts of consumer data over long periods. The FIPPs’ data-minimization principle proposes that companies collect only the data they need and keep it only for a limited time. Companies should develop and apply best practices and policies that impose reasonable limits on the collection and retention of consumer data. Of the principles, security and data minimization come with the most explicit initiatives for minimizing the ethical risks of IoT.
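To make data minimization concrete, here is a minimal sketch in Python. The field names and the 90-day retention window are illustrative assumptions, not FIPPs requirements:

```python
from datetime import datetime, timedelta, timezone

# Illustrative allow-list: collect only the fields needed for the stated purpose.
ALLOWED_FIELDS = {"device_id", "heart_rate", "timestamp"}
RETENTION = timedelta(days=90)  # hypothetical retention window

def minimize(reading: dict) -> dict:
    """Drop any fields outside the allow-list before storing a reading."""
    return {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}

def purge_expired(readings: list) -> list:
    """Discard readings (with timezone-aware timestamps) older than the window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in readings if r["timestamp"] >= cutoff]
```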

On the other hand, notice and choice could be a challenge. In general, there is a high risk that IoT companies will not provide notice or choice to the customer. Doing so is hard because IoT devices are embedded in everyday life and typically lack a user interface. Furthermore, some people think that the benefits of IoT devices outweigh the cost of not giving the consumer notice and choice.

It is challenging to provide a choice when there is no user interface. However, according to the FIPPs, there are still suitable alternatives, such as video tutorials and QR codes printed on the devices. Also, in many cases the data use might fall within consumers’ expectations, which means that not every data collection requires the consumer’s consent. Companies should implement opt-in choices at the point of sale, when the consumer is acquiring the device, in easy-to-understand language.

New technological advances in healthcare IoT devices offer a large number of benefits, and these devices will expand considerably in the healthcare sector. Nonetheless, they will require careful implementation: the expansion of healthcare IoT will come with a surge of new ethical problems and conflicts.


Unknown Knowns
by Anonymous on 10/21/2018


Image Credit: https://www.azquotes.com/quote/254214
Donald Rumsfeld during Department of Defense News Briefing, archive.defense.gov. February 12, 2002.

The taxonomy of knowledge laid out by Rumsfeld in his much-quoted news briefing conspicuously omits a fourth category: unknown knowns. In his critique of Rumsfeld’s analysis, the philosopher Slavoj Žižek defines the unknown knowns as “the disavowed beliefs and suppositions we are not even aware of adhering to ourselves, but which nonetheless determine our acts and feelings.” While this may seem like the realm of psychoanalysis, it’s a term that could also be applied to two of the most important topics in machine learning today: bias and interpretability.

The battle against bias, especially illegal biases that discriminate against protected classes, is a strong focus for both academia and industry. Simply testing an algorithm’s outputs for different categories of people for statistical differences can reveal things about the decision-making process that were previously unknown, flipping them from the “unknown known” state to “known known.” More advanced interpretability tools, like LIME, are able to reveal even more subtle relationships between inputs and outputs.
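A minimal sketch of the kind of output test described above, using a chi-squared test of independence. The group labels and outcome counts here are hypothetical, invented for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical approval/denial counts for two groups of applicants.
#              approved  denied
contingency = [[480,      120],   # group A
               [390,      210]]   # group B

chi2, p_value, dof, expected = chi2_contingency(contingency)
if p_value < 0.05:
    print(f"Outputs differ across groups (p={p_value:.4f}): a new 'known known'.")
else:
    print(f"No statistically detectable difference (p={p_value:.4f}).")
```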

While swaths of “unknown knowns” are being converted to “known knowns” by new techniques and attention, there is still a huge amount we will miss forever. Explicitly called-out protected classes are becoming easier to measure, but it’s rare to check all possible intersections of protected classes. For example, there may be no measurable bias in some task when comparing across gender or race separately, yet bias may appear when looking at the combinations. The fundamental nature of intersections is that their populations become smaller as more dimensions are considered, so the statistical tests become less powerful and it’s harder for automated tools to identify bias with certainty.
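Here is a sketch of why intersections strain statistical power: enumerating attribute combinations shrinks each subgroup, and subgroups below a minimum sample size can’t be tested reliably. The attributes, values, and threshold are all illustrative assumptions:

```python
from itertools import product
from collections import Counter

# Illustrative records with two protected attributes.
records = [
    {"gender": "F", "race": "A"}, {"gender": "F", "race": "B"},
    {"gender": "M", "race": "A"}, {"gender": "M", "race": "B"},
    # ... imagine thousands more
]

counts = Counter((r["gender"], r["race"]) for r in records)
MIN_N = 30  # hypothetical minimum subgroup size for a meaningful test
for combo in product(["F", "M"], ["A", "B"]):
    n = counts[combo]
    status = "testable" if n >= MIN_N else "too small to test reliably"
    print(combo, n, status)
```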


Image Credit: https://www.ywboston.org/2017/03/what-is-intersectionality-and-what-does-it-have-to-do-with-me/

There are also many sub-classes that we don’t even know to look for bias against, where we have to rely on chance for discovery. For example, in 2012 Target was called out for predicting pregnancies based on shopping patterns. Its marketing analytics team had a hypothesis that they could target pregnant women and made the explicit choice to single out this population, but with modern unsupervised learning techniques it could just as easily have been an automatically deployed campaign where no human had ever seen the description of the target audience.

“Pregnant women” as a category is easy to describe, and concerns about such targeting readily stir up controversy and change corporate behaviour, but more niche groups that algorithms may be biased against may never be noticed. It’s also troubling that unsupervised learning algorithms may discover classes that have no obvious description yet, but that would be controversial if given a name.

So what can be done? It may seem like a contradiction to try to address unknown knowns, given that they’re unknown, but new interpretability tools are changing what can be known. Practitioners could also dedicate more of their model-validation time to exploring the full set of combinations of protected classes, rooting out the subtle biases that might be missed by analyzing each category separately. A less technical but more ambitious solution is for organizations and practitioners to start sharing the biases they’ve discovered in their models and to contribute to some sort of central repository that others can learn from, as sketched below.
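One possible shape for an entry in such a repository, as a purely speculative sketch; every field name here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class BiasFinding:
    """A hypothetical record format for sharing discovered model biases."""
    model_domain: str      # e.g. "credit scoring"
    affected_groups: list  # e.g. ["gender=F", "race=B"] for an intersection
    metric: str            # e.g. "false positive rate gap"
    effect_size: float
    detection_method: str  # e.g. "chi-squared test", "LIME inspection"
    notes: str = ""

finding = BiasFinding(
    model_domain="ad targeting",
    affected_groups=["gender=F", "age=25-34"],
    metric="exposure rate gap",
    effect_size=0.12,
    detection_method="manual audit",
)
```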

GDPR Will Transform Insurance Industry’s Business Model
By Amit Tyagi | October 21, 2018

The European Union-wide General Data Protection Regulation, or GDPR, came into force on May 25, 2018, with significant penalties for non-compliance. In one sweep, GDPR harmonizes data protection rules across the EU and gives individuals greater rights over how their data are used. It will radically reshape how companies can collect, use, and store personal information, giving people the right to know how their data are used and to decide whether they are shared or deleted. Companies face fines of up to 4 per cent of global turnover or €20m, whichever is greater.

To comply with GDPR, companies across various industries are strengthening their data usage and protection policy and procedures, revamping old IT systems to ensure that they have the functionality to comply with GDPR requirements, and reaching out to customers to get required consents.

However, GDPR will also require a fundamental rethink of business models for some industries, especially those that rely heavily on personal data to make business and pricing decisions. A case in point is the insurance industry. Insurers manage and underwrite risks; collecting, storing, processing, and analyzing data is central to their business model. The data insurers collect go beyond personal information: they collect sensitive information such as health records, genetic history of illnesses, criminal records, accident-related information, and much more.

GDPR is going to affect insurance companies in many ways. Start with pricing. Setting the right price for underwriting risks relies heavily on data. With GDPR’s data protection and usage restrictions, insurers will have to revisit their pricing models. This may have an inflationary effect on insurance prices: not a good thing for consumers. It will be further compounded by ‘data minimization’, a core GDPR principle that limits the amount of data companies can lawfully collect.

Insurance companies typically store their data for long periods, which aids them in pricing analytics and customer segmentation. With the right to erasure, customers can ask insurers to erase their personal data and claims history. These requests are most likely to come from customers with an unfavorable claims history, leading to adverse selection due to information asymmetry.

Insurance fraud is another area that will be impacted by GDPR. Insurance companies protect themselves from fraudulent claims by analyzing myriad data points, including criminal convictions. With limitations on the types of data they are able to lawfully use, fraud may well spike.

Insurance companies will also have to rethink their internal processes and IT systems, which were built for a pre-GDPR era. Most decisions in the insurance industry are automated, including, inter alia, whether to issue a policy, how much premium to charge, and whether to process a claim fully or partially. Now, with GDPR, customers can lawfully request human intervention in decision-making.

GDPR gives customers the right to receive the personal data an insurer holds about them, or to have it transmitted to another insurer, in a structured, commonly used, and machine-readable format. This will be a challenge, as insurers will have to produce interoperable data formats from disparate legacy IT systems. Further, this has to be done free of charge. It will likely lead to lower profitability as competition among insurers increases.
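A minimal sketch of what the portability requirement amounts to in practice: assembling one customer’s records from source systems into a single structured, machine-readable document. The field names and JSON layout are illustrative assumptions, not an industry standard:

```python
import json

def export_customer_data(policy_record: dict, claims: list) -> str:
    """Bundle one customer's data into a machine-readable JSON document."""
    portable = {
        "format_version": "1.0",  # hypothetical versioning of the export format
        "policy": policy_record,
        "claims_history": claims,
    }
    return json.dumps(portable, indent=2, default=str)

print(export_customer_data(
    {"policy_id": "P-123", "product": "motor"},
    [{"claim_id": "C-9", "amount": 1200.0}],
))
```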

GDPR mandates that data be retained only as long as necessary for the purpose for which it was collected, after which it must be deleted or anonymized; if stored for a longer duration, the data should be pseudonymized. This will require significant system changes, a huge challenge for insurance companies because they rely on disparate systems and data sources, all of which will have to change to meet GDPR requirements.
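One common way to pseudonymize is a keyed hash, so records stay linkable internally without exposing the raw identifier. A minimal sketch; the key handling here is deliberately simplified, and a real deployment would need proper key management:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative only

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).
    The same input always maps to the same pseudonym, preserving joins
    across systems while hiding the original identifier."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": pseudonymize("AB-48291"), "claim_amount": 1200.0}
```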

Though insurers may be acutely impacted by GDPR, their path to compliance should follow a disciplined approach: revisiting systems and processes to assess readiness for the regulation, and investing to fill the gaps. Some changes may be big, such as data retention and privacy by design, while others may be more straightforward, such as providing privacy notices. In all cases, effective change management is the key.