Does cashless society discriminate against the poor and elderly?
By Adara Liao | October 13, 2019

The increasing prevalence of cashless payments, including mobile phone payments, digital payments and credit cards, means that cash circulation is dropping worldwide. Some retailers do not accept cash or incentivize customers to pay using smartphones. In major cities in China, over 90 percent of people use WeChat and Alipay as their primary payment method, ahead of cash. In the US, cash accounts for just 16% of total transactions and is expected to decline further as card growth accelerates.

The growing use of cashless payment puts financially disenfranchised populations at a disadvantage, as they cannot participate in services that require cashless payments and pay higher costs when transacting in cash.

Two populations less likely to participate in cashless payments are the elderly and the poor. The elderly are less able to manage cashless payment methods, especially without transition support. If local authorities or utility companies do not support cash, the elderly cannot collect their government subsidies and face hurdles in paying for basic services.

In some countries, retailers may charge more for accepting cash or refuse cash payments altogether. In the US, many restaurants refuse to accept cash due to a combination of incentives from credit card networks like Visa and Mastercard and a desire to create a frictionless experience for high-value customers. Those who do not qualify for cards, or cannot afford mobile payment methods, are excluded from these retailers.

In addition to being locked out of retailers, low-income people who do participate in cashless payment options face stiff overdraft fees on their accounts and are ineligible for credit cards that reward wealthier users for money spent. People who rely on cash end up subsidizing wealthier people who use credit cards.

About a third of the US population is underbanked, meaning they go without regular use of traditional money management services such as debit cards and checking accounts. Traditionally, a bank account has been required to participate in digital payment services. Although startups now offer low-fee accounts that can replace traditional bank accounts, they lack the brick-and-mortar presence of traditional banks and require digital literacy, which still excludes the poorest segments of the underbanked. Thus there remains a population that requires cash and needs to be accommodated by retailers and companies.

Governments can protect financial inclusion for people who are unable to participate digitally. In the US, the Cashless Retailers Prohibition Act of 2018 would make it illegal for restaurants and retailers to refuse cash or to charge customers a different price depending on the type of payment they use. Education can also play a part in helping the elderly adopt digital payments. In Singapore, the government holds classes to help its senior population learn how to use digital payments, such as paying in store using QR codes and topping up travel cards with debit cards [1].

It is not just tech-disadvantaged populations who are vulnerable to discriminatory actions in a cashless society. Fintech companies have access to a broader set of information on a customer’s financial habits and social network. Such information gives fintech companies the power to make lending decisions, or to partner with credit-scoring firms or lenders, in ways that can be discriminatory.

With more information collected by fintech companies, consumers’ preferences can be shared with third parties or sold for profit. To the extent that fintech companies are susceptible to hacking or undisciplined sharing of consumers’ information with third parties, the privacy of consumers is compromised. Thus, sensitive information about consumers should be regulated, and consumers must be clearly informed about how their information will be used and by whom.

As society moves towards cashless payments, groups without the financial means or the ability to adopt cashless payments risk being put at a disadvantage. Possessing a high-end smartphone, qualifying for debit cards and having the technological know-how to operate cashless payment methods safely are barriers to entry for disenfranchised groups. To ensure that all groups in society are able to fully participate in the changing landscape, governments can introduce laws to protect cash acceptance and be mindful of the impact on disenfranchised groups when encouraging the fintech industry.

References
[1] https://www.raconteur.net/finance/financial-inclusion-cashless-society
[2] https://www.theguardian.com/business/2018/jul/15/cashless-ban-washington-act-discrimination
[3] https://www.brookings.edu/wp-content/uploads/2019/06/ES_20190614_Klein_ChinaPayments_2.pdf
[4] https://www.brookings.edu/opinions/americas-poor-subsidize-wealthier-consumers-in-a-vicious-income-inequality-cycle/

Chart Source
[1] https://www.statista.com/statistics/568523/preferred-payment-methods-usa/
[2] https://www.emarketer.com/content/four-mobile-payment-trends-to-watch-for-in-2019

Robodebt – The Australian Government Debt Recovery scheme is set to expand.
By Stephanie Mather | October 4, 2019

Imagine having your hard-earned wages ripped out of your tax return and held hostage while you try to prove the obvious flaws in an automated system. To add to this, your only form of recourse is a phone line notorious for its wait times. This is the reality faced by thousands of welfare recipients in Australia, trapped in a nightmare dubbed RoboDebt. Subject to two parliamentary inquiries and a Federal Court challenge, the ethics of data-matching to detect welfare fraud are in the spotlight as the Australian government continues to expand its reach into its citizens’ data.

The Canberra Times, 3 January 2017

In the Beginning
In early 2017, Australian news outlets exploded over the government’s new automated data-matching capabilities, with thousands of letters sent to welfare recipients demanding information or a debt would be raised against them. On average, the letters claimed a debt of AU$1,919, an extraordinary amount for an unemployed person who may only be receiving AU$535.60 a fortnight in welfare. What was even more galling, a large number of these debts were due to a known issue in the data-matching algorithm: it does not correctly account for intermittent earnings across the year.

Most people who receive unemployment benefits have casual jobs whilst they look for full-time employment. Benefit entitlement rates are adjusted from fortnightly earnings, but the Australian Tax Office (ATO) income data provided to Centrelink was simply averaged across the financial year. As a result, if a person had earned a high income (or was employed full-time) for part of the year, the RoboDebt system automatically assumed this income was spread evenly across every fortnight. This data mismatch is the most common cause of debts being raised incorrectly. If the recipient did not respond within the 28-day window or failed to provide the correct timing of wage earnings, the debt was passed to a debt collection agency (with an additional 10% recovery fee). In some circumstances it was then taken directly from their next tax return by the ATO.
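
To see how averaging manufactures debts, here is a minimal sketch of the arithmetic. The benefit rate is the fortnightly figure quoted above, but the income-test free area, taper rate and work pattern are made-up illustrations, not actual Centrelink rules:

```python
# Illustrative sketch only: the benefit rate is the fortnightly figure cited
# above, but the income-test "free area", taper rate and work pattern are
# made-up parameters, not actual Centrelink rules.

FORTNIGHTS_IN_YEAR = 26
BENEFIT = 535.60        # fortnightly payment cited in the article
FREE_AREA = 100.00      # hypothetical income allowed before the benefit is reduced
TAPER = 0.50            # hypothetical reduction per dollar earned over the free area

def fortnightly_entitlement(income: float) -> float:
    """Benefit payable for one fortnight under a simplified income test."""
    reduction = max(0.0, income - FREE_AREA) * TAPER
    return max(0.0, BENEFIT - reduction)

# Scenario: full-time work for 6 fortnights earning $2,000 each, then 20
# fortnights unemployed, correctly reporting $0 income while on benefits.
annual_wages = 2000.00 * 6
fortnights_on_benefits = 20

# What was correctly paid: the full benefit for each fortnight with no income.
paid = fortnightly_entitlement(0.0) * fortnights_on_benefits

# What averaging assumes: the annual ATO figure spread over every fortnight,
# as if the person earned ~$461 in each fortnight they were on benefits.
averaged_income = annual_wages / FORTNIGHTS_IN_YEAR
assessed = fortnightly_entitlement(averaged_income) * fortnights_on_benefits

print(f"Averaged fortnightly income: ${averaged_income:,.2f}")
print(f"Spurious debt raised: ${paid - assessed:,.2f}")   # roughly $3,615 owed to no one
```

Under the older process described below, the department would first have obtained payslips showing that the high-earning fortnights did not overlap with the fortnights on benefits, and no debt would have been raised at all.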

Burden of Proof
Data matching between the ATO and Centrelink is not new: sharing agreements have existed since the 1990s. However, the onus used to be on the department to prove debts before they were raised; the department has coercive powers to obtain payslips from employers showing the dates of earnings. Only when a person was proven to have received both benefits and wages concurrently was a debt pursued. RoboDebt has pushed the burden of proof down to the individual, who likely has a limited understanding of the potential impact of the averaged data, which can go back up to 7 years. For vulnerable people on welfare, collating this data in 28 days can be formidable, and they are unlikely to be aware of their appeal avenues to obtain their full records, up to and including the department collecting payslips from employers on their behalf.

There have been some improvements in the system: just this month Centrelink released the new ‘Check and Update your Income’ process, which provides much more detail and transparency to the user as they update their data, including real-time updates on the debt incurred. However, as raised by the Ombudsman, it still fails to adequately explain that “DHS may use Australian Taxation Office (ATO) data to calculate any debt, which may mean they have to pay back more than they need to” (p. 31, Ombudsman). DHS is still taking an unfair advantage over vulnerable people who may not understand the implications of averaging. A better approach would be to match real-time wage data with the ATO, stopping overpayments before they occur and limiting the recovery of small debts.

Because of the many outdated postal addresses in the database, users would be oblivious to the debt until they were contacted by a debt collection agency or the money was taken from their tax return. The original program development focused on correctly matching the two datasets, without regard to the necessary communication piece; if the recipient did not respond within 28 days of the initial contact, the debt was confirmed and passed to a collection agency. This highlights the importance of thinking through all the data required for a just implementation of an automated debt recovery system, as well as the ability of subjects to understand the appeals process. DHS did not demonstrate adequate effort to make contact; only after the first Senate inquiry were debt notices tracked via registered mail. By not tracking the letters, the department took advantage of the 28-day window to deem the debts valid and apply collection fees. If the original system design had considered how to create just and fair access to the appeals process, this would not have been overlooked.

Cost Benefit Analysis to the Australian People
Robodebt was touted by the government as a way to catch the ‘dole-bludgers’, those in society making false representations to get access to additional public funds, benefiting all Australians through the resulting savings. But the large false-positive error rate has meant that the mental harm and stress caused by the debts have been felt by many, and they were a primary driver for the Senate inquiries. The 2017 inquiry raised issues with the fairness, transparency and usability of the RoboDebt system. Although some of the issues around customer communication have now been addressed, a second inquiry into Centrelink’s compliance program was opened in 2019 and a separate Federal Court challenge is pending.

With all these challenges against the system, it would be right to assume that it has at least been profitable for the Budget, given all the debts recovered? Wrong! In fact, the system has cost almost as much to administer as it has recovered to date. The balance of cost, in both monetary and societal terms, is in the terms of reference for the 2019 Senate inquiry.


Robodebt Centrelink compliance program cost AU$600m to recoup AU$785m so far

Who’s Next?
The Department of Human Services has just released a Matching of Centrelink and Medicare Data Protocol. Medicare is the universal healthcare system in Australia, and the matching appears to cover all non-medication services. According to the protocol, “the data-matching program is designed to detect false, manipulated and assumed identities used by customers in submitting multiple claims. It is also designed to detect false information provided by customers relating to employment, medical eligibility and relationship circumstances” (DHS, July 2019). The public should demand transparency from DHS on how their medical information will be used so they can judge whether the expansion of the RoboDebt system is beneficial to the Australian people. The current sustained criticism suggests otherwise.

Innocent Until Proven Guilty? Not Here.
By Jeffrey Braun | October 4, 2019

The Chicago Police recently launched a Gun Offender Dashboard website that displays summary statistics on the incarceration status of people charged with gun offenses. It shows the number of people arrested and released on bail as well as the number of people arrested but not released on bail. The latter group remains in jail, awaiting trial.

As their motivation for creating the site, CPD leadership states they are concerned about the number of accused individuals who are released on bail shortly after being charged. They allege that offenders are released too easily and too frequently, only to commit crimes again once they are freed. By publishing the data, their hope is to increase public awareness of this problem and thereby encourage the county courts to be less willing to release alleged offenders.

But the dashboard does not stop at showing aggregate statistics. On a separate tab, it also reveals the names of all the individuals charged with gun offenses. Clicking on a name in the list allows the user to drill down into the details of the arrest. It also allows the user to see each individual’s prior criminal convictions, if any.

The site presents a serious ethical issue. By disclosing the names of individuals charged with gun offenses it can effectively deprive them of a fundamental right — their right to be presumed innocent until proven guilty, a right accorded to people charged with crimes in the US and in many other countries.

How can it deprive accused individuals of the right of presumed innocence? The people listed on the site have only been charged with a gun violation. They have not been convicted of a gun violation. In the minds of many people who view the site, though, the distinction between being charged and being convicted may not be particularly meaningful. Many viewers will look differently upon those listed on the site simply because they appear there. This can in turn result in harmful consequences for alleged offenders, including loss of job opportunities, housing, and other benefits they would otherwise enjoy if their names did not appear on the site.

In its own defense, CPD states that the data is publicly available anyway. Anyone can go on to CPD’s site and view data on arrests the police have made, including gun offenses. This is true. However, the dashboard creates a whole new paradigm for accessing and consuming the data. It assembles and publishes it in such a way that it is easy for casual site visitors to see a long list of alleged offenders along with some very personal history that the accused individuals would likely prefer not be shared in such a public manner. This is a far cry from the research-like paradigm of looking up data on individual criminal cases or alleged offenders. It is a mass publication of data that can cause significant harm to people who have only been charged and not convicted of a crime.


(Names in above screen shot obscured by the author)

We can compare the dashboard’s data access paradigm to the social context of the traditional criminal trial for some additional insight here. In criminal court cases, jurors are admonished that defendants must be presumed innocent until the state has proven them to be guilty. People viewing CPD’s gun offender dashboard, on the other hand, will likely not take to heart any warnings about presumption of innocence while they peruse the site. It is just too tempting when presented with a long list of alleged offenders to assume that some, if not most, are guilty of the crimes. This is only reinforced by the site’s characterization of the accused individuals as “offenders”, a term that implies the accused individuals have already been found guilty.

What feels even more offensive and potentially damaging is the fact that site users can easily view past convictions of those people shown on the list. Here again, a social context comparison is informative. US rules of evidence generally prevent introduction of past criminal convictions as evidence to support the guilt of a person on trial for a new crime (https://www.nolo.com/legal-encyclopedia/evidence-prior-convictions-admissible-against-defendants-who-testify.html). In fact, when an individual on CPD’s Gun Offender dashboard list goes on trial, the jury will likely not be told whether the defendant had committed a crime previously. No such protection is afforded when an individual’s information is viewed on CPD’s site, however. His or her past criminal conviction record is right there, for all to see and use to draw conclusions about the accused individual.

We all yearn for effective ways to decrease gun violence. Seemingly on a daily basis, we watch with horror, sadness and despair as the media report on lives senselessly lost to gun violence. We cannot, though, walk away from our long-held beliefs in the rights of the accused as we search for answers to gun violence. The power and impact of aggregated and easily consumed public data can sometimes deprive the accused of their rights, as has happened here with the CPD gun offender dashboard.

The Chicago Police Department should just publish aggregate data on their gun offender site. It will accomplish their stated goal and it will avoid deprivation of individual rights. At the very least, CPD should replace the term “Offender” on the site with a term that clearly indicates the individuals listed have just been charged, not convicted. The ability to easily click through to data on past convictions must also be disabled.

Our “Public” Private Lives
By Anonymous | October 4, 2019

It used to be that what you did at your previous employer’s, in college, at that New Year’s party a decade ago was all a thing of the past, privy only to your own inner memory (and maybe a photo album or two). But now, in this day and age of Twitter, Snapchat, Instagram, and others, our “public” private lives are not so private anymore.

One danger is that there is a lot out there that we are not even aware of ourselves. Once in a while I will get an email from a service that has changed its Terms of Service, and only then realize that I had an account with them. With some apps and sites, this is not a big deal. With others, like health vaults for instance, not remembering where, when, or why I signed up is more concerning. Questions pop up: what information have I shared with them, what information could I be losing, what information could be leaked?

Some of these accounts I probably have from testing a service out for a day and then abandoning ship; perhaps they are legacy apps from decades ago whose accounts I forgot to close, or worse, maybe I accidentally spun them up without intending to by clicking one too many buttons in an automated onboarding flow.

On the one hand, having these accounts out in public makes me vulnerable to fraud, like account takeover (good thing my name is not super generic, like John Smith!). If my digital presence were audited (for instance, if a future employer did a digital background check), that might result in unintended negative consequences. And what if my “friends” on these networks I never realized existed were involved in infamous pursuits? On the web, not only is your own persona being judged, but also who is in your extended circle of connections.

Regardless, there are two recommendations that I think are vital to at least mitigating any unwanted consequences.

One – Make it very apparent to a user that when signing up for a service they are signing up for terms and policies as well (this might involve a bit more friction than many one-click onboarding flows have at the moment).

Two – Make the fine print less fine and easier to digest for this limited-attention-span, TL;DR, and possibly less educated audience (i.e., mirror the language of web copy and marketing materials, which often cater to the reading level of a fifth grader for optimal comprehensibility).

The Belmont Report talks about how we should provide users with informed consent, which means full information, at an easy-to-comprehend level, given voluntarily. To make consent actually “voluntary”, we should reduce the amount of automation in opting in to consent. That means building in voluntary consent flows, which companies will likely balk at due to the increase in user friction this may cause.

The other issue we need to address is that any changes to terms of service and policies should follow these same rules, as friction-filled as they may be. These updates can’t just be emailed randomly, lost in spam folders, or swiped away in an app; they should force manual opt-in from users, as in the sketch below. This would not just protect users, but also incentivize the companies making these policy changes to follow the same strict protocol they used in coming up with the original policies to begin with.
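
As a minimal sketch of what forced re-consent could look like (the policy versions, field names and flow are hypothetical, not any company's actual implementation), an app could tie consent to a specific policy version and refuse to carry it forward when the terms change:

```python
# Hypothetical sketch: consent is tied to a specific policy version, so a
# terms-of-service update forces a fresh, manual opt-in instead of silently
# carrying the old consent forward. Names and versions are made up.
from dataclasses import dataclass

CURRENT_POLICY_VERSION = "2019-10"

@dataclass
class ConsentRecord:
    user_id: str
    policy_version: str   # version of the terms the user actually agreed to
    opted_in: bool

def needs_reconsent(record: ConsentRecord) -> bool:
    """True if the user must be shown the new terms and opt in again."""
    return (not record.opted_in) or record.policy_version != CURRENT_POLICY_VERSION

# A user who opted in under an older policy is blocked until they re-consent.
alice = ConsentRecord(user_id="alice", policy_version="2018-05", opted_in=True)
if needs_reconsent(alice):
    print("Show the updated terms and require an explicit opt-in to continue.")
```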

In this more and more connected, open world, perhaps as we share more and more, we will care less and less. But for those who do care, let’s maintain transparency through comprehensible policies delivered in a very apparent way, so users can truly keep tabs on their private information.

Lies Against the Machine
By Elle Proust | October 4, 2019

Lie detection machines are a mainstay of crime fiction, where they serve as effective tools for law enforcement. Outside of fiction they are less useful, both because of legal restrictions and because they do not work especially well. As with many other areas of life, Artificial Intelligence (“AI”) has been put forward as a potential solution. Three recent examples: European Dynamics provides data-science-based tools to border agents in Europe; research scientists at Arizona State University have built an AI-based immigration agent called ‘AVATAR’ intended for use at the US/Mexico border; and Converus has developed an AI-based lie detector, ‘EyeDetect’, to screen potential employees. Other than having names that sound like the villain company in a robot sci-fi film, the technology is concerning both because of the lack of transparency about its scientific efficacy and because of the apparently little thought given to ensuring it is applied fairly.

Background

It is human fallibility at discerning lies that has driven the search for scientific methods of doing so. A literature review of over 200 studies in 2003 found that humans could accurately detect whether a statement was true or false only around 54% of the time, barely better than chance. Policing, immigration and other law enforcement would be greatly improved by accurate lie detection. Attempts to measure lying by machine have been around since the early twentieth century and have had varying levels of success through polygraphs, voice detection and even brain scans, though most have been rejected by courts for unreliability. Data science in the field is relatively new – but spreading widely.

How they work

EyeDetect – like the machine in the movie Blade Runner – monitors eye movements to detect lying. AVATAR also uses eye movement but adds voice analytics and facial scanning. Finding a data set to train these AI systems on presents an issue, as in most cases we do not have fully labeled data: if someone gets away with a lie, it is by definition mislabelled, which creates a training problem. Lie detection studies have gotten around this by secretly filming undergraduate students committing dishonest acts and then asking them about it later. The AVATAR deep learning system was in fact trained on recordings of the faces of college students obtained in the same manner. AVATAR’s developers do not disclose how they guarantee that the lying done by students is comparable to people lying at the border. AVATAR claims 75% accuracy, but whether this is measured purely on the college students or at the border is unclear. If it is at the border, how it accurately accounts for false negatives is another question entirely. EyeDetect has at least acknowledged some peer-reviewed assessments showing accuracy tending toward 50%, but how it can ensure this does not happen to customers and potential employees does not appear to be publicised.
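
Even taking the claimed 75% accuracy at face value, simple base-rate arithmetic shows why such a system would flag mostly honest people when actual deception is rare. The numbers below (a 1% deception rate and 100,000 travellers) are assumptions for illustration, not published figures:

```python
# Illustrative base-rate arithmetic (assumed numbers, not AVATAR's own):
# even a detector that is right 75% of the time flags mostly innocent
# travellers when true deception is rare.

accuracy = 0.75            # claimed accuracy, applied to both liars and truth-tellers
lie_rate = 0.01            # assume 1 in 100 travellers is actually deceptive
travellers = 100_000

liars = travellers * lie_rate
truth_tellers = travellers - liars

true_positives = liars * accuracy                  # liars correctly flagged
false_positives = truth_tellers * (1 - accuracy)   # honest travellers wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"Flagged travellers: {true_positives + false_positives:,.0f}")
print(f"Share of flagged travellers who actually lied: {precision:.1%}")
# With these assumptions, roughly 97% of the people the system flags are honest.
```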

Applicability and fairness

AVATAR, at least, is a black-box model whose owners readily acknowledge that they have no idea how the technology makes decisions – this is worrying because we then do not know whether the algorithm is making unbiased decisions. Converus has argued that its lie detector is an improvement on the polygraph because no human can manipulate it. It is certainly true that both employment screening and immigration can be, and likely are, subject to human biases. However, algorithms can be biased too, and arguing otherwise is specious. As an example, Google did not intend to create a racist image identifier, yet no human would ever label a person of African descent as non-human, which the Google algorithm did – removing the human made the situation worse.

In addition, Professor Eubanks, author of Automating Inequality, has argued that algorithms can remove individual bias but “are less good at identifying and addressing bias that is structural and systemic”. Most of the data these algorithms are trained on comes disproportionately from white and affluent Americans – there is no guarantee that the systems will treat other populations fairly. In other settings, such as welfare programs, oversampling of one group of people has led to other groups being identified as outliers simply for being different. Employment and immigration are already difficult for marginalised groups, and we should tread very carefully with anything that could amplify existing issues.

Looking Forward

The United States has at least banned the use of lie detectors in employment testing and as admissible evidence. There is no such prohibition on use by border enforcement, which explains the attempted rise of AVATAR. European Dynamics’ system is being trialled by the Hungarian government, and EyeDetect’s maker has confirmed that it is operating in a Middle Eastern country but will not name which one – the human rights record in either case should not instill much confidence.

It seems likely these systems will be used more widely; appropriate care must be taken.

Images Taken from: Blade Runner (1982), and https://www.eyetechds.com/avatar-lie-detector.html

The Sensorvault is Watching
By Anonymous | October 4, 2019

Imagine a world where the police could track down criminals soon after they commit a crime by tracking the location of their cell phones. This is a world where law enforcement and tech companies work together to bring wrongdoers to justice. Now imagine a world where innocent people are prosecuted for these crimes because their cell phones were present in the vicinity of the crimes. These people were unaware that their cell phones were constantly tracking their whereabouts and storing that data indefinitely. Believe it or not, we live in both of these worlds.

People who have an Android phone or a mobile device with Google Maps installed are most likely sharing their location with Google at all times. This data collection is not easy to opt out of, and many people do not know that they are sharing their location at all [1]. The data is stored in a Google database that employees call “Sensorvault”, and it is collected even when people are not using location apps on their phones. Surprisingly, this data is still stored when users turn off their location history and can only be deleted with extensive effort by the user [2]. The Sensorvault “includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade” [3].

This treasure trove of data has proven useful to law enforcement throughout the United States. Police and federal agents were struggling to find the culprits of a bank robbery in Milwaukee that occurred in October 2018. They served Google with a search warrant for information about the locations of the robbers’ phones, also referred to as a reverse location search. This is not the only time this kind of search warrant has been served; on another occasion it was used to identify members of a rally that turned into a riot in Manhattan [4].

But how are only the devices of criminals being tracked? They are not, in fact, the only devices being tracked and reported, and this has led to concern from civil liberty groups [3]. When the police or FBI serve a reverse location search warrant, they receive a list of phone owners who were in the vicinity of the crime. In the case of the previously mentioned bank robbery, law enforcement requested all devices that were within 100 feet of the bank during a half-hour block of time surrounding the robbery [4]. It is easy to see how this could result in innocent people being linked to crimes they did not commit. And this indeed happened to Jorge Molina, who was detained by police for a week because his phone was in the vicinity of a murder and his car matched the make and model of the vehicle involved. When new evidence made clear that Mr. Molina was not involved, he was released [5]. This was time spent building a case against an innocent individual that could have been spent investigating other suspects further.
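
Conceptually, a reverse location search is just a filter over stored location pings by place and time. The sketch below illustrates the idea with the 100-foot, half-hour parameters described above; the data structures, coordinates and timestamps are hypothetical, since Google's actual Sensorvault interface is not public:

```python
# Conceptual sketch (not Google's actual Sensorvault query): a reverse
# location search narrows a huge location history to devices near a crime.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in feet (haversine formula)."""
    earth_radius_ft = 20_902_231
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_ft * asin(sqrt(a))

def devices_in_geofence(pings, center, radius_ft, start, end):
    """Return device IDs with at least one ping inside the fence and time window."""
    return {
        p.device_id
        for p in pings
        if start <= p.timestamp <= end
        and distance_feet(p.lat, p.lon, center[0], center[1]) <= radius_ft
    }

# Illustrative query: every device within 100 feet of the bank during a
# half-hour window. Coordinates and timestamps below are invented.
bank = (43.0389, -87.9065)
window = (datetime(2018, 10, 22, 9, 0), datetime(2018, 10, 22, 9, 30))
pings = [
    LocationPing("robber-phone", 43.03892, -87.90655, datetime(2018, 10, 22, 9, 10)),
    LocationPing("bystander-phone", 43.03889, -87.90648, datetime(2018, 10, 22, 9, 12)),
    LocationPing("far-away-phone", 43.10000, -87.95000, datetime(2018, 10, 22, 9, 15)),
]
print(devices_in_geofence(pings, bank, 100, *window))
# Both the robber's and the bystander's phones are returned; the filter
# knows nothing about guilt.
```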

The facts explored here are cause for concern: the locations of most mobile devices are being tracked without the owner’s knowledge, and sometimes even when owners take measures to stop the recording, like turning off their location history. This data is being used to tie people to crimes that they might not have committed simply because they were in the wrong place at the wrong time. Steps must be taken to address both the lack of public awareness and the legal scope and use cases of this data.

Bibliography
[1] Nakashima, Ryan. (2018, August 13) AP Exclusive: Google tracks your movements, like it or not. https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb
[2] The Associated Press. (2018, August 13) How to find and delete where Google knows you’ve been. https://www.apnews.com/b031ee35d4534f548e43b7575f4ab494
[3] Lynch, Jennifer. (2019, April 18) Google’s Sensorvault Can Tell Police Where You’ve Been. https://www.eff.org/deeplinks/2019/04/googles-sensorvault-can-tell-police-where-youve-been
[4] Brandom, Russell. (2019, August 28) Feds ordered Google location dragnet to solve Wisconsin bank robbery. https://www.theverge.com/2019/8/28/20836855/reverse-location-search-warrant-dragnet-bank-robbery-fbi
[5] Valentino-DeVries, Jennifer. (2019, April 13) Tracking Phones, Google Is a Dragnet for the Police. https://www.nytimes.com/interactive/2019/04/13/us/google-location-tracking-police.html

Are regulations like GDPR the right solution to address online privacy concerns?
By Anonymous | July 19, 2019

With the internet turning the entire world into a potentially addressable market, anyone can build a niche business as long as they can find their customers. Personalized ads solve this problem by enabling businesses of all sizes to reach their customers from anywhere in the world. Ad-supported services such as Facebook and Google are extremely popular with users because of their business model: they provide excellent service free of cost. Google Search, for example, enables anyone to find answers to anything. This immense value proposition of data-driven free services, loved by users and valued by businesses, has revolutionized the economy. But for this business model to work, users need to share their data so that the internet companies can continuously improve their products and serve personalized ads.

However, individual users have little knowledge or control of the personal data that these internet companies are using and sharing. A conflict has thus emerged between the necessity of data collection and sharing for companies on the one hand and consumer autonomy and privacy for individuals on the other.

What is GDPR?

The first major regulatory framework to address this dilemma is arguably GDPR, or “The General Data Protection Regulation”, which came into effect on May 25, 2018. GDPR is a European Union (EU) regulation on data protection and privacy for all citizens of the EU. Due to the open nature of the web, GDPR rules apply to any business that markets or sells its products to EU consumers, or whenever a business collects or tracks the personal data of an individual who is physically located in the EU.

GDPR is designed to make it easier for consumers to protect their privacy and enable them to demand greater transparency and accountability from those who use their data. It mandates that businesses that collect and use personal data must put in place appropriate technology and processes to protect personal data of their consumers and sets substantial fines for failing to comply.

GDPR also introduced the concept of “privacy by design”, which requires businesses to design their applications so that they collect only the least amount of personal data necessary for their purpose and receive a person’s express and specific consent before collecting that limited personal data.

Impact of GDPR

Consumer attitude on Privacy

Hubspot partnered with the University of Virginia to explore how consumer attitudes have changed post-GDPR. A group of more than 1,000 subjects was surveyed across the EU and the US, and the results show that consumer concern about privacy has actually decreased in the year since GDPR went into effect.

Fewer consumers are concerned with how companies are collecting their personal data.

Fewer expect organizations to be more transparent with their policies on data collection, use and sharing with third parties.

Competition

An analysis of the impact of GDPR on the ad-tech industry suggests that regulation can reinforce dominant players at the expense of smaller entities by further concentrating power — because big companies have greater resources to tackle compliance.

Is there a better way to address privacy concerns?

Though it is still early days, studies suggest that GDPR is likely not working the way it was expected to. Consumers have either become more indifferent towards their online privacy or their confidence in the protection mechanisms has declined in the first year since GDPR went into effect. There is also growing evidence that GDPR is likely ending up harming competition and helping internet giants further increase their market share at the expense of smaller players.

Instead of a one-size-fits-all regulatory approach like GDPR, it might be worthwhile to define context-specific substantive norms for privacy, as suggested by Nissenbaum, and use them to constrain what information websites can collect, with whom they can share it, and under what conditions it can be shared. Secondly, conditions could be created for users to see not simply what they have shared — which GDPR requires — but also the profile information that gets built after merging their inputs with other data sources. Today, many users don’t care about privacy, particularly if the service is useful or saves them money. At the same time, most users likely have no idea what information these internet companies may have uncovered about them. They might change their minds if they actually saw their profile data, not simply the raw inputs. And finally, once empowered, users should be trusted to take care of their own privacy.

References

[1] Nissenbaum-contextual-approach-privacy-online
[2] General-data-protection-regulation-consumer-attitudes
[3] Gdpr-effect-on-competition

Smart Home Devices help Amazon and Google further invade your privacy
By Martin Jung | July 19, 2019

The number of smart speakers with voice assistants, such as Amazon’s Echo and Google’s Home, had grown to 66.4 million by the beginning of 2019. The speakers keep track of the questions people ask and store recordings of them.

On top of that, the speakers integrate tightly with many smart home devices, allowing owners to communicate with those devices through the speakers and give instructions by voice command, remote control or touchscreen. A short list of examples of such smart devices: light bulbs made by Philips; ceiling fans made by Hunter Fan; thermostats made by Ecobee and Nest; smart doorbells made by Ring; door locks made by August; and self-steering vacuums from iRobot. Recently, Amazon Alexa partnered with the UK National Health Service to allow owners to ask health questions, which Alexa answers based on searches of the NHS site. This opens up a new level of privacy concern, since Amazon now holds private health information about the owners.

Amazon’s and Google’s privacy policies state that the information may be shared within the companies or with their subsidiaries. This means that the data that can be collected includes not only voice recordings but also maps of homes (from vacuums), owners’ everyday schedules (from light bulbs and thermostats) and health conditions. Moreover, when paired with other information about you, such as your home address and calendar, it helps fill out a complete record of your behavior and circumstances. In sum, this situation clearly expands surveillance and secondary use in terms of Solove’s privacy taxonomy.

Owners of Amazon Echo and Google Home can delete their recorded voice or opt out of some data collection. However, by default the information is stored, and the process of deleting the records is somewhat complicated. As for government policy to mitigate this problem, under Europe’s expansive General Data Protection Regulation (GDPR) people have the right to ask companies to stop using or to delete their private information. US regulation generally does not give people that much control. The California Consumer Privacy Act, set to take effect in 2020, will give owners the right to know what data is being collected and shared and allow them to deny companies the right to sell it.

Regardless of how strict the regulation is, data-hungry companies such as Amazon and Google will try to work around it and will continue to find ways to collect owners’ data to drive their business by expanding their device ecosystems. It is becoming more important for users to feel the urgency and configure their devices and speakers to collect as little of their private data as possible, even as they enjoy their smart home devices.

Big Medical Data, Big Ethics
By Yue Hu | July 19, 2019

The collection and usage of personal medical and health data has come under increased scrutiny recently with advances in technology and medicine. Medical scientists increasingly want patients to donate massive amounts of sensitive personal information for studies, such as research into the complex sets of factors causing SCA and determining survival. However, privacy protection and ethical medical research become a big concern due to the difficulty of controlling data flows within the hidden network of patient data distribution run by healthcare organizations and their third-party vendors. How to balance protecting patients’ privacy with the benefits that big data brings to medical research has become a popular topic and drawn increased public attention.

Hidden Network of Medical Data Sharing
Receiving a data-breach notice letter, or a spam call demanding payment of a medical bill, makes people feel scared and helpless. Recently, I received a notice letter about a data privacy incident involving Retrieval-Masters Creditors Bureau Inc., doing business as American Medical Collection Agency. The security compromise of the company’s payments page, reported by an independent third-party compliance firm, affected millions of Quest Diagnostics Inc. customers. Based on an external forensics review, an unauthorized user had access to the company’s systems between August 1, 2018 and March 30, 2019, so the hackers had eight full months to gather personal information including first and last names, Social Security numbers, bank account information, the name of the lab or medical service provider, dates of medical service, referring doctors and certain other medical information. Upwards of 20 million customers of Quest Diagnostics and Laboratory Corporation of America had their data stolen.

This shocking news brought me to greater consideration of, and concern about, my own and my family’s personal medical data. Two questions were top of mind when I received this letter:

  • Was I asked for consent to share data with this company? Obviously, the answer is ***NO***! I never gave anyone the right to share my data with this company. After researching online, I realized that a blood sample collected by WomanCare Center last August was sent to a medical lab, and this company collects receivables for medical labs.
  • How do I prevent this kind of data privacy incident in the future? It is definitely hard! When my doctor ordered a blood test, I had no autonomy to choose the lab company. What is worse, my information was collected by a third-party agency without my knowledge or consent. I had totally lost control of my personal medical data flow.

This story points to the huge hidden network of medical data distribution among healthcare organizations and their third-party vendors. When patients receive care at a healthcare provider (HCP) or organization (HCO), most of the time they don’t have the freedom to choose the medical lab for their tests. Moreover, they are not asked for consent before their tests and identities are sent to these third-party labs and vendors. Unfortunately, patients cannot see this unseen layer of the network until data breaches happen at these third-party companies.

Cyber Criminals in Health Care
In the past five years, we’ve seen healthcare data breaches grow in both size and frequency, with the largest breaches impacting as many as 80 million people. Medical data and identity are uniquely comprehensive, valuable for quality clinical care and health-related research, and therefore worth more than credit card information or location data. Moreover, healthcare organizations today rely on cloud, network, application and IoT technologies, which makes data security harder. According to [a recent report](https://www.hcinnovationgroup.com/cybersecurity/news/13027679/report-healthcare-industry-workers-lack-basic-cybersecurity-awareness), SecurityScorecard ranks healthcare 9th out of all industries in terms of overall security rating. With frequent medical data breaches, the public has lost trust in a healthcare industry that still uses outdated technology and lacks basic security awareness.

Cyberattacks also lead to financial and operational losses, on top of reputational damage and the cost of recovery efforts. They can likewise bring irretrievable physical, emotional and dignitary harms. Once data is inappropriately disclosed or stolen, patients have no way to control the flow of their sensitive private medical data. Based on [a February 2017 survey from Accenture](https://newsroom.accenture.com/news/one-in-four-us-consumers-have-had-their-healthcare-data-breached-accenture-survey-reveals.htm), 50% of breach victims suffered medical identity theft, with an average out-of-pocket cost of $2,500. Unfortunately, many breaches are detected through a fraud alert or an error on a credit card statement or explanation of benefits, rather than through a notification from the company.

Code of Medical Ethics
The need to uphold trust in the patient-physician relationship, to prevent harm to patients, and to respect patients’ privacy and autonomy creates responsibilities for individual physicians, medical practices, and health care institutions when patient information is shared and distributed to third-party vendors. Because of the hidden and complicated network of medical data distribution between medical institutions and third-party vendors, healthcare organizations and individual physicians have an obligation to better secure patients’ data, protecting vulnerable populations and respecting medical privacy.

  • Risk mitigation before a breach: All healthcare organizations should take a proactive approach to security: training staff in cyber awareness, limiting security access, providing early alerts about trending cyberattacks, and vetting partners and third-party vendors to reduce the risk of a data breach. Total security is impossible, so every healthcare organization and medical institute needs to evaluate its acceptable level of breach risk and determine its cybersecurity strategy with professional cybersecurity providers.
  • Data sharing with third parties: Reviewing partners’ and third-party vendors’ security levels and standards before sharing medical data is very important for medical institutions. Collaborating with third-party companies that lack data security awareness imposes a high risk of cyberattack even if the institution itself maintains a high security level. In addition, to enhance patient privacy, institutions should apply technological solutions to anonymize, de-identify or perturb the data before sharing it (a minimal sketch of this idea follows this list).
  • Actions after a data breach: It is important to ensure that patients are promptly informed about the breach, what information was exposed, and the potential harms. The healthcare organization should also provide information that enables patients to mitigate potential adverse consequences of the inappropriate disclosure of their personal medical information.
  • What patients can do after a data breach: Data breach victims should remain vigilant for fraud and identity theft by reviewing and monitoring their account statements and credit reports closely. If patients believe they are the victim of identity theft, or have evidence that their personal information is being misused, they should immediately contact the FTC, which can provide information about avoiding identity theft.
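
To make the anonymization point above concrete, here is a minimal sketch of pseudonymizing and perturbing a record before it leaves the institution; the field names, salt and noise scale are hypothetical, and real de-identification (for example, HIPAA Safe Harbor or expert determination) is far more involved:

```python
# Minimal, illustrative sketch of de-identification and perturbation before
# sharing records with a third party. Field names and the salt are hypothetical.
import hashlib
import random

SALT = "replace-with-a-secret-salt"   # hypothetical secret kept by the institution

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

def perturb(value: float, scale: float = 1.0) -> float:
    """Add small random noise to a numeric field to hinder re-identification."""
    return round(value + random.gauss(0, scale), 1)

record = {"patient_id": "MRN-0012345", "name": "Jane Doe",
          "zip": "60614", "glucose_mg_dl": 92.0}

shared = {
    "patient_ref": pseudonymize(record["patient_id"]),  # name dropped entirely
    "zip3": record["zip"][:3],                          # generalize geography
    "glucose_mg_dl": perturb(record["glucose_mg_dl"]),  # noisy measurement
}
print(shared)
```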

Works Cited:

  • Breach of Security in Electronic Medical Records: https://www.ama-assn.org/delivering-care/ethics/breach-security-electronic-medical-records
  • One in Four US Consumers Have Had Their Healthcare Data Breached, Accenture Survey Reveals: https://newsroom.accenture.com/news/one-in-four-us-consumers-have-had-their-healthcare-data-breached-accenture-survey-reveals.htm
  • Top 10 Biggest Healthcare Data Breaches of All Time: https://digitalguardian.com/blog/top-10-biggest-healthcare-data-breaches-all-time
  • How to Prevent a Healthcare Data Breach in 2018: https://healthitsecurity.com/news/how-to-prevent-a-healthcare-data-breach-in-2018
  • The tricky ethics—and big risks—of medical ‘data donation’: https://www.advisory.com/daily-briefing/2018/07/18/personal-data
  • How to be a cybersecurity sentinel: https://www.advisory.com/research/health-care-advisory-board/multimedia/infographics/2018/how-to-be-a-cybersecurity-sentinel
  • Big data, big ethics: how to handle research data from medical emergency settings?: https://blogs.biomedcentral.com/on-medicine/2018/09/13/big-data-big-ethics-handle-research-data-medical-emergency-settings/
  • Debt Collector Goes Bankrupt After Health Care Data Hack: https://www.bloomberg.com/news/articles/2019-06-17/american-medical-collection-agency-parent-files-for-bankruptcy

Cyberbullying is on the Rise: What Can We Do to Stop It?
By Hilary Yamtich | July 19, 2019

Seventh grader Gabriella* (name changed) comes to school and reports to me, her teacher, that last night another student at the school sent mean messages about her on Snapchat to other students, and now her friends don’t want to sit with her at lunch. I reported this to administrators, who were unable to identify the sender of the messages and did not follow up further.

Cyberbullying is on the rise, and female students are three times more likely than male students to be bullied online. Data from the survey “Student Reports of Bullying: Results from the 2017 School Crime Supplement to the National Crime Victimization Survey” says that 21% of female students and 7% of male students between ages 12 and 18 experienced some form of cyberbullying in 2017. Students in grades 9 through 11 are most likely to experience cyberbullying. This data shows an overall 3.5% increase from the last year that this data was collected (2014-15).

Of course this problem is getting worse; the increase is not just due to an increase in reporting—much younger students are gaining access to social media through smart phone apps. Students are more savvy about how to use social media. And students know that it is easy to maintain some degree of anonymity online by creating fake accounts to bully their peers. As in Gabriella’s case, administrators rarely have the time or tools to thoroughly address these incidents.

There are three main tools being used to address the problem.

First, lobbying groups such as Common Sense Education are pushing for legislation that criminalizes electronic bullying. In some states, cyberbullying can be prosecuted and even minor offenders can serve time for such offenses. Cyberbullying can also be a hate crime if certain language is involved. However, the vast majority of cyberbullying cases do not reach the level of actual legal prosecution.

Second, tech companies are also developing tools to address the issue. According to one study conducted by the anti-bullying organization Ditch the Label, the largest number of cyberbullying instances happen via Instagram. Instagram is using machine learning algorithms to identify potentially abusive comments, and in 2017 it unveiled a tool that allows users to block certain words or even entire groups of users, as in the simplified sketch below.
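
As a rough illustration of the word- and account-blocking feature just described (Instagram's actual machine-learning system is far more sophisticated, and the word list and account names below are invented), a comment filter can be as simple as:

```python
# Simplified sketch of a user-facing comment filter: hide comments containing
# blocked words or coming from blocked accounts. Word list and accounts are
# made up for illustration; real systems use learned classifiers as well.

BLOCKED_WORDS = {"loser", "ugly", "nobody likes you"}
BLOCKED_ACCOUNTS = {"mean_account_123"}

def should_hide(comment: str, author: str) -> bool:
    text = comment.lower()
    return author in BLOCKED_ACCOUNTS or any(w in text for w in BLOCKED_WORDS)

comments = [
    ("nice photo!", "friend_42"),
    ("you're such a loser", "random_user"),
    ("whatever", "mean_account_123"),
]
visible = [(c, a) for c, a in comments if not should_hide(c, a)]
print(visible)   # only the first comment remains visible
```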

Finally, many states have policies about how schools are meant to respond to cyberbullying incidents. For instance, in California the government provides an online training program for administrators to prevent and respond to online bullying.

Ultimately, the effectiveness of all these tools comes down to engaging with the young people who are involved. We can only address cyberbullying if we know that it is happening—this can come from young people understanding what is happening and feeling comfortable enough to tell an adult, or from machine learning tools that tech companies use to flag incidents. Obviously the technical solutions are limited by the extent to which tech companies are incentivized to catch acts of cyberbullying and the effectiveness of the tools themselves. And the likelihood that students will bring all incidents to adult attention is also not always high. Even when adults are made aware of these incidents, school officials might not have the technical knowledge or time to fully address the issues. Students are essentially on their own to deal with cyberbullying—without thorough education about their rights to be free of bullying online, many students simply accept the abuse and silently suffer.

Gabriella spoke up about what was happening to her, but little could be done. The bullying did not rise to the level of a hate crime due to the lack of specific language, and administrators were not able to identify with certainty which students were involved. Gabriella became more withdrawn throughout the year and by the end of the year was in counseling for depression.

Educators especially in middle and high school need to implement the policy recommendations that already exist to ensure that these incidents are addressed effectively. Teachers need to be supported in taking time to educate students about these issues. And tech companies need to work more directly with parents and young people to ensure that the protections that they design are actually used effectively.