Big Medical Data, Big Ethics

By Yue Hu | July 19, 2019

The collection and use of personal medical and health data has come under increased scrutiny as technology and medicine develop. Medical scientists increasingly want patients to donate massive amounts of sensitive personal information for studies, such as research into the complex sets of factors causing SCA and determining survival. However, privacy protection and ethical medical research have become major concerns, because data flows are hard to control within the hidden network of patient data distribution among healthcare organizations and their third-party vendors. How to balance protecting patients’ privacy against the benefits that big data brings to medical research has become a popular topic that draws increasing public attention.

Hidden Network of Medical Data Sharing
Receiving a data breach notification letter, or a spam call demanding payment of a medical bill, leaves people scared and helpless. Recently, I received a notice letter about a data privacy incident involving Retrieval-Masters Creditors Bureau Inc., doing business as American Medical Collection Agency. A compromise of the company’s payments page, reported by an independent third-party compliance firm, affected millions of Quest Diagnostics Inc. customers. An external forensics review found that an unauthorized user had access to the company’s systems between August 1, 2018 and March 30, 2019, giving the hackers eight full months to gather personal information: first and last name, SSN, bank account information, name of lab or medical service provider, date of medical service, referring doctor, and certain other medical information. Upwards of 20 million customers of Quest Diagnostics and Laboratory Corporation of America had their data stolen.

This shocking news left me with deeper consideration of, and concern about, my own and my family’s personal medical data. Two questions were top of mind when I received this letter:

  • Was I asked for consent to share data with this company? Obviously, the answer is ***NO***! I never granted the right to share any data with this company. After researching online, I realized that a blood sample collected by WomanCare Center last August was sent to a medical lab, and this company collects receivables for medical labs.
  • How do I prevent this kind of data privacy incident in the future? Frankly, it is hard! When my doctor ordered a blood test, I had no autonomy to choose the lab company. Worse, my information was collected by a third-party agency without my knowledge or consent. I had totally lost control of my personal medical data flow.

This story reveals the huge hidden network of medical data distribution among healthcare organizations and their third-party vendors. When patients receive care at a healthcare provider (HCP) or organization (HCO), most of the time they do not have the freedom to choose the medical lab for their tests. Moreover, they are not asked for consent before their test results and identity are sent to these third-party labs and vendors. Unfortunately, patients cannot see this hidden layer of the network until data breaches happen at these third-party companies.

Cyber Criminals in Health Care
In the past five years, we’ve seen healthcare data breaches grow in both size and frequency, with the largest breaches impacting as many as 80 million people. Medical data and identity are uniquely comprehensive and valuable for quality clinical care and health-related research, making them more valuable than credit card information or location data. Moreover, healthcare organizations today rely on cloud, network, application, and IoT infrastructure, which makes data security harder. According to a recent report, SecurityScorecard ranks healthcare 9th out of all industries in terms of overall security rating. With frequent medical data breaches, the public has lost trust in a healthcare industry that still uses outdated technology and lacks basic security awareness.

Beyond reputation loss and the cost of recovery efforts, cybercrime also leads to financial and operational losses. Worse, these crimes can bring irretrievable physical, emotional, and dignitary harms. Once data is inappropriately disclosed or stolen, patients have no way to control the flow of their sensitive private medical data. According to a February 2017 survey from Accenture, 50% of breach victims suffered medical identity theft, with an average out-of-pocket cost of $2,500. Unfortunately, many breaches are discovered through a fraud alert or an error on a credit card statement or explanation of benefits, rather than through a breach notification from the company.

Code of Medical Ethics
Upholding trust in the patient-physician relationship, preventing harm to patients, and respecting patients’ privacy and autonomy create responsibilities for individual physicians, medical practices, and health care institutions whenever patient information is shared with third-party vendors. Given the hidden and complicated network of medical data distribution between medical institutions and third-party vendors, healthcare organizations and individual physicians have an obligation to better secure patients’ data, both to protect vulnerable populations and to respect medical privacy.

  • Risk mitigation before a breach: Every healthcare organization should take a proactive approach to security: training staff in cyber awareness, limiting security access, providing early alerts about trending cyberattacks, and vetting partners and third-party vendors to reduce the risk of a data breach. Total security is impossible, so every healthcare organization and medical institute needs to evaluate its acceptable level of data breach risk and set a cybersecurity strategy with professional cybersecurity providers.
  • Data sharing with third parties: Reviewing partners’ and third-party vendors’ security levels and standards before sharing medical data is essential for medical institutions. Collaborating with third-party companies that lack data security awareness imposes a high risk of cyberattack even if the institution itself maintains a high security level. In addition, to enhance patient privacy, institutions should apply technological solutions to anonymize, de-identify, or perturb the data.
  • Actions after a data breach: Patients must be promptly informed about the breach, what information was exposed, and the potential harms. The healthcare organization should also give patients the information they need to mitigate the adverse consequences of the inappropriate disclosure of their personal medical information.
  • What patients can do after a data breach: Breach victims should remain vigilant for fraud and identity theft by closely reviewing and monitoring their account statements and credit reports. Patients who believe they are victims of identity theft, or who have evidence that their personal information has been misused, should immediately contact the FTC, which can provide information about avoiding identity theft.
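As an illustration of the de-identification idea above, here is a minimal Python sketch. The field names, record format, and salting scheme are hypothetical, not any institution’s actual pipeline; real de-identification follows formal standards such as HIPAA’s Safe Harbor rules.

```python
import hashlib

# Hypothetical direct identifiers to strip; illustrative field names only.
DIRECT_IDENTIFIERS = {"name", "ssn", "bank_account"}

def deidentify(record, salt):
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    cleaned["patient_id"] = token[:16]  # pseudonymous token, unlinkable without the salt
    return cleaned

record = {"patient_id": "P-1001", "name": "Jane Doe", "ssn": "123-45-6789",
          "bank_account": "0042", "test": "lipid panel", "result": "normal"}
print(deidentify(record, salt="institution-secret"))
```

Keeping the salt inside the institution means a third-party vendor receiving such records could still link repeat visits by the same patient, without being able to recover the identifiers themselves.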

Works Cited:

  • Breach of Security in Electronic Medical Records:
  • One in Four US Consumers Have Had Their Healthcare Data Breached, Accenture Survey Reveals:
  • Top 10 Biggest Healthcare Data Breaches of All Time:
  • How to Prevent a Healthcare Data Breach in 2018:
  • The tricky ethics—and big risks—of medical ‘data donation’:
  • How to be a cybersecurity sentinel:
  • Big data, big ethics: how to handle research data from medical emergency settings?:
  • Debt Collector Goes Bankrupt After Health Care Data Hack:

Cyberbullying is on the Rise: What Can We Do to Stop It?

By Hilary Yamtich | July 19, 2019

Seventh grader Gabriella* (name changed) comes to school and reports to me, her teacher, that last night another student at the school sent mean messages about her on Snapchat to other students, and now her friends don’t want to sit with her at lunch. I reported this to administrators, who were unable to identify the sender of the messages and did not follow up further.

Cyberbullying is on the rise, and female students are three times more likely than male students to be bullied online. Data from the survey “Student Reports of Bullying: Results from the 2017 School Crime Supplement to the National Crime Victimization Survey” show that 21% of female students and 7% of male students between ages 12 and 18 experienced some form of cyberbullying in 2017. Students in grades 9 through 11 are most likely to experience cyberbullying. Overall, this represents a 3.5% increase from the last year this data was collected (2014–15).

Of course this problem is getting worse; the increase is not just due to an increase in reporting—much younger students are gaining access to social media through smart phone apps. Students are more savvy about how to use social media. And students know that it is easy to maintain some degree of anonymity online by creating fake accounts to bully their peers. As in Gabriella’s case, administrators rarely have the time or tools to thoroughly address these incidents.

There are three main tools being used to address the problem.

First, lobbying groups such as Common Sense Education are pushing for legislation that criminalizes electronic bullying. In some states, cyberbullying can be prosecuted and even minor offenders can serve time for such offenses. Cyberbullying can also be a hate crime if certain language is involved. However, the vast majority of cyberbullying cases do not reach the level of actual legal prosecution.

Second, tech companies are also developing tools to address the issue. According to one study conducted by an anti-bullying organization Ditch the Label, the largest number of cyberbullying instances happen via Instagram. Instagram is using machine learning algorithms to identify potentially abusive comments and in 2017 unveiled a tool that allows users to block certain words or even entire groups of users.
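Instagram’s moderation systems are proprietary, but the user-facing blocked-word feature can be approximated with a simple filter. A minimal Python sketch, with a made-up word list:

```python
import re

def build_filter(blocked_words):
    """Return a predicate that is True when a comment contains no blocked word."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, blocked_words)) + r")\b",
                         re.IGNORECASE)
    return lambda comment: pattern.search(comment) is None

# Hypothetical per-user blocklist.
visible = build_filter(["loser", "ugly"])
print(visible("nice photo!"))   # comment is shown
print(visible("what a Loser"))  # comment is hidden
```

A keyword filter like this is easy to evade with misspellings, which is why platforms layer machine learning classifiers on top of user-defined word lists.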

Finally, many states have policies about how schools are meant to respond to cyberbullying incidents. For instance, in California the government provides an online training program for administrators to prevent and respond to online bullying.

Ultimately, the effectiveness of all these tools comes down to engaging with the young people who are involved. We can only address cyberbullying if we know that it is happening—this can come from young people understanding what is happening and feeling comfortable enough to tell an adult, or from machine learning tools that tech companies use to flag incidents. Obviously the technical solutions are limited by the extent to which tech companies are incentivized to catch acts of cyberbullying and the effectiveness of the tools themselves. And the likelihood that students will bring all incidents to adult attention is also not always high. Even when adults are made aware of these incidents, school officials might not have the technical knowledge or time to fully address the issues. Students are essentially on their own to deal with cyberbullying—without thorough education about their rights to be free of bullying online, many students simply accept the abuse and silently suffer.

Gabriella spoke up about what was happening to her, but little could be done. The bullying did not rise to the level of a hate crime, due to the lack of specific language, and administrators were not able to identify with certainty which students were involved. Gabriella became more withdrawn throughout the year, and by the end of the year she was in counseling for depression.

Educators especially in middle and high school need to implement the policy recommendations that already exist to ensure that these incidents are addressed effectively. Teachers need to be supported in taking time to educate students about these issues. And tech companies need to work more directly with parents and young people to ensure that the protections that they design are actually used effectively.

Are Internet and Social Medias Making the Society More Polarized?

By Shirley Deng | July 19, 2019

The Problem

Misinformation and fake news are problems we try very hard to combat today, as they tend to fuel conspiracy theories and plots, end in hatred, and cause society to become more polarized. These problems only seem to be growing, with larger impacts and more serious consequences, due to easy access to the Internet. Rising scandals and strengthening regulations have also helped bring these issues to the public’s attention.

The Factors

Yet society, individuals, and government institutions are also placing the blame on the Internet and, more specifically, on social media. In October 2018, Peter Bergen and David Sterman wrote for New America that the main terrorist problem in the United States today is one of individuals radicalized by a diverse array of ideologies absorbed from the Internet [1]. And as Stanford professor Francis Fukuyama points out, polarization may be caused and fostered by many factors. Though Americans are sorting themselves out geographically, living in increasingly politically homogeneous neighborhoods, social media and the proliferation of media channels via the Internet and TV have played a role by allowing people to communicate exclusively with people like themselves [2].

Unquestionably, the development of the Internet has enabled connection between people regardless of geographic barriers, fostered all kinds of conversations in no time, and given people access to content matching their own preferences, all thanks to social media and recommendation algorithms. However, psychology studies also point to other factors explaining why conspiracy theories spread fast and are adopted at scale. Specifically, people who have a low level of analytic thinking, who tend to overestimate the causal connection between co-occurring events, or who feel anxious and powerless are more likely to turn to conspiracy theories [3].

A group of researchers from the Laboratory of Computational Social Science and other institutions ran an experiment on Facebook to compare the spreading patterns of scientific topics and conspiracy rumors [4]. The defining difference between a science topic and a conspiracy rumor is whether it has been validated through a process. Their experiment and model produced very interesting findings: for science topics and conspiracy rumors alike, when people first receive such information, they tend to share it with their close friends first. In other words, most of the time information is picked up by a friend with the same profile (polarization), belonging to the same echo chamber. Users tend to aggregate in communities of interest, causing reinforcement and fostering confirmation bias, segregation, and polarization. Interestingly, because rumors run contrary to the truth, they are more easily picked up, and thus show a positive relation between lifetime and size, whereas a science topic’s longer lifetime does not correspond to a higher level of interest.

Yes, the Internet and social media sites may have fueled conspiracy rumors, not because they are evil in nature, but because people leverage them to foster their own bubbled communities, sharing conspiracy-related information with other conspiracy believers rather than non-believers [5]. In this way, beliefs and biased misinformation are reinforced inside each of these communities, resulting in more strongly polarized beliefs.

The Potential Solutions

Before blaming the Internet and social media, it is more meaningful and insightful for us to look at human factors. Many psychology studies hint that people are more interested in exaggerated, distorted information that fits their existing theories. Those who share it are either firm believers; people who have doubts and are unsatisfied with their current situation; or people with a lower level of analytical thinking ability.

Social and Education

While social media may help build bubbles, bursting the bubble seems to be an obvious way to keep people who share the same profile from going to extremes. Education might include helping people adopt the idea that people are different, and that life is not a debate about being right or wrong. Oftentimes, extreme polarized information is a mix of fact and rumor, which makes the situation more complex. It helps to expose people to different opinions and the corresponding facts and evidence; then we can encourage them to find common ground.


The psychology studies also suggest that when people have doubts, we should give them facts. The rising number of media outlets focused on fact-checking and political accountability reporting has definitely played an important role in addressing the issue.

Source: Mary Meeker’s Internet Trend Report [6]

The increasing controversy over the credibility of journalism has made fact-checking more important. As Alan Greenblatt puts it, “This is an incredibly important time to be a journalist. Never has the watchdog role been more important.” [7] During the 2016 presidential campaign, at least 6 million people flocked to a transcript of the debate that was fact-checked by 20 NPR journalists in real time [8]. Globally, partnerships among social media platforms, Internet companies, and science institutions also help to build a safer and healthier online environment.

Technology and Product

While Internet companies and social media platforms should not take all the blame, they can take on some of the responsibility to act proactively and maintain safer and healthier online communities. For example, algorithm-driven solutions have been proposed: Google is developing a trustworthiness score to rank query results, estimating the trustworthiness of a web source to build knowledge-based trust [9]. WeChat, the social media giant in China, has built an in-app official fact-checking channel that labels rumors and stops them from spreading. WhatsApp, the messaging app that hosts a quarter of the global population, labels all forwarded messages and reminds users to think twice before forwarding to others.


Last but not least, legal action should be an important resort for fighting the bad actors in our communities, online or offline. Many conspiracy theories that aim to drive people toward polarized positions may be initiated by people with ulterior motives. In this regard, besides guidelines and policies, we should hold bad actors accountable for their actions. For example, in 2013, in order to combat rumors, the Chinese government brought in tough measures to stop the spread of irresponsible rumors, threatening three years in jail if untrue posts online are widely reposted [10]. Although it initially drew many angry responses from Internet users in China, it did help contain the spread of rumors and minimize bad impacts.

[1] “The Real Terrorist Threat in America”,
[2] “The Great Recession has influenced populist movements today, say Stanford scholars”,
[3] “The Psychology of Conspiracy Theories” aps Association for Psychological Science,
[4] “The spreading of misinformation online”, PNAS January 19, 2016 113 (3) 554-559; first published January 4, 2016
[5] “The internet fuels conspiracy theories – but not in the way you might imagine”,
[6] “Internet Trend Report”, 2019, Mary Meeker
[7] “The Future of Fact-Checking: Moving ahead in political accountability journalism”,
[8] “NPR’s real-time fact-checking drew millions of readers”,
[9] “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources”,
[10] “China threatens tough punishment for online rumor spreading”,


By Mads Bulkow-Macy | July 19, 2019

The unemployment rate is often used as shorthand for the state of the entire economy. When the Federal Reserve signaled an intent to lower interest rates last week, many news stories supplied context by pointing to recent jobs numbers, headlined by low unemployment. The 3.7% June unemployment rate is near the 50-year low, suggesting that the economy is very healthy indeed. Why, then, would the Fed try to give the economy a boost?

Unemployment is near a 50-year low.

Seasonally adjusted unemployment rate fluctuation since 1969. (Source: Bureau of Labor Statistics)

Jerome Powell’s specific calculus will continue to be the source of much speculation, but one issue that economic headlines would do well to consider is what a low unemployment rate really means. The categories of “employed” and “unemployed”, while at first glance complementary, actually leave out a significant portion of the population. To understand why, it is useful to examine the process by which the Bureau of Labor Statistics develops this statistic.

Since a monthly census of the entire population is infeasible, the statistic is based on a sample of about 60,000 households, weighted to be demographically representative across the categories of “age, sex, race, Hispanic ethnicity, and state of residence.” The unemployed/employed determination is made via an interview. Reporting employment places a person in the “employed” category. In order to be counted as “unemployed,” a person must:

    • Not currently have a job.
    • Be actively seeking work (within the last four weeks).
    • Be available to work, supposing they receive an offer.

Anyone who falls into neither the “employed” nor “unemployed” category is (in general) not counted in the workforce.
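The classification rules above can be sketched in a few lines of Python. This is a simplified model of the survey logic described here, not the BLS’s actual methodology or code:

```python
def labor_force_status(has_job, searched_past_4_weeks, available_to_work):
    """Classify a respondent per the simplified survey rules above."""
    if has_job:
        return "employed"
    if searched_past_4_weeks and available_to_work:
        return "unemployed"
    return "not in labor force"  # excluded from the rate entirely

def unemployment_rate(people):
    """Unemployment rate = unemployed / (employed + unemployed)."""
    statuses = [labor_force_status(*p) for p in people]
    employed = statuses.count("employed")
    unemployed = statuses.count("unemployed")
    return unemployed / (employed + unemployed)

# Five hypothetical respondents. Note that the discouraged worker
# (no job, not searching, but available) simply drops out of the denominator.
sample = [(True, False, True), (True, False, True), (False, True, True),
          (False, False, True), (False, True, False)]
print(unemployment_rate(sample))  # 1 unemployed out of 3 in the labor force
```

The sketch makes the article’s point concrete: moving a discouraged worker out of the “unemployed” category lowers the reported rate without anyone gaining a job.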

If we look to the unemployment numbers to find, for instance, the number of households struggling to put food on the table, we will find them in many ways inadequate. First, employment in a single job does not necessarily mean that the person in question has sufficient means to support themselves or their family; thus, it cannot be used as an accurate predictor of the strain on social safety nets. Secondly, there is a large class of would-be workers who do not actively engage in job seeking. These workers may be temporarily unable to engage in such activities, or may have been searching for long enough that they have become discouraged. This group includes those whose skills have become irrelevant in a changing workforce and who are working to learn a new set of skills before they attempt reentry. It also includes those who suspect that their attempts to seek employment will be met with discrimination or hostility; note that this would disproportionately affect groups with probable cause to be concerned about employment discrimination, such as members of the trans community and people of color. Ultimately, the category created is likely to exclude a good portion of those who would consider themselves unemployed, and fails to capture a variety of forms of personal economic distress. It also fails to capture broader economic inefficiencies, such as underemployed workers or workers who have been forced to seek retraining.

In the creation and definition of these categories, the narrower aim of the BLS seems to be to measure the availability of workers in comparison to the current workforce. Yet even here it falls short, given the potential for job seekers with irrelevant skills to be counted as available, while underemployed workers – even those actively seeking advancement – are counted as unavailable.

While as a coarse metric the unemployment rate still serves a purpose as an economic indicator, the category of “unemployment” does not represent what it purports to. It would be useful for everyone from journalists to policy makers to treat it with caution, and consider more closely the people and stories it fails to include.


“I’m not worried about my privacy online.” — A Millennial’s Perspective

By Anonymous | July 12, 2019

As I type this, my Word document highlights a squiggly red line underneath the word “Millennial’s” in my title. How quick I am to ignore the suggestion, knowing well that this title has been made ubiquitous by this generation’s views, stances, and actions: from religion to politics, marriage to the economy.

Millennials are the generation born between 1981 and 1996, currently between the ages of 23 and 38. (Source:

Their perspectives have frustrated the Boomers above them and have quickly molded the world for the Gen Z’s below them. In light of recent privacy scandals in the technology industry and the prevalence of “fake news” in the media, the millennials have not been ruffled. Do we chalk it up to apathy and ignorance? To their comfort with technology due to early exposure? To their abundant awareness and caution?

Many surveys have been conducted to understand the viewpoints of the varying generations with regards to security and privacy, and the root causes are still being understood. According to a recent study in 2015 by the Media Insight Project, only 20% of Millennials worried about privacy in general all of the time, their biggest concern being that their identity or financial information will be stolen from them.

Survey reached 1,045 adults across the US, ages 18-34. (Source:


As a part of that generation, which I would consider rather diverse, I can understand the different root causes for these perspectives.

One is that the Millennial generation was born in the digital age, when the Internet was part of the everyday person’s life, and Millennials were the first true customers and fuel of social media. They haven’t known another world, so they feel a sense of normalcy in others having access to their information.

Another reason could be that Millennials have yet to see the repercussions of any security breaches. From Cambridge Analytica to the Marriott account breaches, they have understood that these events have occurred but have not yet been personally impacted by any of them.

On the other hand, Millennials feel that they are in control of their data, that they have chosen what to share online and they have actively accepted any risk of their data being leaked as they make the decision to engage with certain products or apps. They see no true harm in their data being released, except when it comes to financial information (as noted above). This is the idea that they have “nothing to hide” — a credo of the generation which feels the need to share everything.

Our data has always been around. Since before the Internet, there has been data. We have just reached an age where we can capture, measure, and use it to enhance our world like never before. There has to come a point where you accept the world you have become a part of, and your role in it. It is the world you grew up in, the innovation that your data has lent itself to, that has made your life easier and better. You start identifying tradeoffs: “If I don’t share my location with my Uber app, I will need to figure out the exact address of where I am and make sure I don’t misspell anything so that my Uber driver can pick me up at the right spot.” You have chosen your life, the conveniences, the benefits, over the seemingly small and insignificant pieces of privacy that you are handing away. And as a millennial, I may be naive, but we have reached a point where there is no “acceptable” alternative.


A Simple (July 2019) Online Privacy Tech Stack

A Simple (July 2019) Online Privacy Tech Stack
By Eduard Gelman | July 12, 2019

As consumers become increasingly aware that their behavior is actively tracked by advertising firms and governments, and that this information is occasionally lost in high-profile, high-stakes leaks, many are beginning to modify their habits. Privacy and security concerns are likely at the forefront of the development and adoption of a slew of tools that individuals can use to make their online, and increasingly visible “offline,” behavior more private, or at least more secure. Since the toolsets and their adoption are in flux, this blog will survey the landscape as it exists in July 2019, reviewing the harms consumers are trying to avoid. It will take some liberties in picking “flagship” products to represent a technique, and in omitting less-adopted technologies for the sake of concision.

The main privacy violations that these tools help consumers minimize fit neatly into Solove’s privacy taxonomy, with threats coming as “surveillance”, “identification”, and “secondary use” harms. Each product discussed in this blog post addresses one or more of these potential harms.

Surveillance harms may come from private or public entities who are able to read content exchanged between individuals. Just as the NSA is being excoriated for its wide-reaching surveillance procedures, Facebook recently began to block private and public messages depending on content. It is relatively clear that these products are well-intended, but they may carry alarming negative consequences. Further harms arise when activity across disparate platforms and connection points can be identified as belonging to the same individual, leading directly to potential exploitation of individuals based on their history. Famously, an unaware father was alerted to his daughter’s pregnancy by a wayward advertisement. When sensitive information leaks and is used for identity theft, this quickly escalates into a security problem with serious financial and legal ramifications.

Online, there are countermeasures that individuals can take to obfuscate or subvert tracking:
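One common countermeasure is blocking requests to known tracker domains, as ad blockers and privacy-focused browser extensions do. A minimal Python sketch of the idea, where the blocklist entries are made-up placeholders (real blocklists ship tens of thousands of entries):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of tracker hosts.
TRACKER_DOMAINS = {"tracker.example.com", "ads.example.net"}

def allow_request(url):
    """Return False when the URL's host is a blocked domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

print(allow_request("https://news.example.org/article"))       # allowed
print(allow_request("https://tracker.example.com/pixel.gif"))  # blocked
```

Matching on the full host (including subdomains) is what keeps a tracker from evading the filter by serving its pixel from `cdn.tracker.example.com`.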

It is important to note that much of this data collection is actively used to improve and personalize products and services. Netflix might not be able to recommend a spectacular show that is perfectly suited to your tastes if it isn’t able to merge data from your behavior and ratings with those of other Netflix users. In fact, Netflix asked everyone to participate in this project, and paid handsomely for the result. Amazon might not be otherwise able to notice that you’ve looked at reviews of healthy toothpastes, and serve you an ad with a better price and much better convenience than your local supermarket. Nonetheless, some feel that ad agencies and governments building up profiles of individuals’ likes, dislikes, behaviors, vices, and other “personal” matters is a violation of privacy.

What do you think? Did this survey miss any topics or important products? Let us know in the comments.

Facial Recognition at U.S. Airports: The “Future” is Now

Facial Recognition at U.S. Airports: The “Future” is Now
By Annie Lane | July 12, 2019

At many U.S. airports, passengers face long lines and multiple checkpoints for checking bags, obtaining boarding passes, screening carry-ons, and verifying identity to get to the gate. The Transportation Security Administration (TSA) hopes to streamline the process with facial recognition technology. As part of the Department of Homeland Security (DHS), the TSA is responsible for domestic and international air travel security in the US. The TSA estimates screening 965 million passengers annually, or roughly 2.2 million passengers daily. That number is growing at a rate of about 5% per year. Facial recognition systems promise to expedite the process and support the increasing passenger volume. Beyond security checkpoints, the TSA is partnering with airlines like JetBlue and Delta to achieve a “curb-to-gate” vision with photos granting access at each checkpoint.

While facial recognition technology could unlock efficiencies, it also creates new risks and privacy concerns. A massive database of passenger images must be collected, stored and protected. Passengers have a right to provide consent, especially since the accuracy of facial recognition technology is questionable. The application of facial recognition technology by government agencies is also under the bipartisan scrutiny of Congress.

How Facial Recognition is Applied in Airports

In its Biometric Screening Roadmap, the TSA lays out a plan to increase security and improve the passenger experience by automating manual screening tasks. Traditionally, a Transportation Security Officer at the checkpoint compares the presented photo ID to the face of the person standing in front of them and matches the name on the boarding pass. The TSA has started pilot programs with U.S. Customs and Border Protection (CBP) to evaluate facial recognition technology. In the pilots, a camera at the Travel Document Checker point takes a picture of the passenger’s face. The photo is then transferred to the cloud, where an algorithm attempts to match it against the stored facial template database managed by CBP to identify the passenger. Upon finding a match, the passenger is permitted to proceed.
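The matching step described above can be sketched in a few lines. This is a hypothetical illustration, not CBP’s actual (proprietary) system: it assumes faces have already been converted into numeric embedding vectors, and it compares a live capture against a gallery of stored templates using cosine similarity with an assumed threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_passenger(live_embedding, gallery, threshold=0.9):
    """Return the best-matching identity above the threshold, else None.
    `gallery` maps passenger IDs to stored facial-template embeddings."""
    best_id, best_score = None, threshold
    for passenger_id, template in gallery.items():
        score = cosine_similarity(live_embedding, template)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id

# Toy gallery of pre-enrolled facial templates (illustrative vectors only)
gallery = {"P001": [0.9, 0.1, 0.2], "P002": [0.1, 0.8, 0.5]}
print(match_passenger([0.88, 0.12, 0.21], gallery))  # P001
```

The threshold is where the accuracy trade-off discussed below lives: raise it and fewer impostors get through, but more legitimate passengers fail to match and get sent to manual screening.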

Storage and Security of Biometric Data
This system requires the storage of photos in a central database accessible to the federal agency. At the federal level, there is a collection of passport and visa photos. Applying the technology to domestic flights is a challenge because each state maintains its own database of driver’s license photos. However, a recent investigation by the Washington Post reveals that other federal agencies, including the FBI and ICE, have been accessing these state databases without the due process required by the 4th Amendment. While the TSA is not currently involved in this invasion of privacy, it violates the principle of consent and undermines the public’s trust in the government’s use of facial recognition.

No data system is fully secure against attacks, so the huge database this system requires becomes a desirable target, and increased access to it introduces additional vulnerability. This is a legitimate concern: this June, the CBP reported that Perceptics, a private contractor, was hacked. The hack compromised around 100,000 images of license plates and travelers collected at border checkpoints. The CBP placed blame solely on Perceptics and chose to suspend the company rather than take any responsibility. Based on this response, we cannot expect the CBP or TSA to accept accountability for this new database as they partner with private companies.

Consent and Opting-out
The TSA biometric roadmap highlights that all passengers will have the opportunity to opt out of biometric screening and be screened manually using traditional methods. While it is essential to offer consent and alternatives, the alternatives may come at a cost. Manual screening will likely take longer, and there may be a social cost as strangers observe defiance of the “norm”. Two different passenger accounts confirm this and observe that opting out is not a clear choice in JetBlue’s and Delta’s facial recognition boarding systems. Even if a passenger opts out at the gate, their image has still been gathered in CBP’s cloud database as part of the flight gallery accessed by the private airline.

Accuracy of Facial Recognition
The system’s accuracy goal is correct identification of 96% of legitimate passengers. Even at that accuracy level, 1 in 25 passengers would require additional screening. While the majority of passengers may have a better experience, a subpopulation will face inconveniences. The National Institute of Standards and Technology’s April evaluation of various facial recognition algorithms found that accuracy was consistently lower for black and female subjects than for white and male subjects. This means particular subpopulations will disproportionately bear the burden of the technology. While the prevalence of facial recognition is increasing, fairness has not been sufficiently addressed.
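The arithmetic behind that 1-in-25 figure is easy to check against the TSA’s own daily volume estimate quoted earlier:

```python
daily_passengers = 2_200_000   # TSA estimate of daily screenings
true_accept_rate = 0.96        # stated goal: 96% of legitimate passengers matched

# The 4% who fail to match get diverted to manual screening
flagged_per_day = daily_passengers * (1 - true_accept_rate)
print(f"{flagged_per_day:,.0f} passengers per day sent to manual screening")
# → 88,000 passengers per day sent to manual screening
```

Eighty-eight thousand manual diversions a day, concentrated (per the NIST findings) in particular demographic groups, is a substantial burden even at the target accuracy.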

Taking Action

While facial recognition technology is already deployed at some American airports, there are opportunities to put the brakes on the program. The DHS has standards for gathering public opinion and assessing privacy risks, including the creation of Privacy Impact Assessments. The House Oversight and Reform Committee and the House Homeland Security Committee both held hearings this summer on government use of facial recognition. We must hold our representatives accountable for protecting us against unnecessary invasions of privacy by government agencies.

Why the World Economic Forum’s Global Council on AI should focus on protecting children

By Ivan Fan | July 8, 2019

The advent of AI is a trend which will affect our children and our children’s children. In a world characterized by constant technological change, we must invest more in preparing future generations through improved governance of AI-interactions involving children, particularly in the context of areas such as education.

The newly created World Economic Forum (WEF) Global Council on Artificial Intelligence presents an opportunity to develop a global governance approach to AI, one that should include a strong treatment of governance issues around AI interactions with children. The forum is well positioned to do so; its Generation AI project has previously advanced important questions regarding uses of AI in relation to children.

The creation of the council comes in the wake of a recent trend of nations placing greater emphasis on cooperation in overall AI governance. Multilateral efforts by the EU and the OECD, in particular, have sparked efforts toward developing a consensus around core AI issues among their respective memberships. Notably, the European Commission’s High-Level Expert Group on AI recently released a set of ethical guidelines and recommendations for trustworthy AI, formally addressing the need for governance around AI interactions with children.

In a time when troubling terms such as “technological cold war” have cropped up, overcoming techno-nationalistic tensions and fostering collaboration between great powers has never been more important. The great challenge we face today is ensuring that people everywhere, in both developed and emerging countries, have sufficient access to AI resources. The best way to achieve this is by doubling down on opening up educational opportunities to youth everywhere, and the WEF is well positioned to provide critical, impartial leadership on this front.

Current talent pools are insufficient for taking advantage of the future range of occupations enabled by AI, and without systemic reform addressing rising inequality, societies will regress to a state in which opportunities are increasingly restricted to those able to access AI resources. National efforts such as the American AI Initiative, China’s New Generation Artificial Intelligence Development Plan, and the European Strategy on Artificial Intelligence all emphasize talent shortages as a significant impediment to implementing AI effectively.

This is why policies directed toward expanding the available talent pool are critical, including redesigning education systems to prepare children with the skills to thrive in an AI-enabled world. Many countries agree that overhauling education systems to teach the necessary cognitive, socio-cultural, entrepreneurial, and innovation competencies is a primary means of addressing talent shortages. Expanding access to STEM opportunities for women is also of vital importance, and must improve at all stages of the talent pipeline, from early childhood education all the way to the C-suite.

In his landmark book “AI Superpowers”, Kai-Fu Lee, co-chair of the WEF’s new council on AI alongside Microsoft President Brad Smith, writes about how perception AI is revolutionizing China’s education system. I serve as a research and teaching assistant to faculty here at UC Berkeley’s School of Information, and I have seen first-hand how new technologies can revolutionize the delivery of education in my own graduate program. Instructors now have unprecedented access to rich profiles of students and to dashboards notifying them of a whole host of AI-enabled features including high-fidelity, real-time notifications about performance at the individual and macro-level.

In revamping our education systems to use AI and to teach AI, it is crucial that the safety and rights of children are strictly respected by those who would impact their learning and growth. AI HLEG provides some ideas for the Global AI Council to consider – it recommends protecting children from “…unsolicited monitoring, profiling and interest invested habitualisation and manipulation” and giving children a “clean slate” of any public or private storage of data related to them upon reaching a certain age. The WEF’s Global Council on AI represents an outstanding opportunity to consider and iterate upon such ideas in order to better protect and serve the needs of our children.

Saving the Future of Phone Calls – The Fight to Stop Robocalls

By Anonymous | July 5, 2019

“Hello, this is the IRS. I am calling to inform you of an urgent lawsuit! You are being sued for failing to pay taxes and we have a warrant out for your arrest. Please call this number back immediately!”

The familiar noisy background laced with thinly veiled threats is a message many are unfortunately accustomed to. Robocalls are a pervasive annoyance and have become the top consumer complaint to the Federal Trade Commission (FTC). And despite robocalls being prohibited by law, Americans were bombarded by a record-breaking 4.4 billion robocalls in June 2019. That’s 145 million calls per day, or about 13 calls per person for the month!

Figure 1: YouMail Robocall Index

So how do robocallers obtain phone numbers anyway? Most often, they acquire numbers from third-party data providers, who in turn acquired them from a variety of sources that everyday users may not realize are collecting and selling their data. Some of these sources include:

  • Toll-free (1-800) numbers that use caller ID to collect phone numbers
  • Contest entries where users provided a phone number
  • Applications for credit
  • Contributions to charities where users provided a phone number

Methods of manipulating users into giving up personal information have evolved over the years as well. Robocallers can disguise their numbers to appear as local telephone numbers with neighboring area codes, tricking users into picking up unfamiliar calls from outside their personal contacts. Robocallers posing as government agencies, municipal utility providers, or even hospital staff to scam users into providing personal information have grown so common that lawmakers are now paying attention.

Figure 2: FTC Phone Scams

In November 2018, the Federal Communications Commission (FCC) called on carriers to develop an industry-wide standard to screen and block robocalls. In particular, the FCC urged carriers to adopt the SHAKEN (Signature-based Handling of Asserted information using toKENs) and STIR (Secure Telephone Identity Revisited) frameworks by the end of 2019. The SHAKEN/STIR frameworks employ secure digital certificates to validate that calls come from the purported source and have not been spoofed. Each telephone service provider must obtain a digital certificate from a certificate authority, which enables called parties to verify the accuracy of the calling number.
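The core idea of SHAKEN/STIR, a signed attestation that travels with the call, can be sketched as follows. This is a deliberately simplified illustration: real deployments sign PASSporT tokens (RFC 8225) with ECDSA keys backed by certificates from approved authorities, while this sketch substitutes an HMAC shared secret so it stays self-contained.

```python
import base64
import hashlib
import hmac
import json

# Stand-in for the originating carrier's private signing key. In real
# SHAKEN/STIR this is an ECDSA key whose certificate chains to an
# approved certificate authority; an HMAC secret is used here only so
# the example runs without a PKI.
CARRIER_KEY = b"originating-carrier-secret"

def sign_call(calling_number: str, called_number: str) -> str:
    """Originating carrier: attest to the calling number for this call."""
    claims = json.dumps({"orig": calling_number, "dest": called_number},
                        sort_keys=True)
    payload = base64.urlsafe_b64encode(claims.encode()).decode()
    sig = hmac.new(CARRIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_call(token: str) -> bool:
    """Terminating carrier: check the attestation before ringing the phone."""
    payload, sig = token.split(".")
    expected = hmac.new(CARRIER_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = sign_call("+15551234567", "+15559876543")
print(verify_call(token))        # True: the calling number is attested
print(verify_call("x" + token))  # False: a tampered payload fails
```

A spoofer who alters the calling number cannot produce a valid signature without the carrier’s key, which is what lets the called party’s provider flag or block unverified calls.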

Furthermore, in January 2019, Senators Edward J. Markey and John Thune introduced the TRACED Act, which aims to require all telephone service providers, including internet-based services such as Google Voice or Skype, to adopt similar call authentication technologies.

Together, private industry and regulatory efforts will make it harder for the majority of robocallers to spam consumers at the touch of a button. Like spam emails, calls with suspicious or unverified origins can be traced and blocked en masse. And though these recent tactics are certainly a step in the right direction for consumer protection, some fear that historically underserved communities might not upgrade in time and risk being further isolated. Rural areas that often rely on older landlines will foreseeably struggle to adopt the new technology due to outdated equipment and implementation costs. Immigrant communities who make and receive international calls might face higher levels of discrimination, as international calls cannot yet be fully authenticated. Their calls may be more likely to be labeled as fraud, and robocall operatives will exploit this gap in the technology to target an already vulnerable population.

As the world continues to evolve with newer technology, it’s important to think not only about who will benefit from these changes, but also about who will be left behind. In this case, as the FCC and private industry work together to protect consumers, they should also seek to mitigate the risk of scam and spam robocalls targeting vulnerable communities. One way to accomplish this is to work with other regulatory agencies, such as the Department of Housing and Urban Development, to create long-term, sustainable incentives for rural areas to modernize their infrastructure. Another is for private industries vested in international business to continue working closely with regulators to develop a global SHAKEN/STIR standard that protects an increasingly globalized world. After all, robocalls are hardly a uniquely American phenomenon, but taking the lead in safeguarding the next generation can be a defining American trademark.



  • “How Do Robo-Callers and Telemarketers Have My Cell Number Anyway?” BBB.
  • “How to Know It’s Really the IRS Calling or Knocking on Your Door.” Internal Revenue Service.
  • “Phone Scams.” Consumer Information, 3 May 2019.
  • “Thune, Markey Reintroduce Bill to Crack Down on Illegal Robocall Scams.” Senator Ed Markey, 17 Jan. 2019.
  • Vigdor, Neil. “Want the Robocalls to Stop? Congress Does, Too.” The New York Times, 20 June 2019.
  • “YouMail Robocall Index: June 2019 Nationwide Robocall Data.” Robocall Index.

Audit organizations, trust and their relationship with ethical automated decision making

By Jay Venkata | July 5, 2019

The world runs on trust. Worldwide, billions of dollars are spent every year on developing and maintaining trust. Any transaction, whether supply chain, finance, or healthcare related, requires trust between people, businesses, and entities. As an individual consumer, you make decisions based on trust almost hourly, from trusting the safety of your meals to trusting the financial transactions handled by your bank. Audit organizations and regulators, both private and public, are responsible for maintaining this trust in society. I work at one of the Big 4 global audit firms, and at the core of what each of these firms does is providing assurance to businesses and governments. My company’s mission statement is actually ‘Solving complex problems and building trust in society’. But what does trust look like in this digital world?

[Image 1]

Trust in the Digital World

In years past, audit organizations primarily based their decisions on financial ledgers, and the sources of decisions could be traced to a few executives or managers. Manual, paper-based processes could only be tracked manually. However, the trend toward automating business processes and their associated accounting and strategic decisions poses an interesting challenge for regulators. Work historically done by humans, such as deciding on credit card applications, can increasingly be automated, so alternative methods are needed to develop the same level of trust. One solution is to focus more on independently auditing the underlying algorithms. Audit firms may need technical staff who can work alongside functional experts to decode the algorithms and get to the root of any errors or biases that could affect decisions and outcomes. Hence there is a need for accounting colleges across the world to teach these interdisciplinary skills, making students more ready for their careers after graduation. Another challenge is that most businesses and governments do not seem very willing to publish their algorithms, the data used to train them, or the inferences made from them.
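As one concrete example of what auditing an algorithm’s outcomes might look like, the sketch below computes a disparate impact ratio for automated credit-approval decisions. The data, group labels, and the 80% rule of thumb (borrowed from U.S. employment-discrimination analysis) are illustrative assumptions, not a standard endorsed by any particular audit firm.

```python
def approval_rate(decisions):
    """Fraction of applications approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's.
    Values below ~0.8 are a common red flag for disparate impact."""
    rates = sorted([approval_rate(group_a), approval_rate(group_b)])
    return rates[0] / rates[1]

# Illustrative decision logs for two applicant groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

An outcome-level check like this does not require the business to publish its algorithm, only its decision logs, which is one practical way around the reluctance to disclose models noted above.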

[Image 2]

Auditing the algorithms

A longer-term solution that could be effective is working alongside governments to create transparency and openness standards applicable to all organizations. These types of guardrails already exist for financial statements and reporting, which are closely managed by the SEC in the US. The GDPR currently requires “appropriate mathematical or statistical procedures” to avoid or reduce the risk resulting from errors or inaccuracies. The French administration has also announced that algorithms developed for government use will be made publicly available, so that society at large can verify their correct application. There needs to be a similar push worldwide for rigorous standards on automated processes and decision making to create algorithmic accountability.

[Image 3]

Blockchain improves trust in transactions through distributed ledgers

This trend toward improving and automating trust may happen naturally as we move toward technologies like the Internet of Things and blockchain, which will create end-to-end traceability for products and transactions in a cheap and ubiquitous manner. The case for auditing algorithms, however, is clear. Audit firms and regulators need to stay one step ahead of the organizations they audit, and that applies to the current scenario, where the stakes couldn’t be higher: ensuring the integrity of data flows and decision making.

Works Cited

  • Abraham, C., Sims, R. R., Daultrey, S., Buff, A., & Fealey, A. (2019, March 18). How Digital Trust Drives Culture Change. Retrieved July 7, 2019.
  • O’Neil, C., & Schermer, B. (2018, July 30). Audit the algorithms that are ruling our lives. Retrieved July 7, 2019.
  • What is Blockchain Technology? (2018, September 11). Retrieved July 7, 2019.
  • Likens, S., & Bramson-Boudreau, E. (2019, May 02). Blockchain Promises Trust, But Can It Deliver? Presented by PwC – MIT Technology Review. Retrieved July 7, 2019.