Are regulations like GDPR the right solution to address online privacy concerns?
By Anonymous | July 19, 2019

With the internet turning the entire world into a potentially addressable market, anyone can build a niche business as long as they can find their customers. Personalized ads solve this problem by enabling businesses of all sizes to reach their customers from anywhere in the world. Ad-supported services such as Facebook and Google are extremely popular with users because of their business model: they provide excellent service free of charge. Google Search, for example, enables anyone to find answers to anything. This immense value proposition of data-driven free services, loved by users and valued by businesses, has revolutionized the economy. But for this business model to work, users need to share their data so that the internet companies can continuously improve their products and serve personalized ads.

However, individual users have little knowledge or control of the personal data that these internet companies are using and sharing. A conflict has emerged between the necessity of data collection and sharing for companies on the one hand and consumer autonomy and privacy for individuals on the other.

What is GDPR?

The first major regulatory framework to address this dilemma is arguably the GDPR, or “General Data Protection Regulation,” which came into effect on May 25, 2018. GDPR is a European Union (EU) regulation on data protection and privacy for all individuals within the EU. Due to the open nature of the web, GDPR rules apply to any business that markets or sells its products to EU consumers, or whenever a business collects or tracks the personal data of an individual who is physically located in the EU.

GDPR is designed to make it easier for consumers to protect their privacy and to enable them to demand greater transparency and accountability from those who use their data. It mandates that businesses that collect and use personal data put in place appropriate technology and processes to protect their consumers’ personal data, and it sets substantial fines for failing to comply.

GDPR also introduced the concept of “privacy by design,” which requires businesses to design their applications so that they collect only the minimum personal data necessary for their purpose and receive a person’s express and specific consent before collecting even that limited data.
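To make “privacy by design” concrete, here is a minimal sketch in Python of what purpose-limited, consent-gated collection might look like; the purposes and field names are hypothetical, not drawn from any real GDPR implementation:

```python
# Hypothetical sketch of "privacy by design": collect only the fields the
# stated purpose needs, and only after express, purpose-specific consent.
PURPOSE_FIELDS = {
    "shipping": {"name", "street", "city", "postal_code"},
    "newsletter": {"email"},
}

def collect_personal_data(purpose, submitted, consents):
    """Return only the minimal fields for `purpose`, if consent was given."""
    if purpose not in consents:
        raise PermissionError(f"No express consent recorded for '{purpose}'")
    allowed = PURPOSE_FIELDS[purpose]
    # Silently drop anything beyond the minimum needed for this purpose.
    return {k: v for k, v in submitted.items() if k in allowed}

# Example: the user consented to shipping only; extra fields never enter the system.
data = collect_personal_data(
    "shipping",
    {"name": "A. User", "street": "1 Main St", "city": "X",
     "postal_code": "00000", "birthdate": "1990-01-01"},
    consents={"shipping"},
)
print(data)  # birthdate is discarded at the door
```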

Impact of GDPR

Consumer attitudes on privacy

HubSpot partnered with the University of Virginia to explore how consumer attitudes have changed post-GDPR. A group of more than 1,000 subjects were surveyed across the EU and the US, and the results show that consumer concern about privacy has actually decreased in the year since GDPR went into effect:

Fewer consumers are concerned with how companies are collecting their personal data.

Fewer expect organizations to be more transparent with their policies on data collection, use and sharing with third parties.

Competition

An analysis of GDPR’s impact on the ad-tech industry suggests that regulation can reinforce dominant players at the expense of smaller entities, further concentrating power, because big companies have greater resources to tackle compliance.

Is there a better way to address privacy concerns?

Though it is still early days, studies suggest that GDPR is likely not working the way it was expected to. Consumers have either become more indifferent toward their online privacy or their confidence in protection mechanisms has declined in the first year since GDPR went into effect. There is also growing evidence that GDPR may end up harming competition, helping internet giants further increase their market share at the expense of smaller players.

Instead of a one-size-fits-all regulatory approach like GDPR, it might be worthwhile to define context-specific substantive norms for privacy, as suggested by Nissenbaum, and use them to constrain what information websites can collect, with whom they can share it, and under what conditions it can be shared. Secondly, conditions could be created for users to see not simply what they have shared, which GDPR requires, but also the profile that gets built after their inputs are merged with other data sources. Today, many users don’t care about privacy, particularly if the service is useful or saves them money. At the same time, most users likely have no idea what information these internet companies may have uncovered about them. They might change their minds if they actually saw their profile data, not simply the raw inputs. And finally, once empowered, users should be trusted to take care of their own privacy.

References

[1] Nissenbaum, “A Contextual Approach to Privacy Online”
[2] “General Data Protection Regulation: Consumer Attitudes” (HubSpot/University of Virginia survey)
[3] “The GDPR Effect on Competition”

Smart Home Devices Help Amazon and Google Invade Your Privacy Even More
By Martin Jung | July 19, 2019

The number of smart speakers with voice assistants, such as Amazon’s Echo and Google’s Home, had grown to 66.4 million by the beginning of 2019. The speakers keep track of the questions people ask and store recordings of them.

On top of that, the speakers integrate tightly with many smart home devices, allowing owners to communicate with those devices through the speakers and give instructions by voice command, remote control, or touchscreen. A short list of examples includes light bulbs made by Philips; ceiling fans made by Hunter Fan; thermostats made by Ecobee and Nest; doorbell cameras made by Ring; door locks made by August; and self-steering vacuums from iRobot. Recently, Amazon partnered with the UK National Health Service to allow owners to ask health questions, which Alexa answers based on searches of the NHS site. This opens up a new level of privacy concern, since Amazon now holds owners’ private health information.

Amazon’s and Google’s privacy policies state that this information may be shared within the companies or with their subsidiaries. The data that can be collected thus includes not only voice recordings but also maps of homes (vacuums), owners’ everyday schedules (light bulbs and thermostats), and health conditions. Moreover, when paired with other information about you, such as your home address and calendar, it helps fill out a complete record of your behavior and surroundings. In sum, this situation clearly expands surveillance and secondary use in terms of Solove’s privacy taxonomy.

Owners of Amazon Echo and Google Home can delete their recorded voice or opt out of some data collection. However, the information is stored by default, and the process of deleting the recordings is somewhat complicated. On the policy side, under Europe’s expansive General Data Protection Regulation (GDPR), people have the right to ask companies to stop using or to delete their private information. U.S. regulation, however, generally does not give people that much control. The California Consumer Privacy Act, set to take effect in 2020, will give owners the right to know what data is being collected and shared and allow them to deny companies the right to sell it.

Regardless of how strict the regulation is, data-hungry companies such as Amazon and Google will try to work around it and will continue to find ways to collect owners’ data to drive their business by expanding their device ecosystems. It is becoming ever more important for users to feel the urgency and configure their speakers and devices to collect as little private data as possible, even as they enjoy their smart home devices.

Big Medical Data, Big Ethics
By Yue Hu | July 19, 2019

The collection and use of personal medical and health data has come under increased scrutiny recently as technology and medicine develop. Medical scientists increasingly want patients to donate massive amounts of sensitive personal information for studies, such as work on the complex sets of factors causing sudden cardiac arrest (SCA) and determining survival. However, privacy protection and ethical medical research become a big concern given how hard it is to control data flows within the hidden network of patient-data distribution among healthcare organizations and their third-party vendors. How to balance protecting patients’ privacy against the benefits that big data brings to medical research has become a popular topic and drawn increasing public attention.

Hidden Network of Medical Data Sharing
Receiving a data-breach notice letter, or a spam call demanding payment on a medical bill, leaves people scared and helpless. Recently, I received a notice letter about a data privacy incident involving Retrieval-Masters Creditors Bureau Inc., doing business as American Medical Collection Agency. A security compromise of the company’s payments page, identified by an independent third-party compliance firm, affected millions of Quest Diagnostics Inc. customers. According to an external forensics review, an unauthorized user had access to the company’s system between August 1, 2018 and March 30, 2019, so the hackers had eight full months to gather personal information including first and last name, Social Security number, bank account information, name of the lab or medical service provider, date of medical service, referring doctor, and certain other medical information. Upwards of 20 million customers of Quest Diagnostics and Laboratory Corporation of America had their data stolen.

This shocking news left me with deeper concern about my own and my family’s personal medical data. I had two questions in mind when I received this letter:

  • Was I asked for consent to share data with this company? Obviously, the answer is ***NO***! I never gave anyone the right to share any data with this company. After researching online, I realized that a blood sample collected by WomanCare Center last August was sent to a medical lab, and this company collects receivables for medical labs.
  • How do I prevent such a data privacy incident in the future? It is definitely hard! When my doctor ordered a blood test, I had no autonomy to choose the lab company. Worse, my information was collected by a third-party agency without my knowledge or consent. I had totally lost control of my personal medical data flow.

This story points to the huge hidden network of medical data distribution among healthcare organizations and their third-party vendors. When patients receive care at a healthcare provider (HCP) or organization (HCO), most of the time they do not have the freedom to choose the medical lab for their tests. Moreover, they are not asked for consent before their test results and identities are sent to these third-party labs and vendors. Unfortunately, patients cannot see this hidden layer of the network until data breaches happen at these third-party companies.

Cyber Criminals in Health Care
In the past five years, we’ve seen healthcare data breaches grow in both size and frequency, with the largest breaches impacting as many as 80 million people. Medical data and identity are uniquely comprehensive, essential for quality clinical care and health-related research, and thus more valuable than credit card information or location data. Moreover, healthcare organizations today span cloud, network, application, and IoT environments, which makes data security harder. According to [a recent report](https://www.hcinnovationgroup.com/cybersecurity/news/13027679/report-healthcare-industry-workers-lack-basic-cybersecurity-awareness), SecurityScorecard ranks healthcare 9th out of all industries in terms of overall security rating. With frequent medical data breaches, the public has lost trust in a healthcare industry that still uses outdated technology and lacks basic security awareness.

Cybercrime leads to financial and operational losses on top of reputational damage and the cost of recovery efforts. It can also bring irretrievable physical, emotional, and dignitary harms: once data is inappropriately disclosed or stolen, patients cannot control the flow of their sensitive private medical data. According to [a February 2017 survey from Accenture](https://newsroom.accenture.com/news/one-in-four-us-consumers-have-had-their-healthcare-data-breached-accenture-survey-reveals.htm), 50% of breach victims suffered medical identity theft, with an average out-of-pocket cost of $2,500. Unfortunately, many breaches are detected through a fraud alert or an error on a credit card statement or explanation of benefits, rather than through a notification from the company.

Code of Medical Ethics
Upholding trust in the patient-physician relationship, preventing harm to patients, and respecting patients’ privacy and autonomy create responsibilities for individual physicians, medical practices, and health care institutions whenever patient information is shared and distributed to third-party vendors. Given the hidden and complicated network of medical data distribution between medical institutions and third-party vendors, medical health organizations and individual physicians have an obligation to better secure patients’ data, protecting vulnerable populations and respecting medical privacy.

  • Risk mitigation before a breach: Every health care organization should take a proactive approach to security: training staff in cyber awareness, limiting security access, providing early alerts about trending cyberattacks, and vetting partners and third-party vendors to reduce breach risk. Total security is impossible, so every health care organization and medical institute needs to evaluate its acceptable level of breach risk and set its cybersecurity strategy with professional cybersecurity providers.
  • Data sharing with third parties: Reviewing partners’ and third-party vendors’ security levels and standards before sharing medical data is very important for medical institutions. Collaborating with third-party companies that lack data security awareness imposes a high risk of cyberattack even if the institution itself maintains a high security level. In addition, to enhance patient privacy, institutions should apply technological solutions to anonymize, de-identify, or perturb the data (a toy sketch follows this list).
  • Actions after a data breach: It is important to ensure that patients are promptly informed about the breach, what information was exposed, and the potential harms. The healthcare organization should also give patients information that lets them mitigate the potential adverse consequences of the inappropriate disclosure of their personal medical information.
  • What patients can do after a data breach: Data-breach victims should remain vigilant for fraud and identity theft by closely reviewing and monitoring their account statements and credit reports. Patients who believe they are victims of identity theft, or who have evidence that their personal information is being misused, should immediately contact the FTC, which can provide information about avoiding identity theft.
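As a toy illustration of the anonymization and perturbation techniques mentioned above (the field names, salt handling, and noise scale are all hypothetical, not a production de-identification pipeline):

```python
import hashlib
import random

SALT = "replace-with-secret-salt"  # kept separate from the shared dataset

def deidentify(record):
    """Toy de-identification: pseudonymize the ID, generalize the date,
    and perturb a numeric lab value before sharing with a third party."""
    out = dict(record)
    # Pseudonymize: a salted hash replaces the direct identifier.
    out["patient_id"] = hashlib.sha256(
        (SALT + record["patient_id"]).encode()).hexdigest()[:16]
    # Generalize: keep only the year-month of the service date.
    out["service_date"] = record["service_date"][:7]
    # Perturb: add small random noise to the lab value.
    out["lab_value"] = record["lab_value"] + random.gauss(0, 0.5)
    return out

print(deidentify({"patient_id": "A123", "service_date": "2018-08-14",
                  "lab_value": 5.2}))
```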

Works Cited:

  • Breach of Security in Electronic Medical Records: https://www.ama-assn.org/delivering-care/ethics/breach-security-electronic-medical-records
  • One in Four US Consumers Have Had Their Healthcare Data Breached, Accenture Survey Reveals: https://newsroom.accenture.com/news/one-in-four-us-consumers-have-had-their-healthcare-data-breached-accenture-survey-reveals.htm
  • Top 10 Biggest Healthcare Data Breaches of All Time: https://digitalguardian.com/blog/top-10-biggest-healthcare-data-breaches-all-time
  • How to Prevent a Healthcare Data Breach in 2018: https://healthitsecurity.com/news/how-to-prevent-a-healthcare-data-breach-in-2018
  • The tricky ethics—and big risks—of medical ‘data donation’: https://www.advisory.com/daily-briefing/2018/07/18/personal-data
  • How to be a cybersecurity sentinel: https://www.advisory.com/research/health-care-advisory-board/multimedia/infographics/2018/how-to-be-a-cybersecurity-sentinel
  • Big data, big ethics: how to handle research data from medical emergency settings?: https://blogs.biomedcentral.com/on-medicine/2018/09/13/big-data-big-ethics-handle-research-data-medical-emergency-settings/
  • Debt Collector Goes Bankrupt After Health Care Data Hack: https://www.bloomberg.com/news/articles/2019-06-17/american-medical-collection-agency-parent-files-for-bankruptcy

Cyberbullying is on the Rise: What Can We Do to Stop It?
By Hilary Yamtich | July 19, 2019

Seventh grader Gabriella* (name changed) comes to school and reports to me, her teacher, that last night another student at the school sent mean messages about her on Snapchat to other students, and now her friends don’t want to sit with her at lunch. I reported this to administrators, who were unable to identify the sender of the messages and did not follow up further.

Cyberbullying is on the rise, and female students are three times more likely than male students to be bullied online. Data from “Student Reports of Bullying: Results from the 2017 School Crime Supplement to the National Crime Victimization Survey” show that 21% of female students and 7% of male students between ages 12 and 18 experienced some form of cyberbullying in 2017. Students in grades 9 through 11 are most likely to experience cyberbullying. Overall, this represents a 3.5% increase from the last year the data was collected (2014-15).

Of course this problem is getting worse; the increase is not just due to more reporting. Much younger students are gaining access to social media through smartphone apps. Students are more savvy about how to use social media. And students know that it is easy to maintain some degree of anonymity online by creating fake accounts to bully their peers. As in Gabriella’s case, administrators rarely have the time or tools to thoroughly address these incidents.

There are three main tools being used to address the problem.

First, lobbying groups such as Common Sense Education are pushing for legislation that criminalizes electronic bullying. In some states, cyberbullying can be prosecuted and even minor offenders can serve time for such offenses. Cyberbullying can also be a hate crime if certain language is involved. However, the vast majority of cyberbullying cases do not reach the level of actual legal prosecution.

Second, tech companies are also developing tools to address the issue. According to one study conducted by the anti-bullying organization Ditch the Label, the largest number of cyberbullying instances happen via Instagram. Instagram is using machine learning algorithms to identify potentially abusive comments, and in 2017 it unveiled a tool that allows users to block certain words or even entire groups of users.
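Instagram’s models are proprietary, but the user-facing “block certain words” control can be approximated in a few lines; this is a hypothetical sketch, not Instagram’s code:

```python
import re

def is_hidden(comment, blocked_words, blocked_users, author):
    """Hide a comment if its author is blocked or it contains a blocked word."""
    if author in blocked_users:
        return True
    # Tokenize crudely; real systems also handle misspellings and emoji.
    tokens = re.findall(r"[a-z']+", comment.lower())
    return any(tok in blocked_words for tok in tokens)

print(is_hidden("you are a loser", {"loser"}, set(), "someuser"))  # True
```

The ML-based filters do something strictly harder (scoring novel abusive phrasings rather than exact words), but the user-configured blocklist layer works essentially like this.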

Finally, many states have policies about how schools are meant to respond to cyberbullying incidents. For instance, in California the government provides an online training program for administrators to prevent and respond to online bullying.

Ultimately, the effectiveness of all these tools comes down to engaging with the young people involved. We can only address cyberbullying if we know it is happening; that knowledge can come from young people understanding what is happening and feeling comfortable enough to tell an adult, or from the machine learning tools that tech companies use to flag incidents. Obviously the technical solutions are limited by how strongly tech companies are incentivized to catch acts of cyberbullying and by the effectiveness of the tools themselves. The likelihood that students will bring every incident to adult attention is also not high. Even when adults are made aware of these incidents, school officials might not have the technical knowledge or time to fully address them. Students are essentially on their own to deal with cyberbullying; without thorough education about their right to be free of bullying online, many students simply accept the abuse and suffer silently.

Gabriella spoke up about what was happening to her, but little could be done. The bullying did not rise to the level of a hate crime, for lack of specific language, and administrators were not able to identify with certainty which students were involved. Gabriella became more withdrawn throughout the year, and by the end of the year she was in counseling for depression.

Educators, especially in middle and high school, need to implement the policy recommendations that already exist to ensure that these incidents are addressed effectively. Teachers need to be supported in taking time to educate students about these issues. And tech companies need to work more directly with parents and young people to ensure that the protections they design are actually used effectively.

Are the Internet and Social Media Making Society More Polarized?
By Shirley Deng | July 19, 2019

The Problem

Misinformation and fake news are problems we try very hard to combat today, as they tend to feed conspiracy theories and plots, end in hatred, and leave society more polarized. These problems only seem to be growing in impact and consequence because of easy access to the Internet. Rising scandals and tightening regulations have also helped bring these issues to people’s attention.

The Factors

Yet society, individuals, and government institutions are also putting the blame on the Internet and, more specifically, social media. In October 2018, Peter Bergen and David Sterman argued at New America that the main terrorist problem in the United States today is one of individuals radicalized by a diverse array of ideologies absorbed from the Internet [1]. And as the Stanford professor Francis Fukuyama points out, polarization might be caused and fostered by many forces. Though Americans are sorting themselves out geographically, living in increasingly politically homogeneous neighborhoods, social media and the proliferation of media channels via the Internet and TV have played a role by allowing people to communicate exclusively with people like themselves [2].

Unquestionably, the development of the Internet has enabled connections between people regardless of geographic barriers, fostered all kinds of conversations instantly, and given people access to content matching their own preferences, thanks to social media and recommendation algorithms. But psychology studies point to many other factors explaining why conspiracy theories spread fast and are adopted at scale. Specifically, people who have a low level of analytic thinking, who tend to overestimate the causal connection between co-occurring events, or who are anxious and feel powerless are more likely to turn to conspiracy theories [3].

A group of researchers from the Laboratory of Computational Social Science and other institutions analyzed Facebook data to compare the spreading patterns of scientific topics and conspiracy rumors [4]. The defining difference between a science topic and a conspiracy rumor is whether it can be validated through a process. Their analysis and model produced very interesting findings. Whether the content is a science topic or a conspiracy rumor, when people first receive such information they tend to share it with their close friends first. In other words, most of the time information is taken up by friends who have the same profile, belonging to the same echo chamber. Users tend to aggregate in communities of interest, causing reinforcement and fostering confirmation bias, segregation, and polarization. Interestingly, because rumors run against the truth yet are more easily picked up, their lifetime has a positive relation with their cascade size, whereas a science topic’s longer lifetime does not correspond to a higher level of interest.

Yes, the Internet and social media sites may have fueled conspiracy rumors, not because they are evil in nature, but because people leverage them to build their own bubbled communities, sharing conspiracy-related information with fellow believers rather than non-believers [5]. In this way, beliefs and biased misinformation are reinforced inside each of these communities, resulting in more strongly polarized beliefs.
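As a toy model of that mechanism (my own sketch, not the researchers’ actual model), consider a simulation in which agents pass content preferentially to like-minded neighbors, and each share nudges the receiver toward the sharer’s position:

```python
import random

random.seed(0)
# Agents hold a belief in a rumor on a 0-to-1 scale: two loose communities.
beliefs = [random.uniform(0.0, 0.4) for _ in range(50)] + \
          [random.uniform(0.6, 1.0) for _ in range(50)]

def share_round(beliefs, homophily=0.9, nudge=0.05):
    """One round of sharing: each agent passes content to a like-minded
    neighbor with probability `homophily`; the receiver's belief shifts
    slightly toward the sharer's."""
    new = beliefs[:]
    n = len(beliefs)
    for i, b in enumerate(beliefs):
        similar = [j for j in range(n) if j != i and abs(beliefs[j] - b) < 0.2]
        if similar and random.random() < homophily:
            receiver = random.choice(similar)              # echo-chamber share
        else:
            receiver = random.choice([k for k in range(n) if k != i])
        new[receiver] += nudge * (b - beliefs[receiver])   # reinforcement
    return new

for _ in range(100):
    beliefs = share_round(beliefs)
skeptics = sum(b < 0.5 for b in beliefs)
print(f"believers: {len(beliefs) - skeptics}, skeptics: {skeptics}")
```

Run repeatedly, each community tightens around its own position while the gap between the two persists, which is the echo-chamber reinforcement described above.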

The Potential Solutions

Before blaming the Internet and social media, it is more meaningful and insightful to look into human factors. Many psychology studies hint that people are more interested in exaggerated, distorted information that fits their theories. Those who share it are either firm believers, people who have doubts or are unsatisfied with their current situation, or people with a lower level of analytical thinking ability.

Social and Education

While social media may help build bubbles, bursting the bubble seems an obvious way to keep people who share the same profile from drifting to extremes. Education might include helping people accept that people are different, and that life is not about debating who is right or wrong. Oftentimes extreme, polarized information is a mix of fact and rumor, which makes the situation more complex. It helps to expose people to different opinions along with the corresponding facts and evidence; then we can encourage people to find common ground.

Fact-Checking

Psychology studies also suggest that when people have doubts, we should give them facts. The rising number of media outlets focused on fact-checking and political-accountability reporting has played an important role in addressing the issue.


Source: Mary Meeker’s Internet Trend Report [6]

Increasing controversy over the credibility of journalists has made fact-checking more important. As Alan Greenblatt puts it, “This is an incredibly important time to be a journalist. Never has the watchdog role been more important.” [7] During the 2016 presidential campaign, at least 6 million people flocked to a transcript of the debate that was fact-checked by 20 NPR journalists in real time [8]. Globally, partnerships among social media platforms, Internet companies, and scientific institutions also help build a safer and healthier online environment.

Technology and Product

While Internet companies and social media platforms should not take all the blame, they can take up some responsibility by acting proactively to maintain safer and healthier online communities. Algorithmic solutions have been proposed: Google has explored a trustworthiness score that ranks query results by estimating the trustworthiness of a web source, building “knowledge-based trust” [9]. WeChat, the social media giant in China, built an in-app official fact-checking channel that helps label rumors and stop them from spreading. WhatsApp, the messaging app that hosts a quarter of the global population, labels all forwarded messages and reminds users to think twice before forwarding to others.
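The core idea of knowledge-based trust [9], greatly simplified, is to score a source by how well the facts it asserts agree with a reference knowledge base. A toy version (the facts and data structures here are hypothetical, not Google’s implementation):

```python
# Toy knowledge-based trust: score a source by the share of its asserted
# (subject, attribute) -> value facts that match a reference knowledge base.
KNOWLEDGE_BASE = {
    ("obama", "birthplace"): "honolulu",
    ("earth", "shape"): "oblate spheroid",
}

def kbt_score(asserted_facts):
    """Fraction of a source's checkable facts that are correct."""
    checkable = [(k, v) for k, v in asserted_facts if k in KNOWLEDGE_BASE]
    if not checkable:
        return None  # nothing to verify
    correct = sum(KNOWLEDGE_BASE[k] == v for k, v in checkable)
    return correct / len(checkable)

site_a = [(("obama", "birthplace"), "honolulu"),
          (("earth", "shape"), "oblate spheroid")]
site_b = [(("obama", "birthplace"), "kenya")]
print(kbt_score(site_a), kbt_score(site_b))  # 1.0 0.0
```

The published method is far more involved (it must first extract facts from pages and reason jointly about extraction errors versus source errors), but the scoring intuition is this simple ratio.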

Legal

Last but not least, the law should be an important resort for fighting the bad actors in our communities, online or offline. Many conspiracy theories that aim to drive people toward polarized positions may be initiated by people with ulterior motives. In this regard, beyond guidelines and policies, we should hold bad actors accountable for their own actions. For example, in 2013 the Chinese government brought out tough measures to stop the spread of irresponsible rumors, threatening three years in jail if untrue posts online are widely reposted [10]. Although the measure initially drew many angry responses from Internet users in China, it did help contain the spread of rumors and minimize their harms.

Citations:
[1] “The Real Terrorist Threat in America”, https://www.newamerica.org/international-security/articles/real-terrorist-threat-america/
[2] “The Great Recession has influenced populist movements today, say Stanford scholars”, https://news.stanford.edu/2018/12/26/explaining-surge-populist-politics-movements-today/
[3] “The Psychology of Conspiracy Theories”, Association for Psychological Science (APS), https://journals.sagepub.com/doi/pdf/10.1177/0963721417718261
[4] “The spreading of misinformation online”, PNAS January 19, 2016 113 (3) 554-559; first published January 4, 2016 https://doi.org/10.1073/pnas.1517441113
[5] “The internet fuels conspiracy theories – but not in the way you might imagine”, http://theconversation.com/the-internet-fuels-conspiracy-theories-but-not-in-the-way-you-might-imagine-98037
[6] “Internet Trend Report”, 2019, Mary Meeker
[7] “The Future of Fact-Checking: Moving ahead in political accountability journalism”, https://www.americanpressinstitute.org/publications/reports/white-papers/future-of-fact-checking/
[8] “NPR’s real-time fact-checking drew millions of readers”, https://www.poynter.org/fact-checking/2016/nprs-real-time-fact-checking-drew-millions-of-viewers/
[9] “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources”, http://www.vldb.org/pvldb/vol8/p938-dong.pdf
[10] “China threatens tough punishment for online rumor spreading”, https://www.reuters.com/article/us-china-internet/china-threatens-tough-punishment-for-online-rumor-spreading-idUSBRE9880CQ20130909?feedType=RSS&feedName=technologyNews

Un-unemployed
By Mads Bulkow-Macy | July 19, 2019

The unemployment rate is often used as shorthand for the state of the entire economy. When the Federal Reserve signaled an intent to lower interest rates last week, many news stories supplied context by pointing to recent jobs numbers, headlined by low unemployment. The 3.7% June unemployment rate is near a 50-year low, suggesting that the economy is very healthy indeed. Why, then, would the Fed try to give the economy a boost?

Unemployment is near a 50-year low.


Seasonally adjusted unemployment rate fluctuation since 1969. (Source: Bureau of Labor Statistics)

Jerome Powell’s specific calculus will continue to be the source of much speculation, but one issue that economic headlines would do well to consider is what a low unemployment rate really means. The categories of “employed” and “unemployed,” while at first glance complementary, actually leave out a significant portion of the population. To understand why, it is useful to examine the process by which the Bureau of Labor Statistics develops this statistic.

Since a monthly census of the entire population is infeasible, the statistic is based on a sample of about 60,000 households, weighted to be demographically representative across “age, sex, race, Hispanic ethnicity, and state of residence.” The employed/unemployed determination is made via an interview. Reporting employment places a person in the “employed” category. In order to be counted as “unemployed,” a person must:

    • Not currently have a job.
    • Be actively seeking work (in the last four weeks).
    • Be available to work, supposing they receive an offer.

Anyone who falls into neither the “employed” nor the “unemployed” category is (in general) not counted in the labor force.
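A minimal sketch of that decision logic in Python (the field names paraphrase the criteria above; the actual CPS questionnaire is more elaborate):

```python
def labor_force_status(has_job, searched_last_4_weeks, available_to_work):
    """Toy version of the classification described above."""
    if has_job:
        return "employed"
    if searched_last_4_weeks and available_to_work:
        return "unemployed"
    # Discouraged workers, retrainers, caretakers, etc. all land here,
    # and are excluded from the unemployment rate's denominator.
    return "not in labor force"

respondents = [
    dict(has_job=True,  searched_last_4_weeks=False, available_to_work=True),
    dict(has_job=False, searched_last_4_weeks=True,  available_to_work=True),
    dict(has_job=False, searched_last_4_weeks=False, available_to_work=True),
]
statuses = [labor_force_status(**r) for r in respondents]
labor_force = [s for s in statuses if s != "not in labor force"]
rate = statuses.count("unemployed") / len(labor_force)
print(statuses, f"unemployment rate: {rate:.0%}")
```

Note how the third respondent, a would-be worker who has stopped searching, simply vanishes from both the numerator and the denominator.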

If we try to use the unemployment rate to determine, for instance, the number of households struggling to put food on the table, we will find it in many ways inadequate. First, employment in a single job does not necessarily mean that the person in question has sufficient means to support themselves or their family; thus it cannot be used as an accurate predictor of the strain on social safety nets. Second, there is a large class of would-be workers who do not actively engage in job seeking. These workers may be temporarily unable to engage in such activities, or may have been searching for long enough that they have become discouraged. This group includes those whose skills have become irrelevant in a changing workforce and who are working to learn a new set of skills before they attempt reentry. It also includes those who suspect that their attempts to seek employment will be met with discrimination or hostility. Note that this would disproportionately affect groups with probable cause to be concerned about employment discrimination, such as members of the trans community and people of color. Ultimately, the category created is likely to exclude a good portion of those who would consider themselves unemployed, and it fails to capture a variety of forms of personal economic distress. It also fails to capture broader economic inefficiencies, such as underemployed workers or workers who have been forced to seek retraining.

In creating and defining these categories, the narrower aim of the BLS seems to be to measure the availability of workers relative to the current workforce. Yet even here it falls short, given the potential for job seekers with irrelevant skills to be counted as available while underemployed workers, even those actively seeking advancement, are counted as unavailable.

While as a coarse metric the unemployment rate still serves a purpose as an economic indicator, the category of “unemployment” does not represent what it purports to. It would be useful for everyone from journalists to policy makers to treat it with caution, and consider more closely the people and stories it fails to include.

References

https://www.bls.gov/cps/cps_htgm.htm
https://www.cnbc.com/2019/07/05/jobs-report-june-2019.html
https://www.nytimes.com/2019/07/19/upshot/economy-fed-powell-rate-cuts-analysis.html

“I’m not worried about my privacy online.” — A Millennial’s Perspective
By Anonymous | July 12, 2019

As I type this, my Word document draws a squiggly red line underneath the word “Millennial’s” in my title. How quick I am to ignore the suggestion, knowing well that this label has been made ubiquitous by this generation’s views, stances, and actions: from religion to politics, marriage to the economy.

Millennials are the generation born between 1981 and 1996, currently between the ages of 23 and 38. (Source: pewresearch.org)

Their perspectives have frustrated the Boomers above them and have quickly molded the world for the Gen Z’s below them. In light of recent privacy scandals in the technology industry and the prevalence of “fake news” in the media, millennials have not been ruffled. Do we chalk it up to apathy and ignorance? To their comfort with technology due to early exposure? To their abundant awareness and caution?

Many surveys have been conducted to understand the viewpoints of the various generations with regard to security and privacy, and the root causes are still being understood. According to a 2015 study by the Media Insight Project, only 20% of Millennials worry about privacy in general all of the time, their biggest concern being that their identity or financial information will be stolen.

Survey reached 1,045 adults across the US, ages 18-34. (Source: americanpressinstitute.org)


As a part of that generation, which I would consider rather diverse, I can understand the different root causes for these perspectives.

One is that the Millennial generation was born in the digital age, when the internet was part of the everyday person’s life, and Millennials were the first true customers and fuel of social media. They haven’t known another world, so having others access their information feels normal to them.

Another reason could be that Millennials have yet to feel the repercussions of any security breach. From Cambridge Analytica to the Marriott account breaches, they understand that these events have occurred but have not yet been personally impacted by any of them.

On the other hand, Millennials feel that they are in control of their data: they have chosen what to share online and have actively accepted any risk of their data being leaked when they decide to engage with certain products or apps. They see no true harm in their data being released, except when it comes to financial information (as noted above). This is the idea that they have “nothing to hide,” a credo of a generation that feels the need to share everything.

Our data has always been around; since before the internet, there has been data. We have just reached an age where we can capture, measure, and use it to enhance our world like never before. There has to come a point where you accept the world you have become a part of and your role in it. It is the world you grew up in, and the innovation your data has fed has made your life easier and better. You start identifying tradeoffs: “If I don’t share my location with my Uber app, I will need to figure out the exact address of where I am and make sure I don’t misspell anything so that my Uber driver can pick me up at the right spot.” You have chosen your life, the conveniences, the benefits, over the seemingly small and insignificant pieces of privacy you are handing away. And as a millennial, I may be naive, but we have reached a point where there is no “acceptable” alternative.

Sources

https://www.americanpressinstitute.org/publications/reports/survey-research/millennials-news/single-page/

https://www.pewresearch.org/fact-tank/2019/01/17/where-millennials-end-and-generation-z-begins/ft_19-01-17_generations_2019/

https://www.forbes.com/sites/larryalton/2017/12/01/how-millennials-think-differently-about-online-security/#6571f7e7705f

https://www.proresource.com/2018/05/why-millennials-and-gen-z-worry-less-about-online-privacy/

https://news.gallup.com/businessjournal/192401/data-security-not-big-concern-millennials.aspx

https://www.forbes.com/sites/blakemorgan/2019/01/02/nownership-no-problem-an-updated-look-at-why-millennials-value-experiences-over-owning-things/#7acd2f5522fc

A Simple (July 2019) Online Privacy Tech Stack
By Eduard Gelman | July 12, 2019

As consumers become increasingly aware that their behavior is actively tracked by advertising firms and governments, and that this information is occasionally lost in high-profile, high-stakes leaks, many are beginning to modify their habits. Privacy and security concerns are driving the development and adoption of a slew of tools that individuals can use to make their online, and increasingly visible “offline,” behavior more private, or at least more secure. Since the toolsets and their adoption are in flux, this blog will attempt to survey the landscape as it exists in July 2019, reviewing the harms that consumers are trying to avoid. It will take some liberties in picking “flagship” products to represent a technique and in omitting less-adopted technologies for the sake of concision.

The main privacy violations that these tools help consumers minimize fit neatly into Solove’s privacy taxonomy, with threats arising as “surveillance,” “identification,” and “secondary use” harms. Each product discussed in this blog post addresses one or more of these potential harms.

Surveillance harms may come from private or public entities that are able to read content exchanged between individuals. Just as the NSA has been excoriated for its wide-reaching surveillance procedures, Facebook recently began to block private and public messages depending on their content. It’s relatively clear that these products are well-intended, but they may carry alarming, negative consequences. Further harms arise when activity across disparate platforms and connection points can be identified as belonging to the same individual, leading directly to potential exploitation of individuals based on their history. Famously, an unaware father was alerted to his daughter’s pregnancy by a wayward advertisement. When sensitive information leaks and is used for identity theft, this quickly escalates into a security problem with serious financial and legal ramifications.

Online, there are countermeasures that individuals can take to obfuscate or subvert tracking.
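As one tiny, concrete example of the genre (a sketch of a common technique, not any particular product), a few lines of Python can strip well-known tracking parameters from URLs before you visit or share them:

```python
# Sketch of one countermeasure: remove common tracking query parameters
# (e.g. utm_*) from a URL before visiting or sharing it.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PREFIXES = ("utm_", "fbclid", "gclid", "mc_eid")

def strip_trackers(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if not k.startswith(TRACKING_PREFIXES)]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_trackers(
    "https://example.com/article?id=42&utm_source=newsletter&fbclid=abc"))
# -> https://example.com/article?id=42
```

Browser extensions, VPNs, and the like operate at other layers of the same fight; this merely illustrates how mechanical much of the tracking, and the countermeasures, can be.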

It is important to note that much of this data collection is actively used to improve and personalize products and services. Netflix might not be able to recommend a spectacular show perfectly suited to your tastes if it couldn’t merge data about your behavior and ratings with those of other Netflix users; in fact, Netflix asked everyone to participate in this project, and paid handsomely for the result. Amazon might not otherwise be able to notice that you’ve looked at reviews of healthy toothpastes and serve you an ad with a better price and much better convenience than your local supermarket. Nonetheless, some feel that ad agencies and governments building up profiles of individuals’ likes, dislikes, behaviors, vices, and other “personal” matters is a violation of privacy.

What do you think? Did this survey miss any topics or important products? Let us know in the comments.

Facial Recognition at U.S. Airports: The “Future” is Now
By Annie Lane | July 12, 2019

At many U.S. airports, passengers face long lines and multiple checkpoints for checking bags, obtaining boarding passes, screening carry-ons, and verifying identity to get to the gate. The Transportation Security Administration (TSA) hopes to streamline the process with facial recognition technology. As part of the Department of Homeland Security (DHS), the TSA is responsible for domestic and international air travel security in the US. The TSA estimates that it screens 965 million passengers annually, roughly 2.2 million passengers daily, a number growing at about 5% per year. Facial recognition systems promise to expedite the process and support the increasing passenger volume. Beyond security checkpoints, the TSA is partnering with airlines like JetBlue and Delta to achieve a “curb-to-gate” vision with photos granting access at each checkpoint.

While facial recognition technology could unlock efficiencies, it also creates new risks and privacy concerns. A massive database of passenger images must be collected, stored, and protected. Passengers have a right to give informed consent, especially since the accuracy of facial recognition technology is questionable. The application of facial recognition technology by government agencies is also under bipartisan scrutiny in Congress.

How Facial Recognition is Applied in Airports

The TSA lays out its plan to increase security and streamline screening by automating manual checks in its Biometric Screening Roadmap. Traditionally, a Transportation Security Officer at the checkpoint compares the presented photo ID to the face of the person standing in front of them and matches the name on the boarding pass. The TSA has started pilots with U.S. Customs and Border Protection (CBP) to evaluate facial recognition technology. In the pilots, a camera takes a picture of the passenger’s face at the Travel Document Checker point. The photo is then transferred to the cloud, where an algorithm attempts to match it against the stored facial template database managed by CBP to identify the passenger. Upon finding a match, the passenger is permitted to proceed.
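Schematically, the match step works like most face identification systems: compare an embedding of the new photo against the gallery of stored templates and accept the best match above a threshold. This is a generic sketch (the embeddings, dimensions, and threshold are hypothetical), not CBP’s actual system:

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Return the gallery identity whose stored face template is most
    similar (cosine similarity) to the probe embedding, above threshold."""
    best_id, best_sim = None, threshold
    for identity, template in gallery.items():
        sim = float(np.dot(probe, template) /
                    (np.linalg.norm(probe) * np.linalg.norm(template)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id  # None -> no match: fall back to a manual document check

# Hypothetical 128-d embeddings produced upstream by a face-encoding model.
rng = np.random.default_rng(0)
gallery = {"passenger_a": rng.normal(size=128),
           "passenger_b": rng.normal(size=128)}
probe = gallery["passenger_a"] + rng.normal(scale=0.1, size=128)
print(best_match(probe, gallery))  # passenger_a
```

Everything interesting about accuracy lives in the embedding model and the threshold choice: set the threshold too low and impostors match; too high and legitimate passengers are rejected.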

Storage and Security of Biometric Data
This system requires storing photos in a central database accessible to the federal agency. At the federal level, there is a collection of passport and visa photos. Applying this technology to domestic flights is a challenge because each state has its own database of driver’s license photos. However, a recent investigation by the Washington Post reveals that other federal agencies, including the FBI and ICE, have been accessing these state databases without the due process required by the 4th Amendment. While the TSA is not currently involved in this invasion of privacy, it violates the principle of consent and betrays the public’s trust in the government’s use of facial recognition.

No data system is fully secure against attacks, so the huge database this requires becomes a desirable target, and increased access to it introduces additional vulnerability. This is a legitimate concern: this June, the CBP reported that Perceptics, a private contractor, was hacked. The hack compromised around 100,000 images of license plates and travelers collected at border checkpoints. The CBP placed the blame solely on Perceptics and chose to suspend the company rather than take any responsibility. Based on this response, we cannot expect the CBP or TSA to accept accountability for the new database as they partner with private companies.

Consent and Opting-out
The TSA biometric roadmap highlights that all passengers will have the opportunity to opt out of biometric screening and be screened manually using traditional methods. While it’s essential to seek consent and provide alternatives, these alternatives may come at a cost. Manual screening will likely take longer, and there may be a social cost as strangers observe defiance of the “norm.” Two different passenger accounts confirm this and observe that opting out is not a clear choice in JetBlue’s and Delta’s boarding facial recognition systems. Even if a passenger opts out at the gate, their images have still been gathered in CBP’s cloud database as part of the flight gallery to be accessed by the private airline.

Accuracy of Facial Recognition
The system’s accuracy goal is correct identification of 96% of legitimate passengers. Even if this accuracy level is achieved, 1 in 25 passengers would require additional screening. While the majority of passengers may have a better experience, a subpopulation will face inconveniences. The National Institute of Standards and Technology’s April evaluation of various facial recognition algorithms found that black and female subjects had consistently lower accuracy than white and male subjects. This means a particular subpopulation will disproportionately bear the burden of the technology. While the prevalence of facial recognition is increasing, fairness has not been sufficiently addressed.
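A back-of-the-envelope calculation using the passenger figures quoted earlier shows how quickly that 4% adds up:

```python
# Back-of-the-envelope cost of a 96%-accurate matcher at TSA scale,
# using the ~2.2 million daily passengers cited above.
daily_passengers = 2_200_000
false_reject_rate = 0.04  # 1 - 0.96: legitimate passengers not matched
flagged_per_day = daily_passengers * false_reject_rate
print(f"{flagged_per_day:,.0f} passengers/day sent to manual screening")
# -> 88,000 passengers/day
```

And if the error rate is not uniform across demographic groups, as the NIST evaluation suggests, those 88,000 daily inconveniences fall disproportionately on the same subpopulations every time.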

Taking Action

While facial recognition technology is already deployed at some American airports, there are opportunities to put the brakes on the program. The DHS has standards for gathering public opinion and assessing privacy risks, including the creation of Privacy Impact Assessments. The House Oversight and Reform Committee and the House Homeland Security Committee both held hearings this summer on government use of facial recognition. We must hold our representatives accountable for protecting us against unnecessary invasions of privacy by government agencies.

Why the World Economic Forum’s Global Council on AI should focus on protecting children
By Ivan Fan | July 8, 2019

The advent of AI is a trend that will affect our children and our children’s children. In a world characterized by constant technological change, we must invest more in preparing future generations through improved governance of AI interactions involving children, particularly in contexts such as education.

The newly created World Economic Forum (WEF) Global Council on AI presents an opportunity to develop a global governance approach to AI, which should include a strong treatment of governance issues around AI interactions with children. The forum is well positioned to do so; its Generation AI project has previously advanced important questions regarding uses of AI in relation to children.

The creation of the council comes in the wake of a recent trend of nations placing greater emphasis on cooperation with regard to overall AI governance. Multilateral efforts on the part of the EU and OECD, in particular, have sparked work toward a consensus around core AI issues among their respective memberships. Notably, the European Commission’s High Level Expert Group on AI recently released a set of ethical guidelines and recommendations for trustworthy AI, formally addressing the need for governance around AI interactions with children.

At a time when troubling terms such as “technological cold war” have cropped up, overcoming techno-nationalist tensions and fostering collaboration between great powers has never been more important. The great challenge we face today is ensuring that people everywhere, in both developed and emerging countries, have sufficient access to AI resources. The best way to achieve this is by doubling down on opening up educational opportunities to youth everywhere, and the WEF is well positioned to provide critical, impartial leadership on this front.

Current talent pools are insufficient for taking advantage of the future range of occupations enabled by AI, and without systemic reform addressing rising inequality, societies will regress to a state in which opportunities are increasingly restricted to those able to access AI resources. National efforts such as the American AI Initiative, China’s New Generation Artificial Intelligence Development Plan, and the European Strategy on Artificial Intelligence all emphasize talent shortages as a significant impediment to implementing AI effectively.

This is why policies directed toward expanding the available talent pool are critical, including redesigning education systems to prepare children with the skills needed to thrive in an AI-enabled world. Many countries agree that overhauling education systems to teach the necessary cognitive, socio-cultural, entrepreneurial, and innovation competencies is a primary means of addressing talent shortages. Expanding access to STEM opportunities for women is also of vital importance and must improve at all stages of the talent pipeline, from early childhood education all the way to the C-suite.

In his landmark book “AI Superpowers,” Kai-Fu Lee, co-chair of the WEF’s new council on AI alongside Microsoft President Brad Smith, writes about how perception AI is revolutionizing China’s education system. I serve as a research and teaching assistant to faculty at UC Berkeley’s School of Information, and I have seen first-hand how new technologies can revolutionize the delivery of education in my own graduate program. Instructors now have unprecedented access to rich profiles of students and to dashboards with a whole host of AI-enabled features, including high-fidelity, real-time notifications about performance at the individual and macro level.

In revamping our education systems to use AI and to teach AI, it is crucial that the safety and rights of children are strictly respected by those who would impact their learning and growth. AI HLEG provides some ideas for the Global AI Council to consider – it recommends protecting children from “…unsolicited monitoring, profiling and interest invested habitualisation and manipulation” and giving children a “clean slate” of any public or private storage of data related to them upon reaching a certain age. The WEF’s Global Council on AI represents an outstanding opportunity to consider and iterate upon such ideas in order to better protect and serve the needs of our children.