A Problem to A Dress: Algorithmic Transparency and Appeal
By Adam Johns | April 13, 2020

Once upon a time, a million years ago (Christmas 2019), people cared about things like buying fashionable gifts for their friends and family, rather than access to bleach. At the time, I was attempting to purchase a dress for my partner from an online store I’d shopped with in the past. Several days before the cutoff for Christmas delivery, I received an unceremonious computer-generated email telling me my order had been cancelled. No sweat, I thought, and repeated the purchase. Cancelled again. As the holiday deadline approached, I called the merchant, who informed me that an algorithm had flagged my order as a security risk, my purchase had been cancelled, there was nobody I could appeal to, and there was no way to determine what factors had contributed to this verdict. I hung up the phone, licked my wounds, and moved on to other merchants for my last-minute shopping.

Upon later reflection, chastened by a nearly missed holiday gift deadline, I mused about what could possibly have resulted in the rejection. Looking back over my past purchases, it became apparent that in a year or two of shopping with this particular retailer, I hadn’t actually bought any women’s clothes. Perhaps it was the sudden change from menswear to dresses that led the algorithm to flag me (not a very progressive criterion for an otherwise progressive-seeming retailer). Whatever the reason, this frivolous example got me thinking about some very serious aspects of algorithmic decision making. What made this particular example so grating? Firstly, the decision was not transparent—I wasn’t informed that an algorithm had flagged my purchase until after a number of calls to customer service. Secondly, I had no recourse to appeal—even after calling up, credit card info and personal identification in hand, nobody at the company was willing or able to overturn the decision. While such an algorithmic “hard no” was easy to shake off for a gift purchase, imagining the same approach applied to a credit decision, an insurance purchase, or a college application was disconcerting.

In 2020, algorithmic adjudication is becoming an increasingly frequent part of life. Machine learning may be broadly accurate in the aggregate, but individual decisions can always suffer from false positives and false negatives. When such a decision is applied to customer service or security, bad calls can alienate customers and lead previously loyal customers to take their business elsewhere. When algorithms affect more consequential social matters like a person’s access to health care, housing, or education, the consequences of a poor prediction take on higher stakes. Instead of just resulting in disappointed customers writing snarky blog posts, such decision making can amplify inequity, reinforce detrimental trends in society, and lead to self-reinforcing feedback loops of diminished individual and societal potential.

The growing importance of machine learning in commercial and government decision making isn’t likely to decline any time soon. But to apply algorithms for maximum benefit, organizations should ensure that algorithmic decision making embeds transparency and a right to appeal. Let somebody know when they’ve been flagged, and what factored into the decision. Give them the right to speak to a person and correct the record if the decision is wrong (Crawford and Schultz’s concept of algorithmic due process offers a solid base for any organization trying to apply algorithms fairly). As a bonus, letting the subjects of algorithmic decision making appeal offers a tantalizing opportunity to the data scientist: more training data to improve the algorithm. While it requires more investment, and a person on the other end of a phone, transparency and a right to appeal can be a rare win-win for algorithm designers and the people to whom those algorithms are applied, and ultimately lead us toward a more perfect future of algorithmic coexistence.

Reference:
Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C.L. Rev. 93 (2014), https://lawdigitalcommons.bc.edu/bclr/vol55/iss1/4

Transgender Lives and COVID-19
By Ollie Downs | April 10, 2020

Transgender Day of Visibility (TDOV) is March 31st every year; it is a day to celebrate the trans experience and “to bring attention to the accomplishments of trans people around the globe while fighting cissexism and transphobia by spreading knowledge of the trans community”. I spent this year’s TDOV voluntarily sheltering in place in my home in Berkeley, California, with two other non-binary housemates. During this shelter-in-place, I am reminded of the struggles faced uniquely by trans and non-binary folks in light of COVID-19.

Being Counted
Being counted is essential to dealing with issues like COVID-19, but there are challenges associated with counting trans people. Who is getting the virus, where and when, and how they are dealing with it are all crucial questions to answer. Unique groups like the non-binary community may very well be at higher risk for contracting COVID-19–and we need to know that. The ethical implications of collecting this data are tricky. Being visible as trans/non-binary is crucial for some people and dangerous for others. On one hand, being able to quantify how many people identify that way, what kinds of people they are, and where they live allows us to understand the demographics–and thus the potential challenges and experiences–of those people. Especially in public health and government settings, knowing where things are happening, and to whom, is crucial in designing solutions like enforcing quarantines and distributing resources. On the other hand, forcing people to identify themselves as one thing or the other is challenging for many, and divides the world into discrete parts when actual identities are fluid and on spectrums. The truth may be lost when a person is forced to choose between imprecise options.

Social isolation and Abuse
Shelter-in-place orders are effective tools for containing the spread of a disease. But they’re also very effective at confining people who may not get along. As many of us have experienced firsthand, being isolated with others can create tension and conflict–which can be deadly for people with identities or characteristics outside ‘the norm.’ Transgender people, especially youth, may be trapped with abusive parents, partners, or other people who may seek to harm them, especially where other identities intersect with their gender. Many transgender individuals find community in social spaces like community centers or bars, and without access to them, these communities (like many other marginalized communities) will suffer.

Other intersecting identities
The intersection of gender with other identities is complex and precarious. Other examples of discrimination against people with marginalized identities are everywhere. One example can be found here. In this post, Nadya Stevens reveals the danger faced by “poor people, Black people and Brown people” who are “essential workers” and must commute on crowded, reduced-service public transportation. Transgender and non-binary people, who face poverty and racism at alarmingly high levels, are directly affected by policy changes like those of the MTA. There is some light at the end of this particular tunnel. Actor Indya Moore began a campaign to take direct action to support transgender people of color (donate on Cashapp to $IndyaAMoore), and Moore’s campaign raised so much money in its first week that their account was frozen. This cannot be an isolated campaign: policy efforts must be made to continue this action.

Education at Home
Policy shifts toward moving education online during this time have been extremely difficult, especially for anyone who is in an unsafe home environment, lacks access to the Internet, is otherwise unable to consume the material, or simply learns better in a classroom setting. Transgender and non-binary people, again, experience poverty and violence at high rates, which may be worsened by these policy measures; they also often face medical discrimination, and may be affected by failures to make online learning accessible to deaf, blind, or otherwise ‘non-normative’ students.

Medical issues
It makes sense that hospitals and medical care providers are halting ‘non-essential’ services like surgeries to focus on the care of COVID-19 patients. But the classification of some surgeries as ‘non-essential’ can be devastating, especially for trans and non-binary patients. Gender-affirming procedures are often categorized this way, but for many patients, they are crucial for their health and safety in a transphobic world. Additionally, patients with AIDS–many of whom are transgender–are at a higher risk of death from COVID-19.

The Unknowns
What we don’t know could be the worst part of this epidemic. We don’t know if, or how, COVID-19 interacts with hormone treatments or HIV medication. We don’t know how it will impact the future of education or policy, or how social isolation and intersecting identities might change these outcomes.

What’s next?
Taking action is very difficult in a pandemic. This situation affects everyone differently, but it affects transgender people as a community especially hard. What can be done? Until we can return to normal life, there are several actionable ideas: donate to funds you know will go toward transgender lives (Cashapp: $IndyaAMoore and many others); check in with your friends, family, colleagues, coworkers, and acquaintances who you know are transgender and offer your support; educate yourself and others about the struggles of the trans community; and volunteer for organizations committed to transgender health. Finally, have hope. The transgender community has been more than resilient before. We will continue to be resilient now.

If you or anyone you know is trans/non-binary/gender non-conforming and facing suicidality, please call Trans Lifeline at 877-565-8860.

Photo credits:
https://www.state.gov/coronavirus/
https://commons.wikimedia.org/wiki/File:Nonbinary_Gender_Symbol.svg
https://en.m.wikipedia.org/wiki/File:A_TransGender-Symbol_black-and-white.svg

The Ethics of Not Sharing
By George Tao | April 10, 2020

In this course, we’ve thoroughly covered the potential dangers of data in many different forms. Most of our conclusions have led us to believe that sharing our data is dangerous, and while this is true, we must still remember that data is and will be an instrumental part of societal development. To switch things up, I’d like to present the data ethics behind not sharing your data and the steps that can be taken to improve trust between the consumer and the corporation.

The Facebook-Cambridge Analytica data scandal and the Ashley Madison data leaks are among the many news stories of data misuse that have been etched into our minds. We often remember the bad more vividly than the good, so as consumers we seek to hide our data whenever possible to protect ourselves. At the same time, we must remember the tremendous benefits that data can provide for us.

One company has created a sensor that pregnant women can wear to predict when they are going into labor. This device can provide great benefits in reducing maternal and infant mortality, but it is also very invasive in the type of data it collects. However, childbirth is an area where this invasive type of data collection could improve upon current research. Existing research regarding labor is severely outdated: the study that modern medicine bases its practices on was done in the 1950s on a population of 500 women who were exclusively white. By allowing this company to collect data on women’s pregnancy and labor patterns, we would be able to replace these outdated practices.


This may seem like an extremely naive perspective on sharing data, and it is. As a society, we have not progressed to the point where consumers can trust corporations with their data. One suggestion that this article provides is that data collectors should provide their consumers with a list of worst case scenarios that could happen with their data, similar to how a doctor lists side effects that can come with a medicine. This information not only provides consumers with necessary knowledge, but also helps corporations make decisions that will avoid these outcomes.

I believe one issue that hinders trust between consumer and corporation is the privacy policy. Privacy policies and terms of agreement are filled with technical jargon that makes them too lengthy and too confusing for consumers to read. This is a problem because privacy policies should be the bridge that builds trust between the consumer and the corporation. My proposed solution is to maintain two versions of the same privacy policy: one written for legal purposes and one written for understandability. By doing this, we give consumers knowledge of what the policy is saying without losing any of the legal protections the policy provides.

There are many different ways to approach the problem, but ultimately the goal is to create trust between the consumer and the corporation. When we have achieved this trust, we can use the data shared through it to improve upon current practices that may be outdated.

Works Cited
https://www.wired.com/story/ethics-hiding-your-data-from-machines/

Ethical CRISP-DM: The Short Version
By Collin Cunningham | April 11, 2020

If you could impart one lesson to a fledgling data scientist, what would it be? I asked myself this question last year when data science author Bill Franks called for contributors to his upcoming book, 97 Things About Ethics Every Data Scientist Should Know.

The data scientists I have managed and mentored most often struggle with transitioning from academic datasets to real world business problems. In machine learning classes, we are given clearly defined problems with manicured datasets. This could not be further from the reality of a data science job: requirements are vague, data is messy and often doesn’t exist, and causality hides behind spurious correlations.

This is why I teach junior data scientists the ​Cross Industry Standard Process for Data Mining (CRISP-DM). ​Even though it was developed for data mining long ago, it is perfectly applicable to modern data science. The steps of CRISP-DM are:

  • Business Understanding
  • Data Understanding
  • Data Preparation
  • Modeling
  • Evaluation
  • Deployment

These steps are not necessarily sequential as shown in the diagram; you often find yourself back at Business Understanding after an unsuccessful deployment. However, this framework gives much-needed structure, which smooths the awkward transition from academia to industry.

And yet,​ this would not be the singular lesson I would impart. That lesson would be ethics. Without instilling ethics in data science education, we are arming millions of young professionals with tools of immense power but no notion of responsibility. Thus, I sought to combine the simplicity and applicability of CRISP-DM with ethical guardrails in developing Ethical CRISP-DM. Each step in CRISP-DM is augmented with a question on which to reflect during that stage.

Business understanding – What are potential externalities of this solution? We ask data scientists to lean on those with domain experience when refining requirements into problem statements. Similarly, these subject matter experts are the people who have the most insight into those who may be affected by a model.

Data understanding – Does my data reflect unethical bias? Because we are imperfect creatures, it is naive to view anyone as devoid of bias. It follows that data generated by humans inevitably carries the shadow of those biases. We must reflect on what biases could exist in our data and perform specific analyses to identify them.
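
To make that concrete, here is a minimal sketch of one such analysis in Python, comparing positive-outcome rates across the groups defined by a protected attribute (a rough demographic-parity check). The column names and toy data are hypothetical, not drawn from any particular project.

```python
import pandas as pd

def outcome_rates_by_group(df, group_col, label_col):
    """Positive-outcome rate for each value of a protected attribute."""
    return df.groupby(group_col)[label_col].mean()

# Hypothetical historical loan decisions with a protected attribute "sex"
# and a binary label "approved" (1 = approved).
df = pd.DataFrame({
    "sex":      ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0, 1, 1, 1, 1, 0, 1, 0],
})

rates = outcome_rates_by_group(df, "sex", "approved")
print(rates)                              # approval rate per group
print("gap:", rates.max() - rates.min())  # a large gap is a flag for deeper analysis
```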

Data preparation – How do I cleanse data of bias? The data cleansing we are all familiar with has a parallel phase in which we seek to mitigate the biases identified in the previous step. Some of these biases are easier to address than others; filtering explicitly racist words out of a language model is easier than removing the relationship between sex and career choice. Furthermore, we must acknowledge that it is impossible to completely scrub bias from data, but attempting to do so is a worthwhile endeavor.
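
One common mitigation at this stage is reweighing: assigning each record a weight so that, in expectation, the protected attribute and the label look independent. Below is a small sketch of that idea in Python (in the spirit of Kamiran and Calders' reweighing); the column names are again hypothetical, and a real project would likely reach for a maintained fairness library instead.

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Per-row weights that make the protected attribute independent of the label
    in expectation: over-represented group/label combinations get weights below 1,
    under-represented combinations get weights above 1."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# The resulting weights can be passed to most scikit-learn estimators
# through their sample_weight argument at training time.
```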

Modeling – Is my model prone to outside influence? With the growing ubiquity of online learning, models often adapt to their environment without human oversight. To maintain the ethical standard we have cultivated so far, guardrails must be put in place to prevent nefarious evolutions of a model. When Microsoft released Tay onto Twitter, users were able to pervert her language model, resulting in a racist, anti-Semitic, sexist, Trump-supporting cyborg.

Evaluation and Deployment – How can I quantify an unethical consequence? The foundation of artificial intelligence is feedback. It is critical that we create metrics to monitor high-risk ethical consequences. For example, predictive policing applications should monitor the distribution of crimes across neighborhoods to avoid over-policing.
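
As an illustration, a deployment-time monitor for that predictive-policing example might track how model-driven patrol assignments are distributed across neighborhoods and raise an alert when that distribution drifts too far from an agreed baseline. The neighborhoods, numbers, and threshold below are invented purely for illustration.

```python
from collections import Counter

def share_by_area(assignments):
    """Fraction of model-driven patrol assignments going to each neighborhood."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return {area: n / total for area, n in counts.items()}

def drift_alert(current, baseline, threshold=0.10):
    """Flag neighborhoods whose share of assignments has drifted more than
    `threshold` away from the agreed baseline distribution."""
    return {
        area: (current.get(area, 0.0), base)
        for area, base in baseline.items()
        if abs(current.get(area, 0.0) - base) > threshold
    }

# Hypothetical weekly check against an equal-share baseline.
baseline = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}
current = share_by_area(["north"] * 45 + ["south"] * 25 + ["east"] * 20 + ["west"] * 10)
print(drift_alert(current, baseline))  # {'north': (0.45, 0.25), 'west': (0.1, 0.25)}
```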

Ultimately, we are responsible for the entire product we deliver, including its consequences. Ethical CRISP-DM holds us to a strict regimen of reflection throughout the development lifecycle, thereby helping assure that the models we deliver are built ethically.

The Robot of Wall Street
By Vinicio De Sola | April 12, 2020

Since the start of the pandemic, the market has become more like a rollercoaster than the “Little Engine That Could” it was over the previous 8 to 10 years. Some weeks seem to wipe out all the hard earnings of our 401(k)s, pension plans, and investments, while in others it looks like the worst has passed, investors are regaining confidence, and the market will rebound. For the high rollers, the people with high capital, this translates into long calls to their financial analysts, asking what to do, when to sell, when to buy, and how to weather the storm. But for the majority of Americans, these analysts have undergone a transformation: enter the robots.

Robo-advisors differ from human financial analysts in several ways. First, they automate investment management decisions by using computer algorithms. Without boring the reader with financial jargon, this means they use portfolio optimization techniques with constraints based on the risk tolerance of the user. In essence, the investor answers a set of predetermined questions so the service can place their tolerance on a given scale (this varies across robo-advisors). Second, because there is no bespoke research focused on each client, trading can be done in bulk by clustering similar users; fees are far smaller than those of human advisors – around 0.25% compared to roughly 2 to 3% – and minimum balances are much lower, with some allowing the user to open an account with $0. In contrast, human advisors often have minimum thresholds of around $250,000 before they will consider a client.
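
To make the jargon concrete, here is a minimal sketch of the kind of pipeline a robo-advisor might run: a questionnaire score is mapped to a risk-aversion parameter, which then drives a small mean-variance optimization. The questionnaire scoring, the mapping, and the fund statistics are all invented for illustration; real services are considerably more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical questionnaire: each answer scored 1 (very cautious) to 4 (aggressive).
answers = [2, 3, 1, 2, 3]
risk_score = sum(answers) / (4 * len(answers))   # normalized to 0..1
risk_aversion = 10 * (1 - risk_score) + 1        # higher = more conservative

# Toy expected returns and covariance matrix for three funds (made-up numbers).
mu = np.array([0.05, 0.07, 0.02])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.00],
                [0.00, 0.00, 0.01]])

def neg_utility(w):
    # Classic mean-variance utility: expected return minus a risk penalty.
    return -(w @ mu - 0.5 * risk_aversion * w @ cov @ w)

result = minimize(
    neg_utility,
    x0=np.ones(len(mu)) / len(mu),
    bounds=[(0, 1)] * len(mu),                                   # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],  # fully invested
)
print(np.round(result.x, 3))  # portfolio weights implied by the risk score
```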

So, did the robot kill the financial star? Not so fast. Robo-advisors are far from perfect. Recent research in the financial space has shed some light on their shortcomings. First, the process is far from standard: each company or brokerage firm has its own method of evaluating investor risk tolerance, but on average this means a questionnaire of around 35 questions – which cannot capture the whole reality of an investor. Some authors even found that some of the questions weren’t related to assessing risk tolerance at all, but to selling products from the same brokerage firm or fund. Also, despite their name, these advisors don’t use big data, artificial intelligence, or social media to paint a clearer picture of the user; instead they focus on broad clusters of risk – to keep fees small and trades free.

Another key and essential question is whether the robo-advisor is a fiduciary of the investor (does the advisor have the responsibility of acting in the best interest of the investor over its own?). A considerable chunk of robo-advisors are not fiduciaries, but instead only need to follow the suitability obligation (they are only bound to provide suitable recommendations to their clients). As for the small group of robo-advisors that consider themselves fiduciaries, the industry’s view is that they can’t truly be: in essence, they only offer advice based on their clients’ stated goals, not on their full financial situation, so any advice will be flawed from the start.

So, what should you, the reader, do? Many robo-advisor services saw a substantial inflow of new users during the pandemic from people who wanted to buy the dip (buying stocks after a significant drop in price). Still, long-term investors’ portfolios suffered significant losses, given the massive tail risk that many advised portfolios carry in times of crisis. Without this being financial advice, I would recommend you do the following research when selecting a robo-advisor:

1. Does the company have a fiduciary duty with its clients?
2. Does the company have any form of human interaction or services? Is it a call center? Or just email-based? How quickly can they respond to a crisis?
3. Please read their privacy policies: many advisors also have other products related to marketing, so all your financial information will be on display for them.
4. Finally, decide whether the robo-advisor’s portfolio matches your risk profile. If not, move to another one – you won’t pay much if you hop between advisors during set-up, but once you decide on one, always monitor your performance.

Sources
——-

– What Is a Fiduciary?, https://smartasset.com/financial-advisor/what-is-fiduciary-financial-advisor
– To Advise, or Not to Advise — How Robo-Advisors Evaluate the Risk Preferences of Private Investors, https://www.cfainstitute.org/research/cfa-digest/2019/01/dig-v49-n1-1
– Robo-Advisor vs. Personal Financial Advisor: How to Decide, https://www.nerdwallet.com/blog/investing/personal-financial-advisor-robo-advisor/
– Robo-Advisor Fee Comparison, https://www.valuepenguin.com/comparing-fees-robo-advisors

Collecting citizens data to slow down coronavirus spread without harming privacy
By Tomas Lobo | April 5, 2020

Over the past weeks, we have seen how the coronavirus has been able to paralyze 4 billion humans, locking them in their houses. The human losses and economic consequences are far from over. Some countries have been better than others at containing the spread, and there’s a lot that other countries can learn from them in order to contain their respective outbreaks more effectively. In all cases of success (e.g., China (arguably), Singapore, Taiwan, Hong Kong, and South Korea), the common denominator has been: a) test fast, b) increase surveillance.

Increased surveillance during a global health crisis can be very positive for virus containment. For example, if somebody who has the virus decides to go grocery shopping, the consequences can be catastrophic. More surveillance would allow the government to know exactly the whereabouts of people who could potentially infect other individuals, and to fine them if they disobey the rules on mobility. For a system like that to work and be endorsed by citizens, it would be necessary to a) have multiple sources of real-time data collection and b) assure people that their privacy is respected.

In order to understand who is more likely to have the virus, who already has it, and the whereabouts of both groups of people, the government would need: 1) to use the available camera systems with added facial recognition capabilities, 2) to connect test data to a national database, 3) to monitor people’s location via their smartphones, and 4) to monitor people’s temperature with IoT thermometers (like Kinsa’s). A surveillance system like this would not be cheap, but considering that the coronavirus is expected to cost the economy trillions of dollars – 4% of global GDP this year and who knows how much more in future years until a vaccine is invented – this cost looks small in comparison.

To implement a system like this, people would have to endorse and embrace it, as most of the world’s political systems are democratic by nature. For this to happen, governments should demonstrate to citizens that a system like the one described above would only be used to contain the spread of the virus and to save hundreds of thousands of lives. The flow of information should be carefully controlled so there are no leaks that could damage individuals’ lives. Also, this data should be kept strictly for governmental use.

In the coronavirus era, it’s imperative to innovate and embrace the use of data collection technologies to impose efficient, sustainable lockdowns that target the individuals most at risk of spreading the disease. For that to be implemented effectively, securing citizens’ privacy is a key component that will guarantee the program’s endorsement. The clock is ticking, and it’s time to learn from what some Asian governments have been able to put together relatively quickly.

Sources:
https://www.aljazeera.com/news/2020/03/china-ai-big-data-combat-coronavirus-outbreak-200301063901951.html

Online Security in the Age of Shelter-In-Place
By Percival Chen | March 31, 2020

With the implementation of shelter-in-place orders around the country, the economy has continued to tank. One industry bucking the trend and surging upward is video conferencing, and nowhere is this more apparent than with Zoom Video Communications, otherwise known simply as Zoom.

According to the New York Times, “[e]ven as the stock market has plummeted, shares of Zoom have more than doubled since the beginning of the year” [1]. Some people might say business is “zoom”-ing with the videoconferencing app, but with the rise of Zoom comes many other concerns, especially with regards to its privacy and security practices.

One recent practice that has gained particular notoriety is “Zoombombing,” in which attackers hijack a meeting and dump whatever content they want onto the viewers [2]. This often leads to entire meetings getting shut down, because once an attacker starts sharing, there is no way for the host to immediately kick them out while their screen is being shared.

Another issue that Zoom has had to deal with is the “recent and sudden surge in both the volume and sensitivity of data being passed through its network” [1]. The millions of Americans shifting to this new online reality, including displaced students across the nation, are now using services such as Zoom to carry on with their jobs and schooling as before. This sudden surge has led to increased concern about potential vulnerabilities in Zoom’s current security practices, a sentiment shared even by the New York attorney general’s office. Communication is vital to the operations of businesses, corporations, schools, and our daily lives, and now that all of that traffic is being funneled through a few providers, it is little wonder that this is becoming more of a concern, especially with the increase in sensitive information being passed virtually.

What kind of information is particularly at risk? Personally identifiable information (PII): think of this as data that can be used to identify a specific individual. This is basic information that Zoom collects, as do many other companies. It includes your name, home address, email, and phone number, but also your Facebook profile information, your credit/debit card (if it’s linked to your Zoom payment), information about your job like your title and employer, general information about your product and service preferences, and information about your device, network, and internet connectivity [3]. Identity theft stemming from an information leak or hack could therefore have serious consequences. I conducted some UserTesting research about a month ago, and some participants said this potential issue was serious enough for them to investigate further, and perhaps even to look for a different service for their needs, given that Zoom had access to a trifecta of personal, social, and financial information about a user.

Even as this blog post is released, I am sure that Zoom’s privacy policy will continue to evolve as events unfold; it has already been updated several times since February. And while I don’t foresee video communications being shut down, given the essential role they play in the corporate scene, I do expect many (many) new tweaks to the current system, and maybe after the pandemic is over, our world will be even more resilient in dealing with the shifting landscape of data privacy and security.

Sources:
1. New York Attorney General Looks Into Zoom’s Privacy Practices – The New York Times. https://www.nytimes.com/2020/03/30/technology/new-york-attorney-general-zoom-privacy.html. Accessed 31 Mar. 2020.
2. Lorenz, Taylor. “‘Zoombombing’: When Video Conferences Go Wrong.” The New York Times, 20 Mar. 2020. NYTimes.com, https://www.nytimes.com/2020/03/20/style/zoombombing-zoom-trolling.html.
3. ZOOM PRIVACY POLICY – Zoom. https://zoom.us/privacy

Data Ethics in the Face of a Global Pandemic
By Natalie Wang | March 27, 2020

Since the first recorded case of Covid-19 four months ago, there have been more than 500,000 reported cases worldwide. Every day thousands of new cases of this infectious disease are being reported. Early studies show it affects people of all ages (although it appears to be particularly fatal in the elderly population) and is most often transmitted through contact with infected people. Due to its highly infectious nature, many countries have implemented international travel restrictions. Additionally, in the United States, as of March 24, more than 167 million people in 17 states, 18 counties, and 10 cities have received shelter-in-place orders from their state officials. The government is also attempting to head off the spread of Covid-19 with data.


Source: https://coronavirus.jhu.edu/map.html

As a disease progresses, there are different ways data can be used for the public good. At the beginning of an outbreak, it is useful for public health officials to know some private information about infected individuals, for example, where exactly they have been and who they have been in contact with. This can help determine how the virus was transmitted and who to warn to be on the lookout for early symptoms. Other demographic and health data may also provide clues about which populations will be particularly susceptible to the virus. However, while this data is extremely useful to public health officials, especially early on when a disease is less understood, that does not mean it is useful to the public or should be made available to them. Personal privacy and the risks associated with health data still need to be taken into account when sharing this kind of data. For example, knowing the names of the individuals with the first cases of Covid-19 would not help you protect yourself any better against the virus.

Once a disease has spread around a community, knowing specific personal information may not be as useful, because an individual could have picked up the virus from many places. At this point, to prevent further spread of Covid-19, it may be more useful to know where collective groups of people are gathering and what general transportation patterns look like. Currently, Facebook, Google, and some other tech companies are discussing potentially sharing aggregated and anonymized user location data with the US government to analyze how effective social distancing measures are and how transportation patterns might be affecting the spread of Covid-19. Judging from the general sentiment on Twitter, people are extremely unhappy with this idea, the main argument being that providing the government with increased data now will lead to further erosion of privacy down the line.
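
It is worth spelling out what “aggregated and anonymized” could mean in practice. A minimal sketch, assuming the companies share only coarse counts: snap each location ping to a grid cell, count distinct devices per cell, and suppress cells with too few devices so that small groups cannot be singled out. The grid size and threshold here are arbitrary choices for illustration, not what any company actually does.

```python
def aggregate_locations(pings, cell_size=0.01, min_count=50):
    """pings: iterable of (device_id, lat, lon).
    Returns unique-device counts per coarse grid cell, dropping cells with
    fewer than `min_count` devices, a simple k-anonymity-style suppression rule."""
    cell_devices = {}
    for device_id, lat, lon in pings:
        cell = (round(lat / cell_size) * cell_size,
                round(lon / cell_size) * cell_size)
        cell_devices.setdefault(cell, set()).add(device_id)
    return {cell: len(devices)
            for cell, devices in cell_devices.items()
            if len(devices) >= min_count}

# Only these cell-level counts, never individual trajectories, would be shared.
```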


Source: https://www.wired.com/story/value-ethics-using-phone-data-monitor-covid-19/

To limit the privacy risk this additional digital surveillance poses to the population, tech companies should only collect and provide data that is needed and will lead to valuable insights on Covid-19. Obviously this is easier said than done: how do we know how much data is enough, or what will be useful? The best way to address this would be to consult with multiple experts from different fields and to keep users informed throughout the process. In my opinion, Covid-19 is too important an issue not to attempt to use all the resources we have. During public health emergencies, people cannot have the same level of personal privacy they have at other times. However, there should still be safeguards in place to protect a reasonable amount of privacy while also furthering public health.

As Al Gidari, the director of privacy at Stanford’s Law School, tweeted, “The balance between privacy and pandemic policy is a delicate one… Technology can save lives, but if the implementation unreasonably threatens privacy, more lives may be at risk.” As a society, we are in a situation we have never been in before; Covid-19 is a dangerous global pandemic that needs to be addressed with all the technology we have. However, because the potential for misuse of personal data is so high, there needs to be more transparency about what data is being shared and how it is being used. Additionally, people should be careful and stay informed about what is going on.

Sources:
https://www.cdc.gov/coronavirus/2019-ncov/prepare/transmission.html
https://www.nytimes.com/interactive/2020/us/coronavirus-stay-at-home-order.html
https://www.theverge.com/2020/3/12/21177129/personal-privacy-pandemic-ethics-public-health-coronavirus
https://www.wired.com/story/value-ethics-using-phone-data-monitor-covid-19/
https://www.washingtonpost.com/technology/2020/03/17/white-house-location-data-coronavirus/

Streamlining US Customs during a state of emergency
By Anonymous | March 27, 2020

I was one of those people traveling internationally when the US government started closing borders in response to the recent COVID-19 pandemic. Apple News was bombarding my phone with dramatic headlines about flight cancellations and horrendous lines at US Customs and Border Protection (CBP) crossings, interwoven with constant reminders to avoid crowded areas. My airline and every other business I deal with had sent emails about disruptions in service and overwhelmed customer service hotlines.

Privacy is important to me, so I tend not to install unnecessary apps on my phone. In the anxiety-filled 24 hours leading up to my return flight I ignored my inhibitions and loaded every app that might help me along the way: airlines, banking, cellular provider, video downloads. I turned on additional push notifications (a feature I rarely enable) from Apple News and email accounts. The additional information blasts from news outlets, service providers, and concerned relatives did little to calm my nerves.

As the plane taxied to the runway and everyone finished cleaning their surroundings with anti-bacterial wipes, a lovely stranger told me about the Mobile Passport app, which I could download to skip some lines at customs. Upon landing I downloaded the app, blindly accepted the Terms of Service, and proceeded (with a cringe) to enter some of my most personally identifiable information: name, date of birth, sex, citizenship, and a clear photo of my face. I answered the basic customs declaration questionnaire on my phone and sent it off to CBP before deboarding.

WHAT IS MOBILE PASSPORT CONTROL?
Mobile Passport Control (MPC) is the first process utilizing authorized apps to streamline a traveler’s arrival into the United States. It is currently available to U.S. citizens and Canadian visitors. Eligible travelers voluntarily submit their passport information and answers to inspection-related questions to CBP via a smartphone or tablet app prior to inspection.

I noticed that the app developer was not a government organization, and at the time I didn’t care. I wanted any advantage to make my connecting flight.

When I arrived at US customs I felt that much of my anxiety was unfounded. There were no significant lines and nobody in hazmat suits checking travelers’ temperatures. There was a dedicated line for Mobile Passport users, allowing me to avoid the clusters of touch-screen kiosks where most people filled out their declaration forms. After ten days quarantined without symptoms, I remain thankful that I did not have to touch those kiosks.

Upon a post-acceptance review of the Mobile Passport privacy policy I remain comfortable using this service on an as-needed basis. That is to say: I have deleted it from my phone, and will likely re-install it for future travel. Thanks to the clear and concise policy that includes GDPR specific requirements, a few important observations stand out:

1. The service provider uses de-identified information for targeted advertising from third party providers, yet offers a paid version that is ad-free.
2. Personally identifiable information (PII) is transmitted according to the respective requirements of CBP and TSA.
3. Transaction logs are pseudonymized. The customer’s name and passport number are not retained in association with form submission data (a minimal sketch of this kind of pseudonymization follows this list).
4. Personal account data can be deleted upon request; however, the pseudonymized access logs cannot be deleted because they have already been de-identified.
5. Device identifiers, which are commonly used for cross-app tracking, are not stored by the service provider.
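
For readers unfamiliar with the term, pseudonymization of logs like these is often implemented by replacing the direct identifier with a keyed hash: a traveler’s own submissions can still be linked to one another, but nothing maps back to a name or passport number without the key. The sketch below is my own illustration of the general technique, not Mobile Passport’s actual implementation.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-separately"  # placeholder; a real key would live in a vault

def pseudonymize(passport_number: str) -> str:
    """Replace a direct identifier with a keyed hash: stable enough to link one
    traveler's log entries together, but not reversible without the secret key."""
    return hmac.new(SECRET_KEY, passport_number.encode(), hashlib.sha256).hexdigest()

log_entry = {
    "traveler": pseudonymize("X1234567"),   # hypothetical passport number
    "submitted_at": "2020-03-20T14:02:00Z",
    "port_of_entry": "SFO",
}
```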

While I’m not a fan of targeted advertising, I am willing to endure it for the convenience afforded by Mobile Passport. I assume that means Google and Facebook will learn a bit more about me from advertising analytics, but the fact that the app is not collecting device identifiers suggests that they will not be using my information in an aggregated manner. The fact that the developer acknowledges its inability to remove the pseudonymized data provides a bit of comfort.

I’ve considered looking into the CBP and TSA policies to see what they have to say about my personal information, but going down the rabbit hole of the Department of Homeland Security’s (DHS) Data Management Hub didn’t seem worth the stress. It was refreshing to see that DHS clearly publishes privacy impact assessments for their various programs, so depending on how long it takes for this pandemic to subside I may still end up jumping into that hole.

AI Applications in the Military
By Stephen Holtz | March 27, 2020

In recent years countries across the world have started developing applications for artificial intelligence and machine learning for their militaries. Seven key countries lead in military applications of AI: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea, with each one developing and researching weapons systems with greater autonomy.

This development indicates there is a need for leaders to examine the legal and ethical implications of this technology. Potential applications range from optimization routines for logistics planning to autonomous weapons systems that can identify and attack targets with little or no intervention from humans.

The debate has started from a number of sources. For example, “[t]he Canadian Armed Forces is committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal forces.” Unfortunately for the discussion at hand, the Canadian Armed Forces does not define what ‘appropriate human involvement’ means.

China has proposed a ban on the use of AI in offensive weapons, but appears to want to keep the capability for defensive weapons.

Austria has openly called for a ban on weapons that don’t have meaningful human control over critical functions, including the selection and engagement of a target.

South Korea has deployed the Super aEgis II machine gun which can identify, track, and destroy moving targets at a range of 4 kilometers. This technology can theoretically operate without human intervention and has been in use with human control since 2010.

Russia has perhaps been the most aggressive in its thinking about AI applications in the military, having proposed concepts such as AI-guided missiles that can switch targets mid-flight, autonomous AI operation systems that give UAVs the ability to ‘swarm’, autonomous and semi-autonomous combat systems that can make their own judgements without human intervention, unmanned tanks and torpedo boats, robot soldiers, and ‘drone submarines.’

While the United States has multiple AI combat programs in development, including an autonomous warship, the US Department of Defense has put in place a directive requiring that a human operator be kept in the loop when autonomous weapons systems take human life. This directive implies that the same rules of engagement that apply to conventional warfare also apply to autonomous systems.

Similar thinking is applied by the United Kingdom government in opposing a ban on lethal autonomous weapons, stating that current international humanitarian law already provides sufficient regulation for this area. The UK armed forces also exert human oversight and control over all weapons they employ.

The challenge with the development of ethical and legal systems to manage the development of autonomous weapons systems is that game theory is at play and the debate is not simply about what is right and wrong, but about who can exert power and influence over others. Vladimir Putin is quoted as having said in 2017 that “Artificial Intelligence is the future, not only for Russia but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world.” With the severity of the problem so succinctly put by Russia’s President, players need to evaluate the game theory before deciding on their next move.

Clearly, in a world where all parties cooperate and can be trusted to abide by the agreed rules in deed and intent, the optimal solution is for each country to devise methods of reducing waste, strengthening their borders, and learning from each other’s solutions. In this world, the basic ethical principles of the Belmont Report are useful for directing the research and development of military applications of AI. Respect for persons would lead militaries to reduce waste through optimization and to build defensive weapons systems. Beneficence and justice would lead militaries to focus on the disaster response functions that they are all too often called upon to fulfill. Unfortunately, we do not always live in this world.

Should nations assess that by collaborating with other nations they expose themselves to exploitation and domination by bad actors, they will start to develop a combination of defensive, offensive, and counter-AI measures that would breach the principles shared in the Belmont report.

PORTLAND, Ore. (Apr. 7, 2016) Sea Hunter, an entirely new class of unmanned ocean-going vessel, gets underway on the Willamette River following a christening ceremony in Portland, Ore. Developed under the Defense Advanced Research Projects Agency’s (DARPA) Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program, in conjunction with the Office of Naval Research (ONR), the vessel and several innovative payloads are being fully tested with the goal of transitioning the technology to Navy operational use once fully proven. (U.S. Navy photo by John F. Williams/Released)

Perhaps the most disturbing possibilities in autonomous weapons systems are those that involve genocide committed by a faceless, nameless machine that has been disowned by all nations and private individuals. Consider the ‘little green men’ that have fought in the Donbass region of Ukraine since 2014. Consider also the genocides that have occurred in the Balkans, Rwanda, Sudan, and elsewhere in the last fifty years. Now combine the two stories, whereby groups are targeted and killed and there is no apparent human who can be tied to the killings. Scenarios like these should lead the world to broader regulatory systems whereby the humans who are capable of developing such systems are identified, registered, and subject to codes of ethics. Further, these scenarios call for a global response force to combat autonomous weapons systems should they be put to their worst uses. Finally, the global response force that identifies and responds to rogue or disowned autonomous weapons systems must develop the capability to conduct forensic investigations of those systems to determine the responsible party and to hold it to account.

Works Cited

https://mwi.usma.edu/augmented-intelligence-warrior-artificial-intelligence-machine-learning-roadmap-military/

https://business.financialpost.com/pmn/business-pmn/two-of-canadas-ai-gurus-warn-of-war-by-algorithm-as-they-win-tech-nobel-prize

https://ploughshares.ca/2019/05/more-clarity-on-canadas-views-on-military-applications-of-artificial-intelligence-needed/

https://www.researchgate.net/publication/335422076_Militarization_of_AI_from_a_Russian_Perspective

https://futureoflife.org/ai-policy-russia/?cn-reloaded=1

Russian AI-Enabled Combat: Coming to a City Near You?

https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF

https://smallwarsjournal.com/jrnl/art/emerging-capability-military-applications-artificial-intelligence-and-machine-learning

https://www.cfc.forces.gc.ca/259/290/405/192/elmasry.pdf

https://www.cfc.forces.gc.ca/259/290/308/192/macdonald.pdf

The Army Needs Full-Stack Data Scientists and Analytics Translators 

https://www.eda.europa.eu/webzine/issue14/cover-story/big-data-analytics-for-defence

https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race