Risk-based Security: Should we go for Travel Convenience or Data Privacy?
By Jun Jun Peh | October 24, 2019

When the first Transportation Security Administration (TSA) PreCheck application center opened at Indianapolis International Airport, it was a dream come true for frequent travelers in the States. Successful applicants to the program are no longer required to remove shoes, laptops, and tiny liquid bottles from their bags. In fact, it is so convenient that the average time spent passing through airport security has dropped significantly. Since then, the number of applications for TSA PreCheck has grown tremendously year over year. By 2017, more than 5 million travelers had enrolled in the program, and there were more than 390 application centers nationwide. The question is: what is the magic behind TSA PreCheck that attracts so many people to apply?

TSA PreCheck is, by definition, a risk-based security system that allows trusted travelers to bypass the often long airport security lines via a shortcut in both line length and screening process, such as keeping shoes on and laptops in bags. To join the program, you have to pay an enrollment fee and submit to a background check and interview. After that, you are treated like a VIP in the airport. The caveats: applicants must be US citizens or green card holders, and they are required to provide personal details and biometric information to the TSA as part of the background check process.

This idea raised alarms among privacy advocates, who are concerned about the security of the information provided and its potential uses. This supposedly private information will be held and monitored by the government for a prolonged period of time. According to Gregory Nojeim, a director at the Center for Democracy & Technology, the data could be kept in a database for 75 years, and the database is queried by the FBI and state and local law enforcement as needed to solve crimes in which fingerprints are lifted from crime scenes. The prints may also be used for background checks and information tracking as part of the government's data mining. Hence, TSA PreCheck applicants are essentially trading their privacy, and authorizing the government to obtain more information about them, for airport convenience for the rest of their lives.

Beyond the privacy concern, some have also challenged the program because it is available only to Americans, and in practice only to travelers who can afford the $85 application fee. The specter of discrimination arises when a risk-based security system deems only Americans safe enough to join while travelers from the rest of the world are not. In fact, all travelers should be given equivalent consideration and the same careful screening when it comes to customs and border protection, rather than being categorized by nationality.

Furthermore, the TSA said in a filing that the agency is hiring private companies to get more travelers into PreCheck. For instance, credit card companies are offering enrollment fee rebates to get more people to sign up. In doing so, applicants are providing part of their information to these private companies as well. What worries people most is that future implementations such as iris scans and chip implants in travelers' hands are being explored, for the purpose of easier boarding and customs clearance. If these proposals prove viable, they would cause further uproar in the privacy advocacy community.

While the TSA claims that this risk-based security system helps CBP officers focus on potential threats and thus strengthens the security screening process, the concern that private data might be violated and misused cannot be ignored. Travelers who join the program often choose to surrender their privacy in exchange for travel convenience without considering the implications of having their biometrics monitored. They should be given an option to opt out of the program and have all of their data removed permanently if they have good justification. For instance, if a green card holder decides to move out of the country, he or she should have the right to ensure the data is no longer kept by the Department of Homeland Security.

From the standpoint of frequent travelers, we may be enthusiastic about making it easier and faster to get through airport security. As we go down this path, we can only hope that government systems are in place to keep the biometric data we surrendered from falling into the wrong hands.

References:

1. Christopher Elliott, Oct 3rd 2013, TSA’s new Pre-Check program raises major privacy concerns. https://www.washingtonpost.com/lifestyle/travel/tsas-new-pre-check-programs-raises-major-privacy-concerns/2013/10/03/ad6ee1ea-2490-11e3-ad0d-b7c8d2a594b9_story.html
2. Joe Sharkey, March 9th 2015, PreCheck Expansion Plan Raises Privacy Concern, https://www.nytimes.com/2015/03/10/business/precheck-expansion-plan-raises-privacy-concerns.html
3. Image 1: https://www.tsa.gov/blog/2015/01/15/reflections-risk-based-security-2014
4. Image 2: https://www.dhs.gov/sites/default/files/publications/TSA%20FY18%20Budget.pdf

Can a sound ethics policy framework mend the fractured relationships between the US Military and Silicon Valley?
By Anonymous | October 28, 2019

The U.S. Department of Defense has recently begun searching for an ethicist with a deep understanding of artificial intelligence in response to the fallout from the much-maligned Project Maven, the showcase AI project between the Pentagon and Google. Project Maven was designed to apply AI to assist intelligence officers in their analysis of drone footage, with its involvement ostensibly limited to non-combat uses. However, the mere involvement of Google with the DoD sparked a protest among 3,100 Google employees who feared the technology could be used in lethal operations. They signed a petition urging Google CEO Sundar Pichai to end the project, spurring a philosophical debate over whether the tech community should contribute to military operations at all.


Google employees petition against Project Maven

The partnership between the military and the private sector dates back to the Revolutionary War, when Robert Morris used his personal funds and credit to provide supplies, food, and transportation for the Continental Army. Contractors' ability to pivot quickly to military needs by delivering surge support, their expertise in specialized fields, and their capacity to free up military personnel, all at a lower cost than maintaining a permanent in-house capability, have led to a long history of reliance on the private sector. But while these industrial giants were the backbone of national defense in years past, the advent of autonomous capabilities has prompted serious reservations in the most innovative sector of the American economy, with Elon Musk arguing at the South by Southwest tech conference in March 2018 that AI is more dangerous than nuclear warheads.

The push for further collaboration between the military and the private sector stems from a growing fear among U.S. military officials and respected technologists that the U.S. is at risk of losing an AI arms race to China and Russia. China has invested billions into military applications for AI while Russia, on the heels of President Vladimir Putin having announced “Whoever becomes the leader in this sphere will become the ruler of the world”, bolstered its annual AI budget with a $2 billion investment from the Russian Direct Investment Fund (RDIF) as of May 2019. Even with this perceived threat, AI scientists have grown increasingly concerned with DoD intentions and uses for artificial intelligence. The DoD has an existing policy on the role of autonomy in weapons, requiring a human to have veto power over any action an autonomous system might take in combat, but it lacks a comprehensive policy on how AI will be used across the vast range of military missions.

Enter the Joint Artificial Intelligence Center (JAIC), led by Lt. General Jack Shanahan. The JAIC is onboarding an AI ethics advisor to help shape the organization's approach to incorporating future AI capabilities. This advisor will be relied upon to provide input to the Defense Innovation Board, the National Security Council, and the Office of the Secretary of Defense to address the growing concerns with artificial intelligence, and to provide recommendations to the Secretary of Defense in order to bolster trust between the DoD and the general public.

But is an ethics policy what we, the general public, should be seeking? Since 1948, all members of the United Nations have been mandated to uphold the Universal Declaration of Human Rights, which broadly mirrors the ethical principles of the Belmont Report: protecting individual privacy, prohibiting discrimination, and providing other protections of civil liberties. This was followed in 1949 by the Geneva Convention, which crafted a legal framework for military activities and operations, requiring methods of warfare not causing "superfluous injury or unnecessary suffering" (Article 35) and requiring that care be taken to spare the civilian population (Article 57). Perhaps, instead of creating an AI ethics policy, the DoD should be more transparent about its uses of AI and develop a code of conduct focused on the utilization of AI and the processes by which that code will be monitored and adhered to. Whether through a fleshed-out ethics policy or a code of conduct, the reality is that there is a need for clarity on the types of risks posed by military AI applications, and the U.S. military is positioned to lead the way in establishing confidence-building measures that diminish possible dangers.

The DoD's implementation of sound ethics policies and codes of conduct, representative of the culture and values of the United States, can help bolster domestic popular support and the legitimacy of military action. Furthermore, adherence to ethical action may help develop private-sector partnerships that leverage technological innovation, attract talented AI engineers, and promote alliances with global allies.

Power Shutoff – the New Normal for California?
By Rong Yang | October 11, 2019

Classes were canceled on Thursday (10/10/2019) at the University of California, Berkeley, which was running on emergency power. Some staff members scrambled to protect critical research work while worrying about keeping laboratory animals safe. On a cold and clear Thursday morning, they were among tens of thousands of residents in Northern California who woke up to a blackout as part of PG&E's fire-precaution plan.

To protect public safety, PG&E (Pacific Gas and Electric Company) decided to turn off power because gusty winds and dry conditions had combined to heighten fire risk. The shutdowns came a year after the deadliest wildfire in California history, which killed 85 people and destroyed 19,000 buildings. In May 2019, PG&E was blamed for the downed power line it owned that caused that blaze. High winds and dry weather create ideal fire conditions, authorities warn, with the potential to transform a spark into a raging inferno, and PG&E fears windblown electrical lines could spark new fires if power is not cut.

Since the shutdown, a few small fires have broken out, but nothing on the order of last year's massive blazes. More than 2 million people, however, went dark. Losing electric power is hugely disruptive to people's lives and businesses, and in an economy as large as California's, home to Silicon Valley, blacking out a significant fraction of the state carries large economic impacts. Northern California fire officials say a man dependent on oxygen died about 12 minutes after Pacific Gas and Electric shut down power to his area as part of the massive effort to prevent fire.

PG&E is an American investor-owned utility with publicly traded stock. PG&E denies that its financial travails had anything to do with the decision to turn off the power, which followed its bankruptcy filing in early 2019 (PG&E said it expected wildfire-related liabilities topping $30 billion). "All those factors together mean that PG&E is extremely careful now, to try to take preemptive action," says Barton Thompson, Jr., a professor of natural resources law at Stanford Law School. "Obviously PG&E is particularly attuned to the need to take action. Part of that is wildfires, part of it is simply public scrutiny and prior public criticism."

The question I then asked myself: is it even legal for PG&E to shut off the power? The answer, after some quick research, is "yes." Investor-owned utilities are required to describe their processes for arriving at decisions like the one affecting California counties. The de-energization program is guidance developed by the State of California, referred to as a "Public Safety Power Shutoff": a preventative measure of last resort if the utility reasonably believes there is an imminent and significant risk that strong winds may topple power lines or cause major vegetation-related issues leading to increased risk of fire. Shutting off power is nothing new for PG&E; the last time it called a public safety power shutoff, for two days in June, it affected about 22,000 customers in the North Bay and the Sierra foothills, including Butte County and Paradise. What I am still not clear on is how this "last resort" rule applies to such a widespread power shutoff, and what data backs up the decision.

My next question: does the ownership structure of electricity distribution utilities matter? Distribution utilities are usually private or government-owned, and there is constant debate as to which is "better" in terms of financial performance, service delivery and quality, or development in general. Is it possible for the government to take over PG&E and solve the mess? One reason for a potential negative answer is that taxpayers would have to take on the financial liability that just bankrupted PG&E: its aging, unsafe electrical grid. Another idea discussed recently is breaking up PG&E into smaller companies. Regardless of the decision process, the disruption could serve as a focusing event for California to invest in local wind, solar, battery storage, and other technologies that would prepare the state for power outages if they are becoming the new normal.

School Surveillance in Upstate New York (not just China anymore)
By Matt Kane | October 11, 2019

In Lockport, a small conservative city of about 20,000 people in upstate New York, school surveillance has come to America. After installing or updating 417 cameras and connecting them to SN Technologies' Aegis system, the district's high school students will be monitored constantly by technology, at a cost of over $3 million for implementation plus a yet-undetermined additional cost to student privacy.

The system is billed as an early-warning system against school shootings. The creators of the technology claim that its value lies in its ability to detect guns as well as the faces of people loaded into the system at the time of implementation. Those faces are recommended to be local mugshots of criminals and sex offenders. However, there is nothing stopping Lockport schools from loading students' faces. And even with that restraint, there remains the problem of false positives and technological bias. The NYCLU, which has been probing the matter, pointed to an ACLU study that tested Amazon's facial recognition tool against a mugshot database and incorrectly identified 28 members of the US Congress as criminals. Notably, most of those incorrectly identified were people of color.

The website of the system's creator is extremely vague. Other than a video that shows it correctly identifying a shotgun in an armed robbery and telling Will Ferrell apart from Red Hot Chili Peppers drummer Chad Smith, the main value being sold is the ability to alert school officials to people such as sex offenders, suspended students, fired employees, or known gang members. However, this is completely dependent on what the school system chooses to use for training the system.

During the attempted implementation of the system, and in a bow to public pressure, the school district removed the images of suspended students just as school was about to start. Since then, it has not been able to go fully operational due to several concerns expressed by the state, including inadequate security controls at the school district for guarding against a breach of the system. Additionally, there are legal concerns related to state technology laws and the Family Educational Rights and Privacy Act (FERPA).

Local residents of color have been outspoken and clear that they are well aware of whom this system is meant to identify. They believe the minority population will be the target and will bear the brunt of the negative externalities and false positives of this new system. Even more incredible is that no one seems to be discussing the issue of consent. These are public schools with public employees and children, yet no one has asked for parental or employee consent, or been transparent about how the system will truly work. The answers given can be paraphrased as "let us start the pilot and we will figure those issues out during implementation." Fortunately, the pushback from parents and state regulators has held off the system so far, but some of the deeper and more difficult questions about privacy are not even being asked yet because the cursory, top-level questions aren't being answered.

Does cashless society discriminate against the poor and elderly?
By Adara Liao | October 13, 2019

The increasing prevalence of cashless payments, including mobile phone payments, digital payments, and credit cards, means that cash circulation is dropping worldwide. Some retailers do not accept cash, or incentivize customers to pay using smartphones. In major cities in China, over 90 percent of people use WeChat and Alipay as their primary payment method, ahead of cash. In the US, cash accounts for just 16% of total transactions and is expected to decline further as card growth accelerates.

The growing use of cashless payment puts financially disenfranchised populations at a disadvantage: they cannot participate in services that require cashless payments, and they pay higher costs when transacting in cash.

Two populations less likely to participate in cashless payments are the elderly and the poor. The elderly are less able to manage cashless payment methods, especially without transition support. If local authorities or utility companies do not support cash, the elderly cannot receive their government subsidies and face hurdles in paying for basic services.

In some countries, retailers may charge more for accepting cash or refuse cash payments altogether. In the US, many restaurants refuse to accept cash due to a combination of incentives from credit card networks like Visa and Mastercard and a desire to create a frictionless experience for high-value customers. Those who do not qualify for cards, or cannot afford mobile payment methods, are excluded from these retailers.

In addition to being locked out of retailers, low-income people who do participate in cashless payment options face stiff overdraft fees on their accounts and are ineligible for the credit cards that reward wealthier users for money spent. People who rely on cash end up subsidizing wealthier people who use credit cards.
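To see the mechanism behind that claim, here is a back-of-the-envelope sketch in Python. All of the rates and shares below are hypothetical, chosen only to illustrate how card-processing fees folded into posted prices leave cash payers covering costs that card payers partly recoup through rewards.

```python
# Hypothetical illustration of the cash-to-card cross-subsidy argument above.
# A merchant folds card fees into one posted price charged to everyone,
# but only card users get a slice back as rewards.

STICKER_COST = 100.00      # merchant's cost of goods before payment fees (hypothetical)
INTERCHANGE_RATE = 0.02    # hypothetical fee the merchant pays on card sales
REWARDS_RATE = 0.015       # hypothetical cash-back earned by the card user
CARD_SHARE = 0.80          # hypothetical share of sales paid by card

# Merchant raises the single posted price so that, across the payment mix,
# card fees are covered.
posted_price = STICKER_COST / (1 - INTERCHANGE_RATE * CARD_SHARE)

card_user_net = posted_price * (1 - REWARDS_RATE)  # pays the price, gets rewards back
cash_user_net = posted_price                       # pays the same price, gets nothing back

print(f"Posted price:       ${posted_price:.2f}")
print(f"Card user pays net: ${card_user_net:.2f}")
print(f"Cash user pays net: ${cash_user_net:.2f}")
```

With these made-up numbers, the cash customer pays roughly $1.50 more per $100 of goods than the card customer nets after rewards, which is the sense in which cash users subsidize card users.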

About a third of the US population is underbanked, meaning they go without regular patronage of traditional money-management services such as debit cards and checking accounts. Traditionally, possessing a bank account is required for participation in digital payment services. Although startups now offer low-fee accounts that can replace traditional bank accounts, they lack the brick-and-mortar presence of traditional banks and require digital literacy, which still excludes the lowest segments of the underbanked and poor. Thus, there remains a population that requires cash and needs to be accommodated by retailers and companies.

Governments can protect financial inclusion for people who are unable to participate digitally. In the US, the Cashless Retailers Prohibition Act of 2018 would make it illegal for restaurants and retailers to refuse cash or to charge a different price depending on the type of payment customers use. Education can also play a part in helping the elderly adopt digital payments. In Singapore, the government holds classes to help its senior population learn to use digital payments, such as paying in store with QR codes and topping up travel cards with debit cards [1].

It is not just tech-disadvantaged populations who are vulnerable to discriminatory actions in a cashless society. Fintech companies have access to a broader set of information about a customer's financial habits and social network. Such information gives fintech companies the power to make lending decisions, or to partner with credit-scoring agencies or lenders, in ways that can be discriminatory.

With more information collected by fintech companies, consumers' preferences can be shared with third parties or sold for profit. To the extent that fintech companies are susceptible to hacking or to undisciplined sharing of consumers' information with third parties, consumers' privacy is compromised. Thus, sensitive information about consumers should be regulated, and consumers must be clearly informed about how their information will be used and by whom.

As society moves toward cashless payments, groups without the financial means or the ability to adopt cashless payments risk being put at a disadvantage. Possessing a high-end smartphone, qualifying for cards, and having the technological know-how to operate cashless payment methods safely are barriers to entry for disenfranchised groups. To ensure that all groups in society can fully participate in the changing landscape, governments can introduce laws to protect cash acceptance and be mindful of the impact on disenfranchised groups when encouraging the fintech industry.

References
[1] https://www.raconteur.net/finance/financial-inclusion-cashless-society
[2] https://www.theguardian.com/business/2018/jul/15/cashless-ban-washington-act-discrimination
[3] https://www.brookings.edu/wp-content/uploads/2019/06/ES_20190614_Klein_ChinaPayments_2.pdf
[4] https://www.brookings.edu/opinions/americas-poor-subsidize-wealthier-consumers-in-a-vicious-income-inequality-cycle/

Chart Source
[1] https://www.statista.com/statistics/568523/preferred-payment-methods-usa/
[2] https://www.emarketer.com/content/four-mobile-payment-trends-to-watch-for-in-2019

Robodebt – The Australian Government Debt Recovery scheme is set to expand.
By Stephanie Mather | October 4, 2019

Imagine having your hard-earned wages ripped out of your tax return and held hostage while you try to prove the obvious flaws in an automated system. To add to this, your only form of recourse is a phone line notorious for its wait times. This is the reality faced by thousands of welfare recipients in Australia, trapped in a nightmare dubbed RoboDebt. Subject to two parliamentary inquiries and a Federal Court challenge, the ethics of data-matching to detect welfare fraud is in the spotlight as the Australian government continues to expand its reach into its citizens' data.

The Canberra Times, 3 January 2017

In the Beginning
In early 2017, Australian news outlets exploded over the government's new automated data-matching capabilities, with thousands of letters sent to welfare recipients demanding information, failing which a debt would be raised against them. On average, the letters claimed a debt of AU\$1919, an extraordinary amount for an unemployed person who may be receiving only AU\$535.60 a fortnight in welfare. Even more galling, a large number of these debts were due to a known issue in the data-matching algorithm: it does not correctly account for intermittent earnings across the year.

Most people who receive unemployment benefits have casual jobs while they look for full-time employment. Benefit entitlements are adjusted based on fortnightly earnings, but the Australian Tax Office (ATO) income data provided to Centrelink was simply averaged across the financial year. As a result, if a person had earned a high income (or was employed full-time) for part of the year, the RoboDebt system automatically assumed that income was spread evenly across every fortnight. This data mismatch is the most common cause of debts being raised incorrectly. If the recipient did not respond within the 28-day window, or failed to provide the correct timing of wage earnings, the debt was passed to a debt collection agency (with an additional 10% recovery fee), and in some circumstances it was then taken directly from their next tax return by the ATO.
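To make the averaging problem concrete, here is a minimal sketch in Python. The benefit rule used (a flat fortnightly payment reduced dollar for dollar by earnings above a small free area) is a simplification invented for illustration, not the actual Centrelink taper, but the effect of smearing annual income across 26 fortnights is the same.

```python
# A minimal sketch of the income-averaging problem described above.
# The taper rule is hypothetical; the real Centrelink rules differ,
# but averaging annual income over every fortnight has the same effect.

FORTNIGHTS = 26
MAX_PAYMENT = 535.60   # fortnightly benefit figure quoted in the post
FREE_AREA = 100.00     # hypothetical earnings allowed before the payment reduces

def entitlement(earnings):
    """Benefit payable for one fortnight, given that fortnight's earnings."""
    return max(0.0, MAX_PAYMENT - max(0.0, earnings - FREE_AREA))

# A person who worked full-time for half the year, then was unemployed:
actual_fortnightly_earnings = [2000.0] * 13 + [0.0] * 13

# Correct assessment: entitlement computed against actual fortnightly earnings.
correct_entitlement = sum(entitlement(e) for e in actual_fortnightly_earnings)

# RoboDebt-style assessment: annual ATO income averaged over every fortnight.
average_earnings = sum(actual_fortnightly_earnings) / FORTNIGHTS
averaged_entitlement = entitlement(average_earnings) * FORTNIGHTS

print(f"Paid (and correctly owed):    ${correct_entitlement:,.2f}")
print(f"Entitlement under averaging:  ${averaged_entitlement:,.2f}")
print(f"Phantom 'debt' raised:        ${correct_entitlement - averaged_entitlement:,.2f}")
```

In this toy run the recipient was correctly paid AU\$6,962.80 during the fortnights they were unemployed, yet the averaged assessment says their entitlement was zero, so the entire amount surfaces as a phantom debt.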

Burden of Proof
Data matching between the ATO and Centrelink is not new: sharing agreements have existed since the 1990s. However, the onus used to be on the department to prove debts before they were raised; the department has coercive powers to obtain payslips from employers showing the dates of earnings, and only when a person was proven to have received benefits and wages concurrently was a debt pursued. RoboDebt has pushed the burden of proof down to the individual, who likely has a limited understanding of the potential impact of the averaged data, going back up to seven years. For vulnerable people on welfare, collating this data within 28 days can be formidable, and they are unlikely to be aware of the appeal avenues available to obtain their full records, up to and including having the department collect payslips from employers on their behalf.

There have been some improvements in the system: just this month Centrelink released the new 'Check and Update your Income' process, which provides much more detail and transparency to users as they update their data, including real-time updates on the debt incurred. However, as raised by the Ombudsman, it still fails to adequately explain that "DHS may use Australian Taxation Office (ATO) data to calculate any debt, which may mean they have to pay back more than they need to" (p. 31, Ombudsman). DHS is still taking unfair advantage of vulnerable people who may not understand the implications of averaging. A better approach would be to match real-time wage data with the ATO, stopping overpayments before they occur and limiting the recovery of small debts.

Because of the many outdated postal addresses in the database, users could be oblivious to a debt until they were contacted by a debt collection agency or the money was taken from their tax return. The original program development focused on correctly matching the two datasets, without regard to the necessary communication piece; if the recipient did not respond within 28 days of the initial contact, the debt was confirmed and passed to a collection agency. This highlights the importance of thinking through all the data required for a just implementation of an automated debt recovery system, as well as the ability of subjects to understand the appeals process. DHS did not demonstrate adequate effort to make contact; only after the first Senate inquiry were debt notices tracked via registered mail. By not tracking the letters, the department took advantage of the 28-day window to deem the debts valid and apply collection fees. If the original system design had considered how to create just and fair access to the appeals process, this would not have been overlooked.

Cost Benefit Analysis to the Australian People
Robodebt was touted by the government as a way to catch the 'dole bludgers', those making false representations to access additional public funds, with the resulting savings benefiting all Australians. But the large false-positive error rate has meant that the mental harm and stress caused by the debts have been felt by many, and were a primary driver of the Senate inquiries. The 2017 inquiry raised issues with the fairness, transparency, and usability of the RoboDebt system. Although some of the issues around customer communication have now been addressed, a second inquiry into Centrelink's compliance program was opened in 2019, and a separate Federal Court challenge is pending.

With all these challenges against the system, it would be reasonable to assume that it has at least been profitable for the Budget, given all the debts recovered. Wrong! In fact, the system has cost almost as much to administer as it has recovered to date. The balance of costs, in both monetary and societal terms, is in the terms of reference for the 2019 Senate inquiry.


Robodebt Centrelink compliance program cost AU\$600m to recoup AU\$785m so far

Who’s Next?
The Department of Human Services has just released a Matching of Centrelink and Medicare Data Protocol. Medicare is the universal healthcare system in Australia, and the matching appears to cover all non-medication services. According to the protocol, "the data-matching program is designed to detect false, manipulated and assumed identities used by customers in submitting multiple claims. It is also designed to detect false information provided by customers relating to employment, medical eligibility and relationship circumstances" (DHS, July 2019). The public should demand transparency from DHS on how their medical information will be used, so they can judge whether the expansion of the RoboDebt system is beneficial to the Australian people. The current sustained criticism suggests otherwise.

Innocent Until Proven Guilty? Not Here.
By Jeffrey Braun | October 4, 2019

The Chicago Police recently launched a Gun Offender Dashboard website that displays summary statistics on the incarceration status of people charged with gun offenses. It shows the number of people arrested and released on bail as well as the number of people arrested but not released on bail. The latter group remains in jail, awaiting trial.

As their motivation for creating the site, CPD leadership states they are concerned about the number of accused individuals who are released on bail shortly after being charged. They allege that offenders are released too easily and too frequently, only to commit crimes again once they are freed. By publishing the data, their hope is to increase public awareness of this problem and thereby encourage the county courts to be less willing to release alleged offenders.

But the dashboard does not stop at showing aggregate statistics. On a separate tab, it also reveals the names of all the individuals charged with gun offenses. Clicking on a name in the list allows the user to drill down into the details of the arrest and to see each individual's prior criminal convictions, if any.

The site presents a serious ethical issue. By disclosing the names of individuals charged with gun offenses it can effectively deprive them of a fundamental right — their right to be presumed innocent until proven guilty, a right accorded to people charged with crimes in the US and in many other countries.

How can it deprive accused individuals of the right of presumed innocence? The people listed on the site have only been charged with a gun violation. They have not been convicted of a gun violation. In the minds of many people who will view the site, though, the distinction between being charged and being convicted may not be particularly meaningful. Many viewers will look differently upon those listed on the site simply because they appear there. This can in turn result in harmful consequences for alleged offenders, including loss of job opportunities, housing, and other benefits they would otherwise enjoy if their names did not appear on the site.

In its own defense, CPD states that the data is publicly available anyway. Anyone can go on to CPD’s site and view data on arrests the police have made, including gun offenses. This is true. However, the dashboard creates a whole new paradigm for accessing and consuming the data. It assembles and publishes it in such a way that it is easy for casual site visitors to see a long list of alleged offenders along with some very personal history that the accused individuals would likely prefer not be shared in such a public manner. This is a far cry from the research-like paradigm of looking up data on individual criminal cases or alleged offenders. It is a mass publication of data that can cause significant harm to people who have only been charged and not convicted of a crime.


(Names in above screen shot obscured by the author)

We can compare the dashboard’s data access paradigm to the social context of the traditional criminal trial for some additional insight here. In criminal court cases, jurors are admonished that defendants must be presumed innocent until the state has proven them to be guilty. People viewing CPD’s gun offender dashboard, on the other hand, will likely not take to heart any warnings about presumption of innocence while they peruse the site. It is just too tempting when presented with a long list of alleged offenders to assume that some, if not most, are guilty of the crimes. This is only reinforced by the site’s characterization of the accused individuals as “offenders”, a term that implies the accused individuals have already been found guilty.

What feels even more offensive and potentially damaging is the fact that site users can easily view past convictions of those people shown on the list. Here again, a social context comparison is informative. US rules of evidence generally prevent introduction of past criminal convictions as evidence to support the guilt of a person on trial for a new crime (https://www.nolo.com/legal-encyclopedia/evidence-prior-convictions-admissible-against-defendants-who-testify.html). In fact, when an individual on CPD's Gun Offender dashboard list goes on trial, the jury will likely not be told whether the defendant had committed a crime previously. No such protection is afforded when an individual's information is viewed on CPD's site, however. His or her past criminal conviction record is right there, for all to see and use to draw conclusions about the accused individual.

We all yearn for effective ways to decrease gun violence. Seemingly on a daily basis, we watch with horror, sadness and despair as the media report on lives senselessly lost to gun violence. We cannot, though, walk away from our long-held beliefs in the rights of the accused as we search for answers to gun violence. The power and impact of aggregated and easily consumed public data can sometimes deprive the accused of their rights, as has happened here with the CPD gun offender dashboard.

The Chicago Police Department should publish only aggregate data on its gun offender site. That would accomplish its stated goal while avoiding the deprivation of individual rights. At the very least, CPD should replace the term "Offender" on the site with a term that clearly indicates the individuals listed have only been charged, not convicted. The ability to easily click through to data on past convictions must also be disabled.

Our “Public” Private Lives
By Anonymous | October 4, 2019

It used to be that what you did at your previous employer’s, in college, at that New Year’s party a decade ago was all a thing of the past, privy only to your own inner memory (and maybe a photo album or two). But now, in this day and age of Twitter, Snapchat, Instagram, and others, our “public” private lives are not so private anymore.

One danger is that there is a lot out there that we are not even aware of ourselves. Once in a while I will get an email from a service that has changed its Terms of Service, and only then realize that I had an account with them. With some apps and sites, this is not a big deal. With others, like health vaults for instance, not remembering where, when, or why I signed up is more concerning. Questions pop up: what information have I shared with them, what information could I be losing, what information could be leaked?

Some of these accounts I probably have from testing a service out for a day and then abandoning ship; perhaps they are legacy apps from decades ago whose accounts I forgot to close; or worse, maybe I accidentally spun them up without intending to, by clicking one too many buttons in an automated onboarding flow.

On the one hand, having these accounts out in public makes me vulnerable to fraud, like account takeover (good thing my name is not super generic, like John Smith!). If my digital presence were audited, for instance if a future employer did a digital background check, the result could be unintended negative consequences. And what if my "friends" on these networks I never realized existed were involved in infamous pursuits? On the web, not only is your own persona being judged, but so is everyone in your extended circle of connections.

Regardless, there are two recommendations that I think are vital to at least mitigating unwanted consequences.

One – Make it very apparent to users signing up for a service that they are signing up for its terms and policies as well (this might involve a bit more friction than many one-click onboarding flows have at the moment).

Two – Make the fine print less fine and easier to digest for a limited-attention-span, TL;DR, and possibly less educated audience (i.e., mirror the language of web copy and marketing materials, which often cater to the reading level of a fifth grader for optimal comprehensibility).

The Belmont Report talks about how we should provide users with informed consent, which means full information, presented at an easy-to-comprehend level, and given voluntarily. To make consent actually "voluntary," we should reduce the amount of automation in opting in. That means building in deliberate consent flows, which companies will likely balk at due to the increase in user friction this may cause.

The other issue we need to address is that any changes to terms of service and policies should follow these same rules, friction-filled as they may be. These updates cannot just be emailed randomly, lost in spam folders, or swiped away in-app; they should require manual opt-in from users. This would not only protect users, but also incentivize the companies making these policy changes to follow the same strict protocol they used in drafting the original policies.

In this ever more connected, open world, perhaps as we share more and more we will care less and less. But for those who do care, let's maintain transparency through comprehensible policies, delivered in a very apparent way, so users can truly keep tabs on their private information.

Lies Against the Machine
By Elle Proust | October 4, 2019

Lie detection machines are a mainstay of crime fiction as effective tools for law enforcement. Outside of fiction they are less useful, both because of legal restrictions and because they do not work especially well. As with many other areas of life, artificial intelligence ("AI") has been put forward as a potential solution. Three recent examples: European Dynamics provides data-science-based tools to border agents in Europe; research scientists at Arizona State University have built an AI-based immigration agent called 'AVATAR', intended for use at the US/Mexico border; and Conversus has developed an AI-based lie detector, 'EyeDetect', to screen potential employees. Beyond having names that sound like the villain company in a robot sci-fi, the technology is concerning both because of the lack of transparency about its scientific efficacy and because apparently little thought has gone into ensuring it is applied fairly.

Background

It is human fallibility at discerning lies that has driven scientific methods for doing so. A literature review of over 200 studies in 2003 found that humans could accurately detect whether a statement was true or false only around 54% of the time, barely better than chance. Policing, immigration, and other law enforcement would be greatly improved by accurate lie detection. Attempts to measure lying by machine have been around since the early twentieth century and have had varying levels of success through polygraphs, voice detection, and even brain scans, though most have been rejected by courts for unreliability. Data science in the field is relatively new, but spreading widely.

How they work

EyeDetect, like the tool in the movie Blade Runner, monitors eye movements to detect lying. AVATAR also uses eye movement but adds voice analytics and facial scanning. Finding a data set to train these AI systems on presents an issue, because in most cases we do not have fully labeled data: if someone gets away with a lie, it is by definition mislabeled, which creates a training problem. Lie detection studies have gotten around this by secretly filming undergraduate students committing dishonest acts and then asking them about it later; the AVATAR deep learning system was in fact trained on recordings of the faces of college students gathered in this manner. AVATAR does not disclose how it guarantees that lying by students is comparable to people lying at the border. AVATAR claims 75% accuracy, but whether this was determined purely on the college students or at the border is unclear; if it is at the border, how it accurately accounts for false negatives is another question entirely. EyeDetect has at least acknowledged some peer-reviewed assessments showing accuracy tending toward 50%, but how it can ensure this does not happen to customers and potential employees does not appear to be publicized.

Applicability and fairness

AVATAR, at least, is a black-box model whose owners readily acknowledge that they have no idea how the technology makes decisions; this is worrying because we then do not know whether the algorithm is making unbiased decisions. Conversus has argued that its lie detector is an improvement on the polygraph because no human can manipulate it. It is certainly true that both employment screening and immigration can be, and likely are, subject to human biases. However, algorithms can be biased too, and arguing otherwise is specious. As an example, Google did not intend to build a racist image identifier, yet no human would ever label a person of African descent as non-human, which the Google algorithm did; removing the human made the situation worse.

In addition, Professor Eubanks, author of Automating Inequality, has argued that algorithms can remove individual bias but "are less good at identifying and addressing bias that is structural and systemic". Most of the data these algorithms are trained on comes disproportionately from white and affluent Americans; there is no guarantee that the systems will treat other populations fairly. In other settings, such as welfare programs, oversampling of one group of people has led to other groups being flagged as outliers simply for being different. Employment and immigration are already difficult for marginalized groups, and we should tread very carefully with anything that could amplify existing issues.

Looking Forward

The United States has at least banned the use of lie detectors in employment testing and as admissible evidence. There is no such prohibition against use by border enforcement, which explains the attempted rise of AVATAR. European Dynamics' system is being trialled by the Hungarian government, and EyeDetect's maker has confirmed it is operating in a Middle Eastern country but will not name which one; the human rights record in either case should not instill much confidence.

It seems likely these systems will be used more widely; appropriate care must be taken.

Images Taken from: Blade Runner (1982), and https://www.eyetechds.com/avatar-lie-detector.html

The Sensorvault is Watching
By Anonymous | October 4, 2019

Imagine a world where the police could track down criminals soon after they commit a crime by tracking the location of their cell phones. This is a world where law enforcement and tech companies work together to bring wrongdoers to justice. Now imagine a world where innocent people are prosecuted for these crimes because their cell phones were present in the vicinity of the crimes, people who were unaware that their phones were constantly tracking their whereabouts and storing that data indefinitely. Believe it or not, we live in both of these worlds.

People who have an Android phone, or any mobile device with Google Maps installed, are most likely sharing their location with Google at all times. This data collection is not easy to opt out of, and many people do not know that they are sharing their location at all [1]. The data is stored in a Google database that employees call "Sensorvault", and it is collected even when people are not using location apps on their phones. Surprisingly, the data is still stored when users turn off their location history and can only be deleted with extensive effort by the user [2]. The Sensorvault "includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade" [3].

This treasure trove of data has proven useful to law enforcement throughout the United States. Police and federal agents were struggling to find the culprits of a bank robbery in Milwaukee that occurred in October 2018, so they served Google with a search warrant for information about the locations of the robbers' phones, also referred to as a reverse location search. This is not the only time this kind of search warrant has been served; another occasion was to identify members of a rally that turned into a riot in Manhattan [4].

But how are only the devices of criminals being tracked? They are not, in fact, the only devices being tracked and reported, and this has led to concern from civil liberties groups [3]. When the police or FBI serve a reverse location search warrant, they receive a list of phone owners who were in the vicinity of the crime. In the case of the bank robbery mentioned above, law enforcement requested all devices that were within 100 feet of the bank during a half-hour block of time surrounding the robbery [4]. It is easy to see how this could result in innocent people being linked to crimes they did not commit. That is exactly what happened to Jorge Molina, who was detained by police for a week because his phone was in the vicinity of a murder and his car matched the make and model of the vehicle involved. When new evidence made clear that Mr. Molina was not involved, he was released [5]. This was time spent building a case against an innocent individual that could have been spent further investigating other suspects.
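For illustration, the sketch below shows the kind of query a reverse location search amounts to: filtering a provider's stored location pings down to every device seen inside a small geofence during a short time window. The data structures and field names are invented for this example and are not Google's; the point is simply that every device whose phone reported a position inside the fence, criminal or bystander, ends up on the list returned to investigators.

```python
# A minimal, hypothetical sketch of a reverse-location-search style query:
# return every device that reported a position within a given radius of a
# point of interest during a given time window. Names and fields are invented.

from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

FEET_PER_METER = 3.28084
EARTH_RADIUS_M = 6_371_000

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a)) * FEET_PER_METER

def reverse_location_search(pings, poi_lat, poi_lon, start, end, radius_ft=100):
    """Return the IDs of every device seen inside the geofence during the window."""
    return {
        p.device_id
        for p in pings
        if start <= p.timestamp <= end
        and distance_feet(p.lat, p.lon, poi_lat, poi_lon) <= radius_ft
    }
```

Nothing in such a query distinguishes a robber from a bank teller, a customer, or someone waiting at a bus stop across the street, which is how people like Mr. Molina end up under suspicion.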

The facts explored here are cause for concern: the locations of most mobile devices are being tracked without the owners' knowledge, and sometimes even when they take measures to stop the recording, like turning off their location history. This data is being used to tie people to crimes they may not have committed, simply because they were in the wrong place at the wrong time. Steps must be taken to address both the lack of public awareness and the legal scope and permissible uses of this data.

Bibliography
[1] Nakashima, Ryan. (2018, August 13) AP Exclusive: Google tracks your movements, like it or not. https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb
[2] The Associated Press. (2018, August 13) How to find and delete where Google knows you’ve been. https://www.apnews.com/b031ee35d4534f548e43b7575f4ab494
[3] Lynch, Jennifer. (2019, April 18) Google’s Sensorvault Can Tell Police Where You’ve Been. https://www.eff.org/deeplinks/2019/04/googles-sensorvault-can-tell-police-where-youve-been
[4] Brandom, Russell. (2019, August 28) Feds ordered Google location dragnet to solve Wisconsin bank robbery. https://www.theverge.com/2019/8/28/20836855/reverse-location-search-warrant-dragnet-bank-robbery-fbi
[5] Valentine-Devries, Jennifer. (2019, April 13) Tracking Phones, Google Is a Dragnet for the Police. https://www.nytimes.com/interactive/2019/04/13/us/google-location-tracking-police.html