Computer Vision and Regulation

By Hong (Sophie) Yang | July 3, 2020

What is computer vision?

Computer vision is a field of study focused on training computers to see.
“At an abstract level, the goal of computer vision problems is to use the observed image data to infer something about the world.”
(Page 83, Computer Vision: Models, Learning, and Inference, 2012).

The goal of computer vision is to understand the content of digital images. Typically, this involves developing methods that attempt to reproduce the capabilities of human vision. Object detection is one form of computer vision. Understanding the content of a digital image may involve extracting a description from it, which could be an object, a text description, a three-dimensional model, and so on. During inference, an object detection model draws bounding boxes around objects using the weights learned from labeled training images. Each bounding box gives the exact xmin, ymin, xmax, and ymax position of the object, along with a confidence value.
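To make the bounding-box format concrete, here is a minimal Python sketch. The label, coordinates, and confidence value are invented for illustration; the snippet also computes intersection-over-union (IoU), a standard way to measure how well a predicted box overlaps a labeled one.

```python
# Minimal sketch of an object-detection output in the (xmin, ymin,
# xmax, ymax) box format described above.  Values are illustrative.

def box_area(box):
    """Area of an (xmin, ymin, xmax, ymax) box, clamped at zero."""
    xmin, ymin, xmax, ymax = box
    return max(0, xmax - xmin) * max(0, ymax - ymin)

def iou(box_a, box_b):
    """Intersection-over-union: a standard overlap score for boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = box_area((ix_min, iy_min, ix_max, iy_max))
    union = box_area(box_a) + box_area(box_b) - inter
    return inter / union if union else 0.0

# A hypothetical prediction and ground-truth box for one object:
prediction = {"label": "dog", "confidence": 0.91, "box": (50, 30, 200, 180)}
ground_truth = (60, 40, 210, 190)
print(round(iou(prediction["box"], ground_truth), 3))  # prints 0.772
```

An IoU near 1.0 means the predicted box tightly matches the labeled one; detection benchmarks commonly count a prediction as correct when IoU exceeds a threshold such as 0.5.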

Use cases of Computer Vision

Below is a list of well-researched areas where computer vision has been applied successfully.

  • Optical character recognition (OCR)
  • Machine inspection
  • Retail (e.g. automated checkouts)
  • 3D model building (photogrammetry)
  • Medical imaging
  • Automotive safety
  • Match move (e.g. merging CGI with live actors in movies)
  • Motion capture (mocap)
  • Surveillance
  • Landmark detection
  • Fingerprint recognition and biometrics

It is a broad area of study with many specialized tasks and techniques, as well as specializations to target application domains.

From YOLO to the Ethics of Object Detection

YOLO (You Only Look Once) is a real-time object detection model created by Joseph Redmon and first published in 2016. Its most recent successor, YOLOv5, was released in June 2020 and is a state-of-the-art computer vision model. YOLO solved a low-level computer vision problem, so many tools can be built on top of it, from autonomous driving to real-time cancer cell detection.

In February 2020, news shocked the machine learning community: Joseph Redmon announced that he had ceased his computer vision research to avoid enabling potential misuse of the technology, citing in particular “military applications and privacy concerns.”

The news sparked discussion of the “broader impact of AI work including possible societal consequences – both positive and negative” and of whether “someone should decide not to submit their paper due to Broader Impacts reasons.” That is where Redmon stepped in to offer his own experience. Despite enjoying his work, Redmon tweeted, he had stopped his computer vision research because the related ethical issues “became impossible to ignore.”
Redmon said he felt a certain degree of humiliation for ever believing “science was apolitical and research objectively moral and good no matter what the subject is.” He said he had come to realize that facial recognition technologies have more downside than upside, and that they would not have been developed if enough researchers had thought about the enormous downside risks of their broader impact.

When Redmon released YOLOv3, he wrote about the implications of having a classifier like YOLO. “If humans have a hard time telling the difference, how much does it matter?” On a more serious note: “What are we going to do with these detectors now that we have them?” He also insisted that computer vision researchers have a responsibility to consider the harm their work might be doing and to think of ways to mitigate it.

“We owe the world that much”, he said.

This whole debate led to these questions, which might go unanswered forever:

  • Should the researchers have a multidisciplinary, broader view of the implications of their work?
  • Should every research be regulated in its initial stages to avoid malicious societal impacts?
  • Who gets to regulate the research?
  • Shouldn’t experts create more awareness rather than just quit?
  • Who should pay the price: the innovator, or those who apply the innovation?

One big complaint that people have against Redmon’s decision is that experts should not quit. Instead, they should take the responsibility of creating awareness about the pitfalls of AI.

A 2017 Forbes article, “Should AI Be Regulated?”, pointed out that AI is a fundamental technology: a field of research and development comparable to quantum mechanics, nanotechnology, biochemistry, nuclear energy, or even math, to cite a few examples. All of them could have scary or evil applications, but regulating them at the fundamental level would inevitably hinder advances, some of which could have a far more positive impact than we can envision now. What should be heavily regulated is their use in dangerous applications, such as weapons. This leads to the tough questions: Who should regulate it? At what level?

Surge Pricing – Is it fair?

By Sudipto Dasgupta | July 3, 2020

What is Surge Pricing?

Surge pricing by rideshare companies like Uber and Lyft originates from the idea of adjusting ride prices to match driver supply to rider demand at any given time. During periods of excess demand, the number of riders is high compared to the number of available cars, and customers must wait longer. The rideshare companies then increase their normal fares by a multiplier that depends on real-time demand. Whenever rates are raised due to surge pricing, the app lets riders know. Some riders will choose to pay, while others will wait a few minutes to see if the rates go back down. Most regular users of these apps will have encountered surge pricing as depicted below.

Fig 1 : Surge Pricing
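As a rough illustration of the mechanism described above, a surge multiplier might be derived from the ratio of requests to available drivers. This sketch is purely hypothetical; the actual algorithms are proprietary and factor in far more signals.

```python
# Hedged sketch of how a surge multiplier *might* follow from the
# real-time demand/supply ratio.  The formula and the cap are
# invented for illustration, not taken from any rideshare company.

def surge_multiplier(requests, available_drivers, cap=5.0):
    """Return a fare multiplier >= 1.0 based on demand/supply imbalance."""
    if available_drivers <= 0:
        return cap
    ratio = requests / available_drivers
    # No surge while supply covers demand; scale up (and cap) otherwise.
    return min(cap, max(1.0, ratio))

print(surge_multiplier(80, 100))   # demand below supply -> 1.0
print(surge_multiplier(180, 100))  # 1.8x surge
print(surge_multiplier(900, 100))  # capped at 5.0
```

Even in this toy version, the rider only ever sees the final multiplier, not the inputs that produced it, which is the opacity the rest of this post is concerned with.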

A brief history

Surge pricing is based on the concept of dynamic pricing, which is not new. In the 1950s, the New York subway faced a problem: at peak times it was overcrowded, while at other times the trains were empty. William Vickrey suggested abandoning the flat-rate fare in favor of a fare structure that takes into account the length and location of the ride and the hour of the day. This was called peak-load pricing.

The rideshare apps extend the concept of peak-load pricing through their surge prices. The difference is that the price calculation depends not only on peak load but also on other factors, such as driver availability, weather, zip code, special events, and rush hours, to mention a few. The apps rely on algorithms that are opaque to the consumer to compute the multiplier; the factors that influence the prices are not transparent to the riders.

Consciously or unconsciously, we as riders accept the convenience in exchange for the cost. We may not even check the fare multiplier when taking a convenient ride.

Fairness Concerns

The opaque algorithms behind surge pricing raise multiple fairness concerns. Prices on Uber and Lyft rose to as much as five times normal rates in the immediate aftermath of a deadly shooting in downtown Seattle in January 2020. The automated surge pricing lasted about an hour and drew widespread criticism before the companies manually reset prices to normal levels. In 2015, Spencer Meyer, a Connecticut Uber rider, sued Uber co-founder and then-CEO Travis Kalanick, alleging that Uber was engaging in price fixing. Uber also came under criticism for hiking prices during a hostage crisis unfolding in Sydney in 2014, for which it subsequently apologized.

An analysis conducted by Akshat Pandey and Aylin Caliskan from George Washington University indicates possible disparate impact due to social bias based on age, house pricing, education, and ethnicity in the dynamic fare pricing models used by ride hailing applications.

Fig2 : City of Chicago ride-hailing data. The colors in each chart designate the average fare price per mile for each census tract.

The authors analyzed 100 million rides in the city of Chicago from November 2018 to December 2019 and reported increases in ride-hailing prices when riders were picked up or dropped off in neighborhoods with a low percentage of 1) people over the age of 40, 2) people with a high school education or less, and 3) houses priced under the median for Chicago.
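The kind of tract-level aggregation behind this analysis can be sketched in a few lines of Python. The field names and values below are assumptions for illustration, not the Chicago dataset's actual schema.

```python
from collections import defaultdict

# Illustrative sketch: average fare per mile grouped by pickup census
# tract.  Tract IDs, fares, and distances are invented examples.
rides = [
    {"pickup_tract": "17031010100", "fare": 12.50, "miles": 5.0},
    {"pickup_tract": "17031010100", "fare": 9.00,  "miles": 3.0},
    {"pickup_tract": "17031020200", "fare": 30.00, "miles": 6.0},
]

per_mile = defaultdict(list)
for ride in rides:
    per_mile[ride["pickup_tract"]].append(ride["fare"] / ride["miles"])

avg_fare_per_mile = {tract: sum(v) / len(v) for tract, v in per_mile.items()}
print(avg_fare_per_mile)
```

Tract-level averages like these, joined with census demographics for each tract, are what allow a disparate-impact pattern of the kind the authors report to surface.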

Surge pricing manifests as decisional interference for riders, and biases in the training data influence the outcomes of the algorithm. Hence the questions arise: What data are the algorithms trained on? How can the rideshare apps reduce the opacity of their algorithms? Is it possible to explain the AI models behind the algorithms, given that the apps are used by a diverse group of riders with different levels of technical understanding?

What do ridesharing companies have to say?

“When demand for rides outstrips the supply of cars, surge pricing kicks in, increasing the price,” Uber said, adding that surge pricing reduces the number of requests made during a peak time while drawing more drivers to busy areas. “As a result, the number of people wanting a ride and the number of available drivers come closer together, bringing wait times back down.”

Looking Forward

Answering the questions on the opacity of the algorithms is important for addressing the fairness concerns. Can the complex algorithms be exposed to the users? The ride sharing apps can have simpler mechanisms to explain the multiplication factor and make it more predictable for the riders.

Addressing the Weaponization of Social Media to Spread Disinformation

By Anonymous | July 3, 2020

The use of social media platforms like Facebook and Twitter by political entities to present their perspectives and goals is arguably a key aspect of their utility. However, social media is not always used in a forthcoming manner. One such example is the use of these sites by Russia to spread disinformation by exploiting platform engagement and the cognitive biases of the users. The specific mechanisms of their techniques are documented and summed up as a “Firehose of Falsehood”, which serves as a guide to identify specific harms that we can proactively guard against.

The context of the analysis was rooted in the techniques being employed around the time of Russia’s 2014 invasion of the Crimean Peninsula. The techniques employed would go on to be reused to great effect in 2016, when they were used against the United Kingdom in their Brexit referendum, as well as the United States in their presidential election. More recently, the Firehose has been used against many other targets, including 5G cellular networks and vaccines.

While their techniques share some similarities with those of their Soviet predecessors, the key characteristics of Russian propaganda are that it is high-volume and multichannel, continuous and repetitive, and lacking commitment to objective reality or consistency. This approach lends itself well to social media platforms, as the speed at which new false claims can be generated and broadly disseminated far outstrips the speed at which fact checkers operate – polluting is easy, but cleaning up is difficult.

Figure 1: The evolution of Russian propaganda towards obfuscation and using available platforms
(Sources: Amazon, CBS)

The Firehose also emphasizes exploiting audience psychology in order to disinform. The cognitive biases exploited include the advantage of the first impression, the use of information overload to force reliance on shortcuts for determining trustworthiness, the use of repetition to create familiarity, the use of evidence regardless of veracity, and peripheral cues such as creating the appearance of objectivity. Repetition in particular works because familiar claims are favoured over less familiar ones: frequent repetition of a message breeds familiarity, which in turn leads to acceptance. From there, confirmation bias further entrenches those views.

Figure 2: A cross-section of specimens from the 2016 election
(Source: Washington Post)

Given the nature of the methods outlined, some suggested responses are:

1. Do not rely solely on traditional techniques of pointing out falsehoods and inconsistencies
2. Get ahead of misinformation by raising awareness and making efforts to expose manipulation campaigns
3. Focus on thwarting the desired effects by redirecting behaviours or attitudes without directly engaging with the propaganda
4. Compete by increasing the flow of persuasive information
5. Turn off the flow by undermining the broadcast and message dissemination through enforcement of terms of service agreements with internet service providers and social media platforms

From an ethical standpoint, some of the proposed measures have some hazards of their own – in particular, the last suggestion (“turn off the flow”) may be construed as viewpoint-based censorship if executed without respect for the users’ autonomy in constructing their perspectives. As well, competing may be tantamount to fighting fire with fire, depending on the implementation. Where possible, getting ahead of the misinformation is preferable, as forewarning acts as an inoculant for the audience – by getting the first impression and highlighting attempts to manipulate the audience, it prepares the users to critically assess new information.

As well, if it’s necessary to directly engage with the claims being made, solutions proposed are:

1. Warn at the time of initial exposure to misinformation
2. Repeat the retraction/refutation, and
3. Provide an alternative story while correcting misinformation, to immediately fill the information gap that arises

These proposed solutions are less problematic than the prior options, as limiting the scope to countering the harms of specific instances of propaganda, despite the limitations highlighted above, preserves respect for users to arrive at their own conclusions.

In fighting propaganda, how can we be sure that our actions remain beneficent in nature? In understanding the objectives and mechanics of the Firehose, we also see that there are ways to address the harms being inflicted in a responsible manner. By respecting the qualifications of the audience to exercise free will in arriving at their own conclusions and augmenting their available information with what’s relevant, we can tailor our response to be effective and ethical.

Sources:

  • The Russian “firehose of falsehood” propaganda model: Why it might work and options to counter it
  • Your 5G Phone Won’t Hurt You. But Russia Wants You to Think Otherwise
  • Firehosing: the systemic strategy that anti-vaxxers are using to spread misinformation
  • Release of Thousands of Russia-Linked Facebook Ads Shows How Propaganda Sharpened
  • What we can learn from the 3,500 Russian Facebook ads meant to stir up U.S. politics

Your Health, Your Rights: medical information not covered by HIPAA

By Adam Sohn | June 26, 2020


HIPAA (the Health Insurance Portability and Accountability Act of 1996) protects your personal medical information as possessed by a medical provider. Under HIPAA, you may obtain your record, add information to your record, seek to change your record, learn who sees your information, and, perhaps most importantly, exercise limited control over who sees your information.

HIPAA protection is enshrined in law. However, the internet and artificial intelligence have provided additional vectors by which personal medical information can be ascertained and distributed outside of a person’s control. The implications of a data release from any vector are comparable to those of sharing from a medical setting.

Technology Generates and Discloses Medical Information
The retail sector is an example of entities that deal in medical information yet are not bound by HIPAA for most transactions. As customers purchase a market basket of products aligned with a certain medical status, astute predictive analytics systems operated by a retailer can infer that status. This medical status is free from HIPAA protections because it has no origins in a medical setting; furthermore, the status is inferred rather than provided by the customer.
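A toy sketch of how such an inference could work is below. The products, weights, and threshold are entirely invented; real retail analytics systems are proprietary and far more sophisticated.

```python
# Hedged sketch: inferring a medical status (here, pregnancy) from a
# purchase basket by summing per-product signal weights.  All values
# are invented for illustration.

PREGNANCY_SIGNALS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.6,
    "cotton balls (large bag)": 0.2,
}

def pregnancy_score(basket):
    """Sum the signal weights of the products in a customer's basket."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "prenatal vitamins", "bread"]
print(pregnancy_score(basket) >= 0.5)  # crosses an invented marketing threshold
```

The key point is that none of these individual purchases is sensitive on its own; it is the aggregation that produces a medical inference, entirely outside HIPAA's reach.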

Famously, the astuteness of Target’s predictive analytics was on display in 2012, when coupons for baby supplies were sent to the home of a teenage girl. While it is alarming enough that Target has a database of inferred medical information (in this case, pregnancy), Target went a step further by disclosing this information for anyone handling the teenage girl’s mail to happen upon. This triggered a public understanding of the privacy risks of data aggregation, where mundane data becomes a building block of sensitive information.

Exploring Privacy Protections
Exploring the state of protections that do exist to prevent unwanted disclosures such as the Target case reveals a picture of a system that has room to mature.

– One way to prevent unwanted disclosure is to personally opt out of mailed advertisements from Target, per instructions in Target’s Privacy Policy. However, it is unrealistic to expect a customer to foresee such a need.
– Another method is to submit a complaint to the FTC regarding a violation of a Privacy Policy. However, Target’s Privacy Policy is vague on these matters.

Expanding the view to regulatory changes that do not yet exist but are in the approval process, there is a relevant bill in Congress. CONSENT (the Customer Online Notification for Stopping Edge-provider Network Transgressions Act) was brought to the Senate in 2018 and is currently under review in the Committee on Commerce, Science, and Transportation. CONSENT would turn the tide in the public’s favor with regard to the security of personally identifying information (PII) by requiring a distinct opt-in for sharing or using PII. However, the bill applies only to data transacted online, which is only a portion of the relationship a consumer has with a retailer.

Clearly, consumer behavior is trending toward online purchases. However, brick-and-mortar purchasing cannot be overlooked, as it is also increasing.

Advice to Consumers
Advice to Consumers
In light of the general laxness of protections, the methods for keeping your information secure fall under the adage caveat emptor: buyer beware. For individual consumers, options to keep your information safe include:
– Only share the combination of PII and medical information in a setting where you are explicitly protected by a Privacy Policy.
– Forgo certain conveniences in order to remain obscure. This entails using cash in a brick-and-mortar store and refraining from participating in loyalty programs.

[New York Times – Shopping Habits]
[Consumer Privacy Bill of Rights]

Discriminatory practices in interest-based advertising

By Anonymous | June 26th, 2020

Economics and ethics

The multi-billion dollar online advertising industry is incentivised to ensure that ad dollars convert into sales, or at least high click-through rates. Happy clients equate to healthy revenues. The way to realize this goal is to match the right pair of eyeballs for each ad – quality, not quantity, matters.

Interest-based ads (sometimes referred to as personalized or targeted ads) are strategically placed for specific viewers. The criteria for viewer selection could be immutable traits like race, gender, and age, or online behavioral patterns. Unfortunately, both approaches are responsible for amplifying racial stereotypes and deepening social inequality.

Baby and the bathwater

Dark ads exclude a person or group from seeing an ad by targeting viewers based on an immutable characteristic, such as sex or race. This is not to be confused with the notion of big data exclusion, where ‘datafication unintentionally ignores or even smothers the unquantifiable, immeasurable, ineffable parts of human experience.’ Instead, dark ads refer to a deliberate act by advertisers to shut out certain communities from participating in its product or service offering.

Furthermore, a behaviorally targeted ad can act as a social label even when it contains no explicit labeling information. When consumers recognize that the marketer has made an inference about their identity in order to serve them the ad, the ad itself functions as an implied social label.

Source: The Guardian

That said, it’s not all bad news with these personalized ads. While there are calls to simply ban targeted advertising, one could argue for the benefits of having public health campaigns, say, delivered in the right language to the right populace. Targeted social programs could also have better efficacy if they reach the eyes and ears that need them. Taking away this potentially powerful tool for social good is, at best, a lazy approach to solving the conundrum.

Regulatory oversight

In 2018, the U.S. Department of Housing and Urban Development filed a complaint against Facebook, alleging that the social media platform had violated the Fair Housing Act. Facebook’s ad targeting tools enabled advertisers to express unlawful preferences by suggesting discriminatory options, and Facebook effectuated the delivery of housing-related ads to certain users and not others based on those users’ actual or imputed protected traits.

Source: The Truman Library

A 2016 investigation by ProPublica found that Facebook allowed advertisers to create housing ads that excluded black people. Its privacy and public policy manager defended the practice, underlining the importance for advertisers of being able to both include and exclude groups as they test how their marketing performs – never mind that A/B testing itself often straddles a grey area in the ethics of human subject research.

Source: ProPublica


Insofar as revenues for online businesses are driven by advertising, which is dictated by user traffic, interest-based ads are here to stay. Stakeholders with commercial interests will continually defend their marketing tools with benevolent use cases. Lawmakers need to consistently address the harm itself: that deliberate exclusions (and not just those arising from algorithmic bias and opacity) serve to exacerbate inequalities from discriminatory practices in the physical world.

In the example above, the HUD authorities did well to call out Facebook’s transgressions, which are no less serious than those of the Jim Crow era. As a society, we moved forward with Brown v. Board of Education. Let us not slip back into complacency by justifying segregatory acts, or become complicit in a new Plessy v. Ferguson.

Data-driven policy making in the Era of ‘Truth Decay’

By Silvia Miramontes-Lizarraga

Advances in digital technology have made it possible to collect, store, and analyze large amounts of data containing information on various subjects of interest, otherwise known as Big Data. One effect of this field is the increase in data-driven decision making in business, technology, and sports, as these methods have been proven to boost innovation, productivity, and economic growth. But if the availability of data has been increasing so significantly, why do we lack data-driven methods in policy making to target issues of social value?

Background on Policy-making:

Society expects the government to deliver solutions to social issues; its challenge is thus to improve the quality of life of its constituents. Public policy is a goal-oriented course of action encompassing a series of steps: 1) Recognition of the Problem, 2) Agenda Setting, 3) Policy Formulation, 4) Adoption of Policy, 5) Policy Implementation, and 6) Policy Evaluation. This type of decision making involves numerous participants. Consequently, the successful implementation of these policies cannot be ideologically driven. The process requires government officials to be transparent, accountable, and effective.

So how could these methods help?

The lack of utilization of data-driven methods is conspicuous when addressing the many problems of our educational system. For example, government officials could utilize data to efficiently locate the school districts in need of more resources. Similarly, when addressing healthcare, they can successfully compare plans to determine the best procedures and most essential expenditures to complete in the middle of a global pandemic. Thus, by successfully adopting these new technologies, our officials can begin closing ‘data gaps that have long impeded effective policy making’. However, in order to achieve this, government officials and their constituents must develop the awareness and appreciation of concrete and unbiased data.

Why ‘Truth Decay’ complicates things

Although there is potential in implementing data-driven methods to better inform policy makers, we have stumbled upon a hurdle: the ongoing rise of ‘Truth Decay’, a phenomenon described in a RAND initiative that aims to restore the role of facts and analysis in public life.

In recent years, we have heard about the problem of misinformation and fake news, but most importantly, we have reached a point where people no longer agree on basic facts. And if we do not agree on basic facts, how can we possibly address social issues such as education, healthcare, and the economy?

Whether we have heard it from a friend in the middle of a Facebook political comment war, or from a random conversation on the street, we have come to realize that people tend to disagree on basic objective facts. More often than not, we get very long texts from friends filling us in on their latest social media comment debacle with ‘someone’ who does not seem to ‘agree’ with the presented facts – the facts are drowned out by their opinions. The line between opinion and facts fades to the point where facts are no longer disputed, but instead rejected or simply ignored.

So what now? How do we actively fight this decay to keep the validity of facts afloat and demystify quantitative methods to influence our representatives, and possibly transform them into better informed policy makers?

First Steps

Whenever we find someone with distinct political views, say from an opposing political party, we could try to convince them to look at the issue from another perspective. Perhaps, we can point out the disparity between facts and beliefs in an understated way.

We can also actively avoid tribalization. Rather than secluding ourselves from groups with opposing political views, we can try to build understanding and empathy.

Additionally, we can also change our attitude toward ourselves and others. We must acknowledge that sometimes we need to change our beliefs in order to grow. Meaning that making our beliefs part of our identity is not the optimal way to fight this ongoing ‘Truth Decay’. It is important to remember that our beliefs may be inconsistent over time, and thus, we are not defined by them.

Lastly, we can embrace a new attitude: call yourself a truth seeker, try your best to remain impartial, and be curious. Keeping your mind open might allow you to learn more about yourself and others.

RAND Study

Police Shootings: A Closer Look at Unarmed Fatalities

By Anonymous

Last year, fifty-five people were killed by police shootings while “unarmed.” This number comes from the Washington Post dataset of fatal police shootings, which is aggregated by “local news reports, law enforcement websites and social media” as well as other independent databases. In this dataset, the recorded weapons that victims were armed with during the fatal encounter range from wasp spray to barstools. Here is a breakdown of the types of arms that were involved in the fatal police shootings of 2019.


We see a large number of fatalities among people who were armed with guns and knives, but also vehicles and toy weapons. In my opinion cars and toys are not weapons and would more appropriately fit the category of “unarmed.” But what exactly does “unarmed” mean? The basic Google search definition is “not equipped with or carrying weapons.” Okay, well what is a weapon? Another Google search defines weapons as “a thing designed or used for inflicting bodily harm or physical damage.” Toys and cars were not designed for inflicting bodily harm, but may have been used to do so. Now with the same logic, would we call our arms and legs “weapons,” since many people have used their appendages to inflict bodily harm? No. So why do we distinguish the cars and toys from the “unarmed” status?

This breakdown of categories introduces bias into the data. When categorizing the armed status of victims of police shootings, the challenge of specificity arises. Some may find value in having more specific descriptions for each case in the dataset, but this comes at the cost of separating cases that really should be in the same bucket; here, “vehicles” and “toy weapons” should be contained in the “unarmed” bucket rather than in their own separate categories. Excluding those cases yields counts lower than the actual number of unarmed people killed by police. Including the cases that involved vehicles and toy weapons brings the count of unarmed fatalities from 55 to 142. In other words, the bias introduced by overly granular categorization underestimated the number of unarmed victims of police shootings in 2019.
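The re-bucketing arithmetic is simple to check. The vehicle-and-toy total below (87) is implied by the post's figures of 55 and 142, not broken out in the original dataset description.

```python
# Recomputing the post's re-bucketing: treating "vehicle" and
# "toy weapon" cases as unarmed.  The 87 combined vehicle/toy cases
# are inferred from the difference between the two counts given.

counts_2019 = {"unarmed": 55, "vehicle_and_toy": 87}

merged_unarmed = counts_2019["unarmed"] + counts_2019["vehicle_and_toy"]
print(merged_unarmed)  # prints 142
```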

Now let’s look at the breakdown by race, specifically White versus Black (non-Hispanic).

  • Unarmed (Washington Post’s definition): 45% of victims were White, 25% Black
  • Toy weapons: 50% White, 15% Black
  • Vehicles: 41% White, 30% Black
  • All of the above combined: 44% White, 25% Black

Now, some may interpret this as “more White people are being killed by police,” and that is true, but consider the populations of White and Black folks in the United States. According to the 2019 U.S. Census Bureau, 60% of the population is White, while only 13% is Black or African American. So when we compare, by race, the percentage of unarmed people killed by police with each group’s percentage of the U.S. population, we see a disproportionate effect on Black folks. If the experiences of Black and White folks were the same, we would expect only 13% of police-shooting victims to be Black and 60% to be White. Instead, we see a proportionally much lower number for White folks (44% of unarmed victims) and a much higher number for Black folks (25% of unarmed victims).
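One way to express this disparity is the ratio of each group's share of unarmed victims to its share of the population, using the figures above:

```python
# Disparity ratios from the post's figures: share of combined unarmed
# victims (44% / 25%) divided by share of the 2019 U.S. population
# (60% / 13%).  A ratio of 1.0 would mean victimization proportional
# to population.

victim_share = {"White": 0.44, "Black": 0.25}
population_share = {"White": 0.60, "Black": 0.13}

for race in victim_share:
    ratio = victim_share[race] / population_share[race]
    print(f"{race}: {ratio:.2f}x their population share")
```

By this measure, White victims appear at roughly 0.73 times their population share, while Black victims appear at roughly 1.92 times theirs.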

This highlights the disproportionate effect of police brutality on Black folks, yet the point estimates provided by this data may not be fully comprehensive. When police reports are fabricated, and when horrific police killings of Black and Brown folks go under the radar, we risk the data provided by the Washington Post being further biased. However, this bias would suggest an even greater disparity in the victimization of unarmed Black and Brown folks by police shootings. As we consider data in our reflections on current events, we have to be mindful of the potential biases that may exist in the creation and collection of the data, as well as in our interpretation of it.

Digital Equity

By Anusha Praturu | June 19, 2020

It’s no secret that in 2020, it is becoming increasingly difficult to participate in modern society without some level of access and literacy with basic technologies, the Internet being chief among them. And with the widespread growth of WiFi hotspots, smartphones, and other Internet-capable devices, it’s becoming easier for most of us to remain connected all the time. Despite this, the technology gap appears to be widening for lower-income American households.

According to a 2019 Pew Study detailed in the figure below, 44% of individuals with annual household incomes under $30,000 do not have home broadband services, and 46% do not have access to a traditional computer. These individuals are becoming progressively more dependent on smartphones for Internet access, even for tasks that are traditionally undertaken on a larger screen, such as applying for jobs or pursuing education.

Pew Research graphic depicting lower levels of technology adoption among lower income American households


Obviously these issues have many downstream implications, which contribute to perpetuating a cycle of inequality. These circumstances start to describe an issue known as digital inequity.

Defining Digital Equity and Digital Inclusion
Simply put, digital equity refers to people’s equal ability to access and use the necessary technology to participate in the social, democratic, and economic activities of modern society. The National Digital Inclusion Alliance (NDIA) defines Digital Equity as follows:

Digital Equity is a condition in which all individuals and communities have the information technology capacity needed for full participation in our society, democracy and economy. Digital Equity is necessary for civic and cultural participation, employment, lifelong learning, and access to essential services.

A related concept, Digital Inclusion, refers to the acts of remediation that governments, activists, and other stakeholders are proposing and attempting in order to achieve digital equity. NDIA describes it as:

Digital Inclusion refers to the activities necessary to ensure that all individuals and communities, including the most disadvantaged, have access to and use of Information and Communication Technologies (ICTs). This includes 5 elements: 1) affordable, robust broadband internet service; 2) internet-enabled devices that meet the needs of the user; 3) access to digital literacy training; 4) quality technical support; and 5) applications and online content designed to enable and encourage self-sufficiency, participation and collaboration. Digital Inclusion must evolve as technology advances. Digital Inclusion requires intentional strategies and investments to reduce and eliminate historical, institutional and structural barriers to access and use technology.

Causes of Inequity
As you can imagine, the causes of digital inequity are deep-rooted and manifold. Some of the primary causes, as detailed by the Benton Institute, include a lack of robust infrastructure and discrimination in delivering technology and digital services to specific areas or populations. Other barriers stem from broader issues such as disparities in socio-economic status, digital literacy, accommodations for special needs and disabilities, or resources for non-English speakers. These multifaceted sources of inequity cannot simply be addressed with a single piece of legislation or grant of funding. Rather, they require radical systemic change, including substantial and ongoing investment in lower-income and rural communities, as well as broader awareness of the growing issue.

Ongoing Attempts at Remediation
In April 2019, Senator Patty Murray of Washington introduced the Digital Equity Act of 2019. This act would establish grants for the purposes of (1) promoting digital equity, (2) supporting digital inclusion activities, and (3) building capacity for state-led efforts to increase adoption of broadband by their residents. These efforts would contribute to achieving the goals outlined in the graphic below.

Infographic detailing the three primary goals of the Digital Equity Act of 2019

Since first being introduced in the Senate in April 2019 and subsequently reintroduced in the House in September 2019 by Rep. Jerry McNerney of California, the act has yet to face a vote in either chamber.

In addition to legislation, there are several non-profit organizations committed to addressing the issue of digital inequity in the US. A leader among these organizations is the aforementioned National Digital Inclusion Alliance, which “combines grassroots community engagement with technical knowledge, research, and coalition building to advocate on behalf of people working in their communities for digital equity.” Organizations such as NDIA take a many-sided approach to digital inclusion, including spreading awareness, fundraising, research, advocacy, and lobbying policymakers.

Outstanding Barriers to Digital Equity
On top of those outlined above, there are still several barriers to achieving digital equity in the US. The mere fact that legislation on the issue was introduced over a year ago and has made no progress through the legislative branch is a clear indication that more public advocacy and awareness are needed for such action to gain momentum. Another issue that must be addressed by digital inclusion efforts is the rapid pace of new developments in technology. Inclusion efforts must stay up to date with the evolving technological landscape. Digital equity cannot be achieved if lower-income and otherwise disadvantaged populations are restricted to outdated technology or have insufficient access to the latest developments.

The bottom line is that the road to digital equity is long and not without obstacles, but efforts to close the gap are long overdue. Further, the gap will only continue to widen with each passing year in which technology advances but no action is taken to promote digital inclusion. The first of many steps will be to lobby policymakers to push the Digital Equity Act through to a vote, so that the issue of inequity might finally start to be addressed and penetrate the American consciousness in a meaningful way.

You Already Live in a Smart Home, but Jarvis Doesn’t Work For You

You Already Live in a Smart Home, but Jarvis Doesn’t Work For You
By Isaac Chau | June 5, 2020

Ask people to describe a home in “the future,” and more than a few will describe a house that’s something like Tony Stark’s mansion: lights that turn on when you walk in, appliances that brew your coffee when your alarm goes off, and voice-controlled everything, from ambient music to air conditioning, all done through conversation with your butler that’s actually a computer.

On second thought, that sounds a lot like a modern smart home, consisting of Google Assistant or Amazon Alexa controlling and coordinating the actions of products like smart refrigerators, smart light bulbs and switches, smart doorbells, smart TVs, and of course, smart toasters. Sure, Google Assistant and Alexa might not be as interesting to talk to as Jarvis or Rosie Jetson, but given that the vast majority of US adults own smartphones and the fact that voice-controlled home speakers have been incredibly affordable for years now, it’s hard to argue that we aren’t already living in the future. However, I’m willing to bet that Tony Stark never worried about his A.I. butler sharing his personal information. Indeed, the sci-fi future many of us already live in takes on a dystopic tint when you look closer at the devices that enable it.

Google and Amazon, the two main players in the smart-home market, provide the masses inexpensive voice-controlled speakers for basically no profit because what users actually trade for the convenience of a smart-home device is not money but rather access to their homes. Through these devices, users can buy products and services, and the speakers direct them to choices that result in money going back to the company that sold them their speaker. Users’ queries to the speakers are collected and integrated with web browsing, location tracking, their social network, and other information to further personalize ads that can be delivered to any of their internet-connected devices.

You might say, “So what?” to the smart-speaker business model. Google and Amazon’s privacy policies state they do not share your information with third parties (besides those working for them). Hundreds of millions of people already entrust these companies with their information in the form of online shopping and services, and while they can sometimes be a little creepy, targeted ads are widely accepted and can even improve the online experience. If Google and Amazon can hold up their end of the privacy agreements, what’s wrong with building a smart home around their voice assistants?

The problem lies with other internet-connected products necessary for a smart home, many of which have more lax data privacy and security infrastructure. For example, Ring, which sells home security systems based around video-doorbells, has been criticized heavily for its practices which left users vulnerable to hackers, not to mention encouraging users to share video with police departments. The ethics of helping police surveil neighbors aside, the ease with which bad actors can access improperly set up Ring video cameras is alarming and directly opposes the intentions of any consumer security-minded enough to install video cameras in their home.

Televisions are an even more popular smart device than doorbells, and they too can compromise the privacy of a home. Smart televisions are connected to the internet and can recognize the content you watch in order to deliver targeted content. Compared to Google and Amazon, however, television manufacturers are freer with user information and more prone to security breaches. Samsung controlled more than 40% of the North American television market in 2019, and its privacy policy allows it to share with third parties any information it collects from users, including voice recordings. In 2017, the Federal Trade Commission fined Vizio for collecting user information without consent, and that same year, WikiLeaks released documents alleging that the CIA can hack smart TVs and use their microphones as listening devices.

There are few new non-smart TVs for sale. Just like with smart speakers, manufacturers make little money selling TV hardware, at least at the lower and middle end of the market. The real money is in data, and the purchase of a television means years of access to observing a household’s behavior. And unlike with smart light bulbs, refrigerators, and doorbells, there are no reasonable dumb alternatives to smart TVs for consumers who care about their privacy. As consumer demand grows for smarter, connected home goods, other products may end up just like the television: always watching us back. There’s a chance we’ll all be living in smart homes soon, whether we want to or not.

The Police State Is Monitoring Your Social Media Activity and Is Encouraged To Track and Arrest You For Exercising Your First Amendment Rights

The Police State Is Monitoring Your Social Media Activity and Is Encouraged To Track and Arrest You For Exercising Your First Amendment Rights
By Anonymous | June 5, 2020

In light of the nationwide protests following outrage over the deaths of George Floyd and several others at the hands of police officers this past week, the nation is as polarized as ever. Millions of citizens are supporting grassroots organizations that aim to highlight systemic injustice and advocate for police reform, while some police departments, city governments, and other political actors are pushing back against the gatherings.

The president of the United States himself has verbalized his position against the demonstrations that are occurring across the country, antagonizing protestors and encouraging police to become more forceful in their suppression of citizens’ First Amendment rights. Just weeks after the President commended protestors for opposing the nationwide lockdown in response to Covid-19, his rhetoric has quickly shifted to condemnation of Black Lives Matter protestors. Audio released from the President’s call with the Governors regarding how to handle the demonstrations reveals that Trump said, “You’ve got to arrest people, you have to track people, you have to put them in jail for 10 years and you’ll never see this stuff again.” Trump’s overt endorsement of the surveillance and incarceration of citizens is alarming and provides a necessary junction for discussion about the ethics of data monitoring by law enforcement. When the President encourages police across the country to track and persecute civilians, especially those in ideological opposition to the Administration and the police state, many Americans are at risk.

Law enforcement can use, and in the past has used, data from mobile devices to investigate citizens. From companies like Geofeedia and their social media monitoring, to Google’s Sensorvault database of location histories, to companies like Securus, which used geolocation data from cell phone carriers to track citizens, Americans face ubiquitous threats to their privacy. These three instances of data collection and use by law enforcement elucidate the argument for greater corporate responsibility and the urgent need for legislative reform. In this post, the focus will be on Geofeedia and the risks that this type of data collection and monitoring brings to light.

Figure 1

In 2016, the ACLU uncovered that a company called Geofeedia had been providing personal data about protestors to police departments. Geofeedia aggregated and sold data accessed through Facebook, Twitter, and Instagram to multiple police departments. This data was used to track the movements of protestors, which led to the identification, interception, and arrest of several people. This type of surveillance was occurring in notable hotspots of civil unrest like Ferguson in 2014 (following the murder of Michael Brown) and Baltimore in 2015 (following the murder of Freddie Gray), but the ACLU of California published public records showing that police departments across the state were rapidly acquiring social media monitoring software to monitor activists. There has been very little debate about the ethics of this technology, or oversight of the legality of its use. Only after the ACLU reviewed public records and released the information did the social media platforms suspend Geofeedia’s access to their data. Still, many civil liberties activists have voiced reasonable concerns about the lack of foresight and responsibility shown by these social media companies. Nicole Ozer, technology and civil liberties policy director for the ACLU of California, made the point that, “the ACLU shouldn’t have to tell Facebook or Twitter what their own developers are doing. The companies need to enact strong public policies and robust auditing procedures to ensure their platforms aren’t being used for discriminatory surveillance.”

Ozer’s point is especially poignant when considering that Geofeedia is just the tip of the iceberg. Despite the public criticism of Geofeedia by the social media companies involved, law enforcement’s use of social media profiling has not declined. Myriad other companies perform similar services but were not exposed in the ACLU report, and Geofeedia emails even detailed that their contract with Facebook allowed for a gradual reactivation of data access.

With a federal administration that is visibly callous toward and ignorant of the Constitution, it is as important as ever for companies and local legislators to fight to protect the data rights of citizens and ensure that technology companies are acting in the best interests of the people. Individuals who show their solidarity with victims of police brutality and systemic racism could be subjected to unconstitutional surveillance and oppression because of the content of their speech on social media or their presence at public assemblies. If the police use technological tools to continually monitor the movement of citizens, certain individuals will essentially be made political prisoners of a country under martial law that is quickly demonstrating its totalitarian nature.

Figure 2