Dangers of Predicting Criminality

By Kritesh Shrestha | March 9, 2022

Facial recognition as a technology has seen major improvements within the last five years, and today it is commonly used commercially for biometric identification. According to testing conducted by the National Institute of Standards and Technology, the highest-performing facial identification algorithm as of April 2020 had an error rate of 0.08%, compared to 4.1% for the highest-performing algorithm in 2014. [3] Though these improvements are commendable, concerns arise when these algorithms are applied to high-stakes issues such as criminality.

Tech to Prison Pipeline
On May 5th, 2020, Harrisburg University announced that a publication entitled “A Deep Neural Network Model to Predict Criminality Using Image Processing” was being finalized. In this publication, a group of Harrisburg University professors and a Ph.D. student claim to have developed automated facial recognition software capable of predicting whether someone is likely to become a criminal. [4] This measure of criminality is said to be 80% accurate with no racial bias, using only a picture of an individual’s face. The data behind the software are biometric and criminal legal data provided by the New York City Police Department (NYPD). While the intent of this software is to help prevent crime, it caught the eye of 2,435 academics who signed an open letter demanding that the research remain unpublished.
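A quick back-of-the-envelope calculation shows why a headline figure like “80% accuracy” can mislead on its own. The numbers below (80% sensitivity, 80% specificity, and a 5% base rate of future criminality) are illustrative assumptions, not figures from the study:

```python
# Illustrative only: sensitivity, specificity, and base rate are assumed,
# not taken from the Harrisburg study.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a person flagged as 'criminal' actually is one."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# With 80% sensitivity/specificity and a 5% base rate:
ppv = positive_predictive_value(0.80, 0.80, 0.05)
print(f"{ppv:.1%}")  # 17.4% -- most people flagged are false positives
```

Under these assumed numbers, roughly five out of six people the model flags would never commit a crime, which compounds the concern that the training labels themselves are unreliable.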

Those that signed the open letter, the Coalition for Critical Technology (CCT), raised concerns over the data used to create the algorithm. The CCT argue that data generated by the criminal justice system cannot be used for classifying criminality because the data would be unreliable. [5] The dataset contains a history of racially biased and unjust convictions, which would feed that same bias into the algorithm. Another study, _“The ‘Criminality from Face’ Illusion”_, which looks into the plausibility of predicting criminality with facial recognition, asserts, “there is no coherent definition on which to base development of such an algorithm. Seemingly promising experimental results in criminality-from-face are easily accounted for by simple dataset bias”. [2] A study conducted through the National Criminal Justice Reference Service concluded that for sexual assault alone, wrongful conviction occurred at a rate of 11.6%. [6] The use of unreliable data to classify an individual’s likelihood to commit crimes is harmful, as it would validate unjust practices that have occurred over the years.

If an individual were wrongly convicted and awaiting exoneration, their family members or those that look like them might be labeled as “likely” to commit crimes. The study announced by Harrisburg University has since been pulled from publication following public discussion and pressure from the CCT.

Resurgence of Physiognomy
While the use of facial recognition algorithms as a predictor is relatively new, the practice of using outer appearance to predict characteristics, __physiognomy__, dates back to the 18th century. [1] Physiognomy has been used in the past to promote racial bigotry, block immigration, justify slavery, and permit genocide. While physiognomy has been disproven, the pseudoscience seems to be on the rise with the increased use of facial recognition. The issue with physiognomy lies in the belief that physical features are good indicators of complex human behavior. This simplistic belief is problematic in that it skips several levels of abstraction, ignoring the … role of learning and environmental factors in human development. [2] I don’t believe predicting criminality in a vacuum is harmful, though given the history of physiognomy, the practice seems regressive.

The use of facial features as an identifier for criminality is inherently biased, as it means accepting the assumption that individuals with certain facial features are more likely to commit crimes. With the knowledge that bias exists within our criminal justice system, it is irresponsible to recommend the use of criminal justice data to predict criminality. The implication of an algorithm being able to predict criminality is frightening, as it could be used to further unjust actions.

Open Ended Thought Experiment in Predicting Criminality
What would the world look like if an algorithm had reliable data and were 100% accurate at predicting criminality?
– If a child were born into this world with all of the features that classify them as “likely to commit crime,” should that child be monitored?
– What rights would that child have to their own privacy if the algorithm is certain that the child will be a criminal?
– What does it mean for the future of the child? Should they be denied rights due to this classification?

[1] Arcas, Blaise Aguera y, et al. “Physiognomy’s New Clothes.” _Medium_, Medium, 20 May 2017, https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a.
[2] Bowyer, Kevin W., et al. “The ‘Criminality from Face’ Illusion.” _IEEE Transactions on Technology and Society_, vol. 1, no. 4, 2020, pp. 175–183., https://doi.org/10.1109/tts.2020.3032321.
[3] Crumpler, William. “How Accurate Are Facial Recognition Systems – and Why Does It Matter?” _How Accurate Are Facial Recognition Systems – and Why Does It Matter? | Center for Strategic and International Studies_, 16 Feb. 2022, https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter#:~:text=Facial%20recognition%20has%20improved%20dramatically,Standards%20and%20Technology%20(NIST).
[4] “Hu Facial Recognition Software Predicts Criminality.” _Harrisburg University_, 5 May 2020, https://web.archive.org/web/20200506013352/https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/.
[5] Technology, Coalition for Critical. “Abolish the #TechToPrisonPipeline.” _Medium_, Medium, 21 Sept. 2021, https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16.
[6] Walsh, Kelly, et al. _Estimating the Prevalence of Wrongful Convictions._ Office of Justice Programs, 1 Sept. 2017, https://www.ojp.gov/pdffiles1/nij/grants/251115.pdf.

Considerations for Collecting Social Determinants of Health in Healthcare

By Anonymous | March 9, 2022

I previously worked in a group that built healthcare technology solutions and ran studies to understand their efficacy. One of the studies that I worked on involved capturing Social Determinants of Health (SDOH). In this blog post I will give a brief overview of SDOH within healthcare systems and then think through some of the questions and considerations for collecting SDOH at the point of care.


Figure 1: Overview of Social Determinants of Health
Image source: https://www.kff.org/racial-equity-and-health-policy/issue-brief/beyond-health-care-the-role-of-social-determinants-in-promoting-health-and-health-equity/

SDOH are factors in people’s lives that impact health outcomes and quality of life. These factors include economic stability, physical environment, access to resources, community context, and access to healthcare. (1) Many studies have shown that SDOH have an impact on health outcomes. (2) Motivated by the need to increase health equity, many healthcare systems are starting to collect SDOH. (3)

Increasingly, Electronic Health Records (EHRs) include fields for collecting SDOH data, which means that SDOH data entered into the EHR become part of the patient health record. (4) Healthcare systems are storing, viewing, and in some cases analyzing patient SDOH data. This also means that patient SDOH can be viewed and analyzed in combination with patient medical data.

Some healthcare systems have started collecting SDOH data without a clear plan for how to use them. There have been targeted healthcare programs to help address SDOH, such as Kaiser Permanente’s Healthy Eating Active Living Zones Initiative in California, which have had positive results. (5) But overall there are “inadequate healthcare-based solutions for the core problems such as access to care, poverty and food insecurity”. (4) In addition, even though most clinicians recognize the need for treating patients as a whole, SDOH are not their main area of expertise. (6)

Here are some considerations for the lifecycle of collecting, storing and analyzing SDOH data within healthcare:

Informed Consent: There are no clear plans in place for how SDOH data will be used, which makes it difficult to gather truly informed consent: patients cannot be told what will actually happen with their data.

Data Completeness: Some communities, especially those at higher risk, are more likely to be hesitant to share SDOH with their clinicians. (7) This creates challenges of self selection bias for the data that are collected. It also creates challenges with the analysis and eventual interventions since the data that are collected are likely to be an incomplete view.

Codification: According to the Healthcare Information and Management Systems Society, some SDOH factors have been codified by the International Classification of Diseases (ICD) but others are still not available. (4) In addition there is no standardized method or survey for collecting SDOH from patients. Not only does this put more emphasis on the SDOH that have been codified but it also makes it difficult to understand and share results.
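As a hypothetical sketch of what partial codification looks like in practice, the snippet below maps a few SDOH factors to ICD-10-CM Z codes (the Z55–Z65 range covers social determinants) while other factors have no code at all. The Z codes listed are real categories, but the mapping itself is a simplified illustration, not an authoritative coding table:

```python
# Simplified illustration: a few SDOH factors and their ICD-10-CM Z codes.
# The codes are real categories (Z55-Z65 cover SDOH), but this mapping is
# a sketch, not a complete or authoritative coding reference.
SDOH_Z_CODES = {
    "homelessness": "Z59.0",
    "food insecurity": "Z59.4",
    "problems related to education and literacy": "Z55",
    "problems related to social environment": "Z60",
}

def codify(factor: str):
    """Return the Z code for a factor if it has been codified, else None."""
    return SDOH_Z_CODES.get(factor.strip().lower())

print(codify("Food insecurity"))  # Z59.4
print(codify("digital access"))   # None -- no code in this simplified map
```

The `None` case is the crux of the concern above: factors without a code are harder to record, analyze, and share consistently.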

Storage and Privacy: The Health Insurance Portability and Accountability Act (HIPAA) outlines 18 identifiers that are categorized as Protected Health Information (PHI). (8) HIPAA regulates that PHI data have heightened data security and privacy standards associated with them.

Figure 2: Overview of the 18 identifiers of Protected Health Information
Image source: https://www.iri.com/solutions/data-masking/hipaa/

All healthcare data have increased data security and privacy standards, but PHI has the highest level. One challenge of collecting SDOH data is that they are highly sensitive but do not currently fall within the PHI identifier list, so they do not carry the same level of security and privacy regulation. Both the lack of regulatory clarity and the possibility of someone having access to such a broad dataset about an individual raise concern about the potential for harm if these data were to be leaked.

Actionability: Clinicians will be asked to collect and consider patients’ SDOH as part of the care process, but most clinicians have not been trained in how to incorporate SDOH into a treatment plan. (9) This raises questions about standards of care. It also raises questions about why the data should be collected without a clear plan for use.

Sharing: One of the goals of collecting SDOH data is to improve health outcomes. Some of the potential solutions for improving SDOH are to implement policies and add more resources to communities in need. In order to influence and help implement these solutions, either patient data or analyses of patient data would need to be shared. This raises concerns about whether patients know that their data would be used and shared in this way.

SDOH contribute to approximately 80% of patient conditions and mortality. (14) It’s imperative to address SDOH needs and disparities in healthcare, and to work towards more equitable care. It’s equally important to make sure that we are not introducing new data and privacy challenges that could negatively impact patients.


Beyond Health Care: The Role of Social Determinants in Promoting Health and Health Equity

Predictive Models as a Means to Influence Consumer Behavior

By Erick Martinez | March 9, 2022

The world’s largest and most successful tech companies have built their wealth selling ads and promoting various products and services. A significant portion of that success comes from their ability to market and personalize ads down to the individual level, serving ads that are ever more “relevant” to the consumer. As tech amasses even more granular data and develops increasingly sophisticated models, will its influence become a problem for individual decision making? Should we, or could we, set a practical ethical limit on the improvement of potent stimuli based on deep learning and other predictive analyses relying on big data? I don’t think there’s any present evidence to support the idea that tech companies can direct our every move in some apocalyptic post-modern sci-fi sort of way. I do believe, however, that tech companies have a degree of influence over their users which is at best significantly persuasive and at worst manipulative and coercive.

Due Process
I’d like to borrow a legal framework that applies quite directly to our case. Due process outlines the entitlements allowed to an individual throughout their treatment in various legal settings. The need to expand the rights of individuals with respect to big-data-based systems is echoed in “Big Data and Predictive Reasonable Suspicion”, which concerns the extent to which law enforcement can apply big-data-based systems in order to “know” a suspect [1]. Such systems circumvent the protections afforded to every citizen against unreasonable search and seizure: from predictive models and extensive databases, law enforcement can justify the seizure of a suspect, a far cry from the limited “small” data available to law enforcement in traditional settings [1]. In our consumer context, adequate due process would allow an individual to appeal a specific model’s determination, its data sources, the extent of personalization permitted in the advertisements they receive, and the persuasive methods found to be effective against them. Due process would have been especially useful for Uber drivers, as detailed in “How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons” by the New York Times.

Uber made use of various known behavioral science mechanisms: loss aversion, income targeting, compulsion looping, all informed by the massive amounts of data collected on their drivers [2]. Similar techniques can be seen in social media/entertainment sites such as Google, Facebook, Instagram, etc. The fear of missing out on a particular aspect of social groups is reinforced by the ephemeral posting structures of such platforms and mirrors the loss aversion tactics employed by Uber. Compulsion looping is exemplified via the transition to an endless scroll as well as the timed nature of various push notifications; these mechanisms serve to confine the user in a loop of anticipation, challenge, and reward [3].

Users should be able to see how their actions are being influenced and to what extent they are affected; they should be able to see which features factor into how ads are presented and structured within the platform whenever that structure is informed by data gathered on the individual. The lack of such information was harrowing in the case of Uber drivers. Since drivers are independent contractors, they cannot be compelled to work a specific schedule; however, using insights garnered from driver data, Uber was able to steer drivers to specific locations that were more profitable for Uber but not necessarily more profitable for the driver. A similar argument is made by digital media companies: offering up more of your data helps make ads more relevant to you, a benefit for the consumer, as it’s often framed. However, relevant ads are very profitable for such entities, and users might not be so keen on the ads no matter their relevancy [4].


[1] [Big Data and Predictive Reasonable Suspicion. Andrew Guthrie Ferguson](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2394683)
[2] [How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons. New York Times](https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html)
[3] [The Compulsion Loop Explained. Game Developer](https://www.gamedeveloper.com/business/the-compulsion-loop-explained)
[4] [Experiencing Social Media Without Ads. Zhivko Illeieff](https://medium.com/swlh/experiencing-social-media-without-ads-56576974b40b)

Valuable AI versus AI with values

By Ferdous Alam | March 9, 2022

The landscape
Many academics and researchers posit that the emergence of human-level artificial intelligence will be achieved within the next two decades. In 2009, 21 experts participating in the AGI-09 conference estimated that AGI (artificial general intelligence) will arrive around 2050, and plausibly sooner. In 2012/2013, Vincent C. Muller, the president of the European Association for Cognitive Systems, and Nick Bostrom of the University of Oxford conducted a survey of AI researchers in which 60% responded that AGI is likely to happen before 2040. In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed, resulting in an estimate of a 50% chance that AGI will occur by 2060. In 2019, 32 AI experts participated in a survey on AGI timing, with 45% of respondents predicting a date before 2060. [1]

There is little contention or disagreement about the benefits AI will provide by analyzing data and integrating information at a much faster rate than humanly possible. However, how we utilize the insights and apply them to decision making is not an easy problem to solve. The Brookings Institution notes that “The world is on the cusp of revolutionizing many sectors through artificial intelligence, but the way AI systems are developed need to be better understood due to the major implications these technologies will have for society as a whole”. [2]

Recent surveys showed that the overwhelming majority of Americans (82%) believe that robots and/or AI should be carefully managed. This figure is comparable with survey results from EU respondents. [3] There is, however, a caveat when it comes to aligning the survey results with how we perceive or correlate intelligence with positive traits. Due to what is known as the ‘affect heuristic’ bias, we often rely on our emotions, rather than concrete information, when making decisions. This leads us to overwhelmingly associate intelligence with positive rather than negative traits, or to intuitively conclude that those with more intelligence possess other positive traits to a greater extent. Hence, even though we may show overall concern, there is a possibility that we may fall into the pitfall of miscalculating the possible costs associated with AI/AGI adoption.

Embedding values
S. Matthew Liao “argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.” [4]

In 1942, before the term AI/AGI was coined, science fiction writer Isaac Asimov proposed, in his short story “Runaround”, three laws of robotics that can be seen as a corollary applicable to AI/AGI. The First Law states: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Finally, the Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While this is a novel attempt, embedding virtue or principles/laws from a consequentialist perspective might fall short. It is argued “that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system.” [5]
Another heuristic approach might be to consider the four ethical principles proposed by the EU High-Level Expert Group on AI, which closely resemble the commonly accepted principles of bioethics, excerpted from Beauchamp and Childress (2008): the principles of respect for autonomy, beneficence, nonmaleficence, and justice.

The proposed four principles by this group when it comes to AI [6] are:

I) Respect for human autonomy – AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. This essentially covers the bioethics principles of respect for autonomy and beneficence.

II) Prevention of harm (Principle of nonmaleficence) – AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.

III) Fairness (Principle of justice) – The substantive dimension implies a commitment to ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatization.

IV) Explicability – This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.

The principle of explicability is a completely new addition relative to the bioethics framework, and it has significant implications. According to Luciano Floridi and Josh Cowls, “the addition of the principle of ‘explicability,’ incorporating both the epistemological sense of ‘intelligibility’ (as an answer to the question ‘how does it work?’) and in the ethical sense of ‘accountability’ (as an answer to the question ‘who is responsible for the way it works?’), is the crucial missing piece of the AI ethics jigsaw.” [7]

The tradeoff between the value that AI promises and the values we need to embed within its decision-making process is both intriguing and challenging. Moral values and principles, in terms of systematically investigating what makes acts right or wrong, have been debated for eons. While an optimal, objective value system is unlikely to emerge anytime soon, the perspectives and frameworks proposed here serve as a starting point from which we can examine different viewpoints and strive towards a better solution.

1. Cem Dilmegani, (2022). When will singularity happen? 995 experts’ opinions on AGI. https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
2. Darrell M. West and John R. Allen, (2018). How artificial intelligence is transforming the world. https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/
3. Baobao Zhang and Allan Dafoe (2019) Artificial Intelligence: American Attitudes and Trends https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html
4. S. Matthew Liao (2020). Ethics of Artificial Intelligence https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14
5. Roman Yampolskiy(2019). Unpredictability of AI https://www.researchgate.net/publication/333505954_Unpredictability_of_AI
6. Independent High-Level Expert Group on Artificial Intelligence (2019) https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
7. Luciano Floridi and Josh Cowls, (2019). A Unified Framework of Five Principles for AI in Society https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7

Google Pixel Privacy Issues

By Anonymous | March 9, 2022

I had been a passionate Android and Google Pixel smartphone user for a decade. However, I was very frustrated with Google’s privacy policy because I just could not make sense of it. I did not have a mental model to analyze the privacy of a Pixel device until I started thinking about the ethical, legal, and privacy implications of products and services at the University of California, Berkeley School of Information.

This public advocacy blog is meant for the general public to have a point of view on the potential privacy and legal implications of Google Pixel. There are some fundamental issues [1] at play here. The first issue is that Google allows third-party user tracking companies to access an advertising id for monetization without user consent. The advertising id, referred to as AdId moving forward, is an identifier set uniquely for the personalization of services on the device.

The second issue is that apps running on Android can consume specific functionality of other Android apps circumventing the permissions mechanisms, leading to tighter integration, broader data sharing, and reduced data privacy among Android apps.

The third issue is that ad tracking companies get users’ IP addresses – considered personal data under EU law – and use the addresses for tracking user behavior on the devices without user consent.

The fourth issue is that tracking companies are predominantly based in the US, China and India with very little presence in the EU, leading to violations of EU and UK data protection laws for exchange of personal data beyond the UK/EU without any special safeguards.

The fifth issue is that Android apps leverage third-party tracking and share personally identifiable information from children’s apps, sharing more data than necessary and without an adequate level of data protection. Lastly, Google does not comply with the data protection law that requires companies to disclose their data practices adequately.

Fig 1: Smartphones steal information without your consent

I’ll discuss the above issues based on the principles in the Belmont Report, Nissenbaum’s contextual privacy, and GDPR. The Belmont Report [2] is based on three principles – Respect, Beneficence, and Justice. Respect for persons is defined in terms of the autonomy given to them in decision making and consent. The principle of Respect also includes respect for persons with diminished autonomy, such as children and minorities. The second principle, Beneficence, refers to the notion of maximizing benefits and minimizing harms from privacy violations. Finally, the third principle, Justice, refers to equity for each person according to individual effort, societal contribution, merit, and need. On the other hand, Nissenbaum’s [3] contextual privacy essentially looks for similarities with the offline world in terms of social norms of privacy and expects digital corporations to apply similar contextual privacy to the online worlds of their users. For situations where there is no analogous offline social setting, common-sense privacy around potential hypothetical offline social settings is recommended. GDPR is more commonly known to us as the data privacy principles defined by the EU.

Based on the first Belmont principle, Respect, one could argue that the Google Pixel device does not ask for consent from users when sharing AdId or IP addresses, as discussed in the first and third issues above. The same is true for children, given their diminished decision making/autonomy, as referred to in the fifth issue above: Pixel does not ask children’s parents for consent to track them and provide ads or personalized services. Google Pixel is a novel enough idea that there is perhaps no social precedent for its data practices in an offline social world. The second issue, tighter integration among Android apps, does not bode well for Beneficence. In fact, it does the opposite, amplifying harm through privacy violations.

Fig 2: Apps integrated with other Android apps amplifying data sharing

Now, if we analyze Google Pixel through the lens of Nissenbaum’s contextual privacy, we realize that the context is novel and does not have an existing social standard in a similar offline world. In such a case, we have to ask ourselves: Would it be ok for my neighbor, let alone a stranger or a business, to track what I am reading or saying at home every moment and make notes? Would it be ok for a stranger to track my friends or relatives? Would it be ok for a stranger to track my belief systems, hobbies, interests, or political inclinations? In the current social setting in the USA, the answer is a categorical no: it is downright creepy. Our personal lives are deeply personal. Therefore, applying Nissenbaum’s contextual privacy makes us realize that data sharing without consent is a privacy violation. In fact, GDPR does not allow the transfer of data outside the UK and EU for the sake of tracking. However, most of the tracking companies are located in the USA, China and India and are clearly violating GDPR. Perhaps these tracking companies are getting away with it because they are not large enough, and it is practically impossible for regulators to track so many smaller companies.

Does that mean that Google and the likes should completely disrupt their own business models? As part of Google’s corporate social responsibility and the long-term sustainability of its business model, every professional contributing to the design of the Pixel should be expected to insist on higher internal standards, irrespective of the regulations and laws policing the behavior of their products and services.

1. Proceedings on Privacy Enhancing Technologies. Konrad Kollnig*, Anastasia Shuba, Reuben Binns, Max Van Kleek, and Nigel Shadbolt https://arxiv.org/pdf/2109.13722.pdf
2. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
3. Nissenbaum, Helen F. (2011). A Contextual Approach to Privacy Online. Daedalus 140:4 (Fall 2011), 32-48. https://ssrn.com/abstract=2567042

Amazon’s ADAPT and Its Harm on Workers

By Alice Ye | March 2, 2022

Artificial intelligence is rapidly making its way into the workplace, being incorporated in all aspects of the business. One of the greatest drivers for this widespread adoption is the potential of AI to increase employee productivity. Some companies have received positive feedback on using AI in this way. However, some companies have been criticized for treating workers like robots. Amazon in particular has received a lot of attention for using AI to monitor warehouse worker productivity.

What is ADAPT?

Amazon created an AI system called the Associate Development and Performance Tracker (ADAPT) that monitors each worker’s productivity and automatically fires workers. ADAPT operates based on Amazon’s proprietary metric for measuring the productivity of each associate (Carey 2018). The most Amazon has shared about the metric is that it’s based on customer demand and warehouse location (Lecher 2019).

Amazon warehouse facilities will hold weekly productivity performance reviews using ADAPT data. ADAPT will automatically generate any warnings or termination notices to the employee without input from managers. These notices can be overridden in certain scenarios, like mechanical issues or peak operation levels (e.g. Christmas holidays, Amazon Prime Day, etc.) (Carey 2018).

Amazon states that they are able to provide quality, fast service to customers because the system allows associates to be both detailed and efficient (Carey 2018). However, at what cost? With ADAPT in place, Amazon workers feel that their workday is managed down to the second. The combination of monitoring, automated supervision, and inflexible expectations leaves workers feeling that Amazon views them as robots instead of humans (Tangermann 2019). Many previous employees have shared that mental and physical health issues are common within the facility environments (Zahn 2019).

Why is ADAPT harming Amazon workers?

The two aspects, out of many, that are at the core of Amazon’s unethical AI usage are the degree of monitoring and the imbalance of power between the company and its employees.

ADAPT monitors workers at a level of detail that takes away their autonomy and violates their privacy. Amazon tracks a worker’s Time Off Task (TOT) by capturing gaps in activity, and workers are expected to explain each gap. If the explanation is deemed unreasonable, or the gap too long, a warning is issued. If an unreasonable break lasts 2 hours or longer, termination is issued automatically (Carey 2018). This constant, detailed monitoring tied to heavy penalties strips workers of autonomy. Some workers have felt that they can’t even use their breaks to go to the bathroom because the warehouses are so large (Kantor 2021). This level of surveillance can also be considered a harm to worker privacy (Solove 2006): workers have lost control over their personal time because they are required to justify how they spent every minute of it. Taking away autonomy is not only an ethical problem (Belmont Report 1979) and a violation of privacy; it can also hurt profits. Research shows that loss of employee autonomy leads to distrust and dissatisfaction, which in the long term contributes to high employee turnover, slower business growth, and lower profits (Matthews 2020).
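The reported TOT rules can be sketched as a simple decision procedure. This is a hypothetical reconstruction: the 2-hour automatic-termination threshold comes from the reporting cited above, but the function name and the exact warning logic are illustrative, since Amazon’s actual system is proprietary.

```python
# Hypothetical sketch of the Time Off Task (TOT) rules described above.
# The 2-hour termination threshold is from the cited reporting; the rest
# (names, the precise warning rule) is invented for illustration.

TERMINATION_GAP_MINUTES = 120  # reported automatic-termination threshold

def evaluate_gap(gap_minutes, explanation_accepted):
    """Classify a single gap in scanner activity."""
    if explanation_accepted:
        return "ok"
    if gap_minutes >= TERMINATION_GAP_MINUTES:
        return "termination"
    return "warning"

# A 30-minute gap with a rejected explanation draws a warning; an
# unexplained 2.5-hour gap triggers automatic termination.
print(evaluate_gap(30, explanation_accepted=False))    # warning
print(evaluate_gap(150, explanation_accepted=False))   # termination
```

The point of the sketch is how little room the rule leaves for human judgment: once the explanation is rejected, the outcome is mechanical.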

Figure 1: One of Amazon’s warehouses, JFK8, is the size of 15 football fields. Workers are expected to walk long distances within their short, timed breaks. Chang W. Lee/The New York Times

Another unethical aspect of ADAPT is the imbalance between company profits and employee welfare. This imbalance exists both in the AI system itself and in its implementation. First, looking inside ADAPT shows how it was built to increase profits. Amazon has said that its proprietary productivity metric is based on customer demand and warehouse location (Lecher 2019), both factors that align with business profits rather than employee development. Amazon uses the same productivity metric, with the same expectations, for all employees. No adjustments are made for employees who have special circumstances (e.g. medical issues) or who thrive under different measures (Carey 2018). This further emphasizes that the main purpose of ADAPT is to increase company profits, not to help employees.

Next, looking at how Amazon uses ADAPT shows that exceptions are made to benefit the company, not to accommodate employee needs. Amazon explicitly states that automated terminations can be overridden when warehouses are at peak operation levels, like Amazon Prime Day (Carey 2018). This grants allowances when Amazon needs employees the most, but employees are given no equal opportunity to dispute automated decisions. For example, an employee could have a medical condition that consistently limits their speed in fulfilling orders, yet ADAPT does not allow them to take longer breaks. This inflexibility, paired with intense scrutiny, has been cited as a cause of the prevalent mental health issues amongst workers (Zahn 2019). If Amazon balanced the ADAPT system to distribute benefits between company profits and employee welfare, some of these negative consequences would be mitigated.

Figure 2: Amazon worker expressing his opinion that worker health needs to be taken more seriously by Amazon. Spencer Platt/Getty Images

How could Amazon improve their usage of AI on productivity?

There are a couple of ways Amazon could improve the way it uses AI for worker productivity. First, change the role of AI: instead of measuring how many tasks employees complete, AI can be used to make tasks easier to complete. For example, Amazon currently uses ADAPT to track how quickly workers can count and verify the number of items in an order. Instead, AI could count the items while the worker investigates packages with discrepancies. Worker time would then be spent on more stimulating, valuable tasks. Workers would be less bored and more satisfied with their work, which has been shown to increase productivity and reduce turnover (How 2021). Thus, refocusing AI could still yield higher productivity without harming employees.

Another way is to let employees customize the metrics they are evaluated against. In practice, Amazon could provide a range of performance metrics and have employees select the ones that suit their working style. Employees and managers could then set thresholds and expectations based on personal circumstances. This would address the two unethical aspects of ADAPT discussed earlier: employees would gain some control over how ADAPT tracks their activity, and the balance of power between Amazon and its workers would improve.
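The proposal above can be made concrete with a small sketch. Everything here is hypothetical: the metric names, thresholds, and data shapes are invented to show how an opt-in evaluation plan might work, not how any real Amazon system works.

```python
# Hypothetical sketch of opt-in performance plans: each employee is
# evaluated only on metrics they selected, with thresholds agreed with
# a manager. All metric names and numbers are invented for illustration.

employee_plan = {
    "metrics": ["items_packed_per_hour", "order_accuracy"],
    "thresholds": {"items_packed_per_hour": 60, "order_accuracy": 0.98},
}

def meets_expectations(plan, results):
    """Check only the metrics this employee opted into."""
    return all(results[m] >= plan["thresholds"][m] for m in plan["metrics"])

# A worker who agreed to these two metrics is evaluated on nothing else.
print(meets_expectations(employee_plan,
                         {"items_packed_per_hour": 72,
                          "order_accuracy": 0.99}))  # True
```

The design choice worth noting is that the evaluation loop ranges over the employee’s chosen metrics, not a company-wide list, which is precisely where the rebalancing of power happens.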

Final Thoughts

Many companies look to Amazon for how to be a successful business. Thus, it’s vital that Amazon use AI ethically to drive worker productivity. If Amazon is allowed to continue, other companies will do the same, resulting in a new norm. In fact, other companies have already started following in Amazon’s footsteps, like Cognito monitoring the talking speed of customer service reps (Roose 2019) and Walmart eavesdropping on employee conversations (Woollacott 2020). As Amazon’s ADAPT system gains more awareness, I hope legislation is created to protect all types of workers, including contract workers.

* Carey, Crystal S. (September 4, 2018). “Case No. 05-CA-224856.” Philadelphia, PA: Morgan Lewis. Retrieved February 13, 2022, from https://cdn.vox-cdn.com/uploads/chorus_asset/file/16190209/amazon_terminations_documents.pdf
* Lecher, C. (2019, April 25). How Amazon automatically tracks and fires warehouse workers for ‘productivity.’ The Verge. Retrieved February 13, 2022, from https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations
* Tangermann, V. (2019, April 26). Amazon Used An AI to Automatically Fire Low-Productivity Workers. Futurism. Retrieved February 13, 2022, from https://futurism.com/amazon-ai-fire-workers
* Zahn, M., & Paget, S. (2019, May 9). ‘Colony of Hell’: 911 Calls From Inside Amazon Warehouses. The Daily Beast. Retrieved February 13, 2022, from https://www.thedailybeast.com/amazon-the-shocking-911-calls-from-inside-its-warehouses?ref=scroll
* Kantor, J., Weise, K., & Ashford, G. (2021, December 15). Inside Amazon’s Employment Machine. The New York Times. Retrieved February 13, 2022, from https://www.nytimes.com/interactive/2021/06/15/us/amazon-workers.html
* Solove, Daniel J. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, 154:3 (January 2006), p. 477. https://ssrn.com/abstract=667622
* The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
* Matthews, V. (2020, April 9). Productivity secrets: better leaders and working smarter. Raconteur. Retrieved February 13, 2022, from https://www.raconteur.net/business-strategy/productivity/productivity-secrets/
* Roose, K. (2019, June 24). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times. Retrieved February 13, 2022, from https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html
* Woollacott, E. (2020, February 10). Should you be monitoring your staff with AI? Raconteur. Retrieved February 13, 2022, from https://www.raconteur.net/technology/artificial-intelligence/ai-workplace-surveillance/
* How AI is increasing employee productivity. (2021, September 24). Memory. Retrieved February 13, 2022, from https://memory.ai/timely-blog/ai-increase-employee-productivity

Exploring Bias in AI Recruiting Systems

Exploring Bias in AI Recruiting Systems
By Marque Green | March 2, 2022

Recruiting management systems are widely used today by most employers to automate several aspects of the recruiting process, particularly for medium- and highly skilled roles. In the 1980s and 1990s, digital job postings and resume-scanning software were originally created to make it easier for job seekers to apply for jobs and for employers to manage applications from a larger pool of applicants. Since then, AI recruiting and talent management tools have disrupted and reshaped job seeking and hiring, introducing inefficiency and creating a class of “hidden workers” whose talent remains unrecognized. Automated tools spotlight formal education, employment continuity, and keywords at the expense of any attempt to recognize the humanity of the applicant.


Initially, both job applicants and employers felt these systems were largely beneficial. Hosting job postings online makes applying for jobs more accessible and equitable, as postings are available to the public. Employers also saw an increase in the average number of applications per job posting: in the early 2010s, a job posting yielded about 120 applicants on average; by the end of the 2010s, with the spread of these recruiting systems, postings were yielding about 250 applicants on average. [3] Because of this gain in applicants, companies found these technologies produced larger and more diverse candidate pools. An additional benefit is that these systems allow employers to find candidates with highly specialized skills through keyword matching, which has led to more targeted candidates for highly skilled jobs. Finally, these systems have produced productivity gains within talent acquisition teams.

Unfortunately, recruiting management systems lead to unintended harms for both employers and job applicants. Despite digital job postings attracting more applicants, very few (about 4 to 6 on average) make it past the AI resume-scanning software, which often discriminates via arbitrary selection and evaluation criteria. This software tends to focus on what candidates lack rather than the value and capabilities they would bring to a company. For example, some hospitals only accepted candidates who listed “computer programming” on their CV, when all they needed were workers to enter patient data into a computer [3]. AI resume-scanning software oversimplifies what makes a candidate good or bad and can quickly reject candidates on biased criteria; often candidates are rejected for failing to meet one specific criterion, without consideration of their other qualifications. The shortcomings of these systems are well known. Among executives interviewed for a Harvard study of common recruiting processes, 90% admitted they knew automated recruiting software was mistakenly filtering out viable candidates, but that fixing these problems would require a complete overhaul of the recruiting and hiring process [4].
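The failure mode described above is easy to reproduce with a toy negative filter. This is an illustrative sketch only: real resume-scanning products are proprietary and far more elaborate, and the keyword and resume strings here are invented, but the rigidity is the same.

```python
# Illustrative sketch of a rigid keyword screen, modeled on the hospital
# example above. All names and strings are hypothetical.

REQUIRED_KEYWORDS = {"computer programming"}

def passes_screen(resume_text):
    """Reject any resume missing an exact required keyword phrase."""
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

# A capable data-entry candidate is rejected for lacking the exact phrase,
# even though the role only involves entering patient data.
print(passes_screen("5 years of patient data entry, fast accurate typist"))  # False
print(passes_screen("Hobbyist computer programming in Python"))              # True
```

Note that the filter never asks what the candidate can do; it only checks for the absence of a phrase, which is exactly the “focus on what candidates lack” problem.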

Because recruiting management software rejects and excludes most candidates, a new class of worker has emerged: the “hidden worker.” These are individuals who are able and willing to work but remain locked out by structural problems in the recruiting process, such as requirements for traditional qualifications and continuous employment.

Source: NextAvenue

Hidden workers tend to be caregivers, veterans, immigrants, those with physical disabilities, less-advantaged populations, or those who lack traditional qualifications. Resume-scanning software unfairly penalizes someone who has gaps in their employment or lacks traditional education credentials, without considering the applicant’s other qualifications. Consequently, millions of ready and able workers are unfairly excluded from the job market.

How can we strive to make recruiting management software fairer and more equitable? First, talent acquisition teams and hiring managers should refine job descriptions. Instead of focusing on finding the “perfect candidate”, hiring teams should focus on identifying critical skills and create descriptions that accurately reflect the skills needed to do the job. AI has the potential to enable companies to understand the background of their current employees and determine what variables correlate to their success. Additionally, AI systems should shift from negative to affirmative filters, focusing on the applicant’s strengths instead of highlighting weaknesses. As recruiting management systems continue to grow and evolve, employers have a responsibility to reduce systemic bias and create more equitable recruiting processes. By refocusing away from traditional measures of success toward evaluating desired skills and competencies, the recruiting and hiring process can become more equitable and efficient for job seekers and employers of any sort.

[1] Automate and Manage a Recruitment Management Process for Your Organization. Streebo https://www.streebo.com/recruitment-management-system
[2] Kerry Hannon. What’s Keeping Family Caregivers and the Long-Term Unemployed from Getting Hired. Next Avenue https://www.nextavenue.org/hidden-workers-ai-getting-hired/
[3] James Vincent. Automated hiring software is mistakenly rejecting millions of viable job candidates. Next Avenue https://www.theverge.com/2021/9/6/22659225/automated-hiring-software-rejecting-viable-candidates-harvard-business-school
[4] Fuller, Raman, Sage-Gavin, Hines. Hidden Workers: Untapped Talent. Harvard Business Review https://www.hbs.edu/managing-the-future-of-work/Documents/research/hiddenworkers09032021.pdf

Why Explainable AI Matters

Why Explainable AI Matters
By Severin Perez | March 2, 2022

When justifying a decision or rule for their kids, parents sometimes resort to an old standby: “because I said so.” It’s an unsatisfying, but effective response–especially in light of the uneven power dynamic between parents and children. In essence, many artificial intelligence (AI) systems, and the organizations that deploy them, are now using the same strategy. Why were you denied a bank loan? Because the AI said so. Why did the police arrest you? Because the AI said so. Why are you getting ads for extremist political causes? Because the AI said so.

We don’t accept the “because I said so” argument from judges, police officers, doctors, or executives, and we shouldn’t accept it from AI systems either. In order to verify that an AI is making a valid decision, it should be an explainable AI. If the AI is explainable, then we can ensure that its processes align with our laws, regulations, and social norms. Further, an explainable AI is one that we can interrogate to confirm that no privacy harms are inherent in the system itself, or the data that feeds it.

Explainable AI

“AI and Machine Learning” by [Mike MacKenzie](https://www.flickr.com/photos/152824664@N07/30212411048/) is licensed under CC-BY-2.0.

An explainable AI is one that behaves in a way that most users can understand. When the system produces an output, we should be able to say why and how. In order to be explainable, the AI must also be interpretable, meaning that we have an idea about how the internal technology operates. In other words, the system is not a “black box”, where data goes in and decisions come out, with no clear connection between the two. [1]

Consider an AI used by a bank to decide which customers qualify for a loan. If the AI is explainable, then we should be able to identify which variables, weights, and mechanisms it uses to make decisions. For example, the AI might take the customer’s savings account balance (SAB) and requested loan amount (RLA) as variables and make a decision based on the formula “if RLA is less than 2 * SAB, then approve the loan, otherwise deny the loan.”
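The toy decision rule above translates directly into code. The function and parameter names are mine; the point is that an explainable rule like this can be read, audited, and challenged line by line.

```python
# Direct translation of the toy loan rule above:
# approve when the requested loan amount (RLA) is less than
# twice the savings account balance (SAB).

def approve_loan(savings_balance, requested_amount):
    """Return True when RLA < 2 * SAB, per the toy rule in the text."""
    return requested_amount < 2 * savings_balance

print(approve_loan(savings_balance=10_000, requested_amount=15_000))  # True
print(approve_loan(savings_balance=10_000, requested_amount=25_000))  # False
```

A denied customer could be told exactly which line of this rule rejected them, which is the property a black-box model gives up.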

Of course, the above example is a drastic oversimplification. Realistically, an AI for approving bank loans will consider a vast amount of data about bank customers, put the data through a neural network optimized for accuracy, and output decisions that even the system designers may not fully understand. This raises serious questions about fairness, bias, and transparency.

Potential for Harm

AI systems are now making decisions in a variety of fields that were previously the exclusive purview of expert human decision-makers, including justice, law enforcement, health, and commerce. This is problematic not only because we want such systems to be accurate, but also because they can have baked-in bias that perpetuates patterns of discrimination and inequity–even in cases when the designers and users are not themselves bad actors. [2]

The good news is that explainable AI can provide us with the means to identify the sources of bias in a system. Since an explainable AI system is transparent by nature, we can evaluate what data it is using, and how. This is an important mechanism for identifying privacy harms at the information processing stage of data usage. [3] Further, if we can see what data a system is using, we can ask follow-on questions about where the data came from, whether it’s accurate, and whether we feel it is sensitive and merits special protections. In other words, we can hold the AI and its designers accountable.

Never Trust, Always Verify

“Paying with your face” by [Eliza Galstyan](https://commons.wikimedia.org/wiki/File:Paying-with-your-face.jpg) is licensed under CC-BY-SA-4.0.

As much as we might like to live in a world where we can universally trust governments and corporations to use our data responsibly, that world doesn’t yet exist. Companies like Clearview AI are harvesting photos from social media and using them to feed a facial recognition system that is popular with law enforcement. [4] Similarly, in 2018 we learned that Cambridge Analytica had been improperly acquiring data from Facebook to build voter profiles for use in election campaigns. [5]

If Clearview AI and Cambridge Analytica had followed the principles of explainable AI, either by virtue of social norms or regulatory requirement, then we would have had earlier and more frequent opportunities to raise questions about possible abuse of our data. Not only could we have asked whether they had our consent to use our data, but we could also have evaluated the mechanisms their systems used to make decisions about us. As it stands now though, such companies are unaccountable to their data subjects.

In order to avoid such abuses, one argument is to employ robust privacy policies so that consumers can make informed choices about how and where they generate data. Although this is a worthy goal, it’s not enough on its own. In reality, regulations like the European Union’s General Data Protection Regulation (GDPR) have helped to drive an increase in the length and difficulty of privacy policies, making it even harder for the average consumer to understand them. [6] Explainable AI would provide additional insights into the ways in which systems are using our data, making it easier for us to identify and mitigate potential harms.

Trade Offs

Of course, explainable AI isn’t free–there are trade offs that we must consider, including with overall accuracy and performance. [1] “Black box” AI systems have become popular not because designers prefer opacity, but because they’re more effective than fully transparent systems. Somewhat paradoxically, explainable AI may also introduce new privacy risks by opening additional attack vectors for malicious actors to steal data from a system. [7]

As AI usage spreads to new areas of human life, it’s up to us to decide how such tools should be used. It may be that we don’t need to understand how many AI systems work, because the decisions they make may not be all that sensitive. In other cases though, as in justice and health, the trade-off between explainability and accuracy merits deeper consideration. The choices we make in these cases will have long-lasting implications, not only for our social values, but for how we live our lives.


1. Bundy, A., Crowcroft, J., Ghahramani, Z., Reid, N., Weller, A., McCarthy, N., & Montgomery, J. (2019). Explainable AI: the basics. The Royal Society. https://royalsociety.org/topics-policy/projects/explainable-ai/
2. Hoffman, Anna L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, Volume 22, No. 7, 900-915. https://doi.org/10.1080/1369118X.2019.1573912
3. Solove, Daniel. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477, January 2006, GWU Law School Public Law Research Paper No. 129. https://ssrn.com/abstract=667622
4. Hill, Kashmir. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
5. Confessore, Nicholas. (2018, April 4). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
6. Amos, R., Acar, G., Lucherini, E., Kshirsagar, M., Narayanan, A., & Mayer, J. (2020). Privacy Policies over Time: Curation and Analysis of a Million-Document Dataset. arXiv. https://arxiv.org/abs/2008.09159
7. Zhao, X., Zhang, W., Xiao, X., & Lim, B. (2021). Exploiting Explanations for Model Inversion Attacks. arXiv. https://arxiv.org/abs/2104.12669
8. Kerry, Cameron F. (2020). Protecting privacy in an AI-driven world. Brookings. https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/

A Current National Privacy Regulation in the U.S. – and Upcoming Plans to Improve It

A Current National Privacy Regulation in the U.S. – and Upcoming Plans to Improve It
By Anonymous | March 2, 2022

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) regulates the interactions between healthcare patients and healthcare providers (doctors, hospitals, etc.), and contains an important provision on patient privacy referred to as “The Privacy Rule.” The Department of Health and Human Services describes this rule as follows:

The Rule requires appropriate safeguards to protect the privacy of protected health information and sets limits and conditions on the uses and disclosures that may be made of such information without an individual’s authorization. The Rule also gives individuals rights over their protected health information, including rights to examine and obtain a copy of their health records, to direct a covered entity to transmit to a third party an electronic copy of their protected health information in an electronic health record, and to request corrections.

Rather than follow Europe’s approach of adopting a holistic method of regulating privacy via such an act as the General Data Protection Regulation (GDPR), the United States’ privacy regulatory landscape operates on a patchwork of regulations like the California Consumer Privacy Act (CCPA). In that sense, HIPAA stands as a notable exception to U.S. regulatory norms concerning privacy.

Source: Delphix.com

With the advent of digital health – an area of growth for the U.S. health industry, particularly as a result of the COVID-19 pandemic – at least one HIPAA blind spot has emerged. HIPAA does not currently regulate digital health companies’ ability to collect and utilize consumer health data. But that could change as a result of a new bipartisan Congressional bill, the Health Data Use and Privacy Commission Act. Sponsored by Senators Bill Cassidy (R-LA) and Tammy Baldwin (D-WI), the bill would create a commission to advise both Congress and President Joe Biden on how to modernize current health privacy laws. The commission would focus on “issues relating to the protection of individual privacy and the appropriate balance to be achieved between protecting individual privacy and allowing appropriate uses of personal health information” (Morgan Lewis).

Source: Pexels.com

Improvements to HIPAA could help address patient privacy concerns across a number of dimensions. Using legal scholar Daniel Solove’s Taxonomy of Privacy, it might appear as though privacy issues would emerge for digital health offerings during the collection of patient data. But Eric Wicklund notes in HealthLeaders Media that such offerings “opened the door to new ways that such data can be misused;” in that sense, under the terms of Solove’s framework, information processing and dissemination pose greater risks to patients under the current iteration of HIPAA. It is not difficult to imagine how identification of individuals or breaches of confidentiality could occur for a given health consumer in this landscape.

Source: Pexels.com

Whether this bill will pass Congress is unclear. Ultimately, should it pass, consumers will likely be in a better position to enjoy the technical innovations and tantalizing potential benefits that digital health applications and other modern healthcare ventures have to offer, without having to sacrifice an undue level of personal privacy in return. As Helen Nissenbaum puts it in “A Contextual Approach to Privacy Online,” “contexts, not political economy, should determine constraints on the flow of information.” Because digital health applications are assuming a role similar to that of a doctor in society, they should be regulated under the same principles and penalties that doctors face under HIPAA.

Hipaa explained. HIPAA Journal. (2021, June 14). Retrieved February 27, 2022, from https://www.hipaajournal.com/hipaa-explained/
Nissenbaum, H., A Contextual Approach to Privacy Online (2011). Daedalus 140 (4), Fall 2011: 32-48, Available at SSRN: https://ssrn.com/abstract=2567042
Office for Civil Rights. (2021, December 7). The HIPAA Privacy Rule. HHS.gov. Retrieved February 27, 2022, from https://www.hhs.gov/hipaa/for-professionals/privacy/index.html
Solove, D. A Taxonomy of Privacy. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477, January 2006, GWU Law School Public Law Research Paper No. 129, Available at SSRN: https://ssrn.com/abstract=667622
Swanson, S., & Hirsch, R. (n.d.). New legislation aims to upgrade HIPAA to account for New Healthcare Technologies. New Legislation Aims to Upgrade HIPAA to Account for New Healthcare Technologies – Health Law Scan – 02 | Morgan Lewis. Retrieved February 27, 2022, from https://www.morganlewis.com/blogs/healthlawscan/2022/02/new-legislation-aims-to-upgrade-hipaa-to-account-for-new-healthcare-technologies
Wicklund, E. (n.d.). New bill would update HIPAA to address new technology. HealthLeaders Media. Retrieved February 27, 2022, from https://www.healthleadersmedia.com/technology/new-bill-would-update-hipaa-address-new-technology

Alexa is always listening…

Alexa is always listening…
By Anonymous | March 2, 2022

We have seen a tremendous rise in the use of virtual personal assistants in the last few years. It is now common for families to have smart home devices installed throughout their homes, everything from doorbells to voice-activated lights. This rapidly evolving era of technology has brought with it a new wave of concerns about the privacy and data protection of consumers.

Of course, to provide a personalized experience, virtual personal assistants are collecting and storing some information about how you interact with them.

Let’s focus on Amazon’s Alexa smart home ecosystem, which has come into the spotlight over the years with privacy violations and bugs that have resulted in a loss of privacy for users.

As Shannon Flynn reports, [“Alexa is an always-on device, so it’s constantly listening for the wake word and will record anything that comes after it”](https://www.lifewire.com/can-alexa-record-conversations-5205324). The light on Alexa devices turns blue to indicate when they are recording, but people aren’t necessarily looking for these visual cues. There is an inherent risk of Alexa recording conversations accidentally when it should not. Amazon’s explanation for these instances has always been that Alexa was activated by a word that sounded like its wake word “Alexa” and consequently began recording. In one extreme case, a couple in [Portland had a private conversation recorded and forwarded to one of their contacts. They only found out because the person called them to tell them they had received that message](https://www.theguardian.com/technology/2018/may/24/amazon-alexa-recorded-conversation).
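The general pattern behind an always-on assistant can be sketched in a few lines. This is a hypothetical illustration of the wake-word pattern described above, not Amazon’s actual implementation: audio is buffered locally, and recording of what follows begins only after a (possibly false-positive) wake-word match.

```python
# Minimal sketch of the "always listening" loop described above.
# Hypothetical illustration only -- not Amazon's implementation.

WAKE_WORDS = {"alexa"}

def capture_after_wake(words):
    """Return the words captured after the first wake-word match, if any."""
    for i, w in enumerate(words):
        if w.lower() in WAKE_WORDS:
            return words[i + 1:]  # everything after the trigger is recorded
    return []  # no wake word heard: nothing is kept

print(capture_after_wake(["hey", "alexa", "turn", "off", "the", "lights"]))
```

The accidental-recording failure mode lives entirely in the wake-word match: a sound-alike word that the detector confuses with “Alexa” flips the device into recording mode, and everything that follows is captured and uploaded.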

These accidentally recorded conversations are also uploaded to Amazon’s servers immediately, which poses an even larger risk because sensitive and personally identifiable information is now being transmitted and stored by Amazon. While the driving force behind Alexa is artificial intelligence and machine learning, there is also a [“human quality-control team that reviews user recordings to ensure Alexa’s accuracy” as reported by Robert Earl Wells III](https://www.lifewire.com/is-alexa-safe-to-use-4780145). So in addition to sensitive data being stored on Amazon’s servers, Amazon employees may end up with access to very precise information about specific users. Users can delete this information if it is accidentally recorded, but they first have to be aware that something happened at all, as in the case of the Portland couple.

Amazon has recently introduced new restrictions on these human reviews and given users the option to opt out completely. There are also ways to disable the “always listening” feature individually on each Alexa device, but doing so makes voice activation unusable: users have to manually activate the microphone for Alexa to listen and respond. While this is a safer option in terms of privacy, your speech and interaction data is still sent to Amazon’s servers.

At what point Alexa stops recording after being activated is not immediately clear. The intention behind recording conversations in the first place is so that Alexa can learn more about its users to provide more personalized experiences.

The most you can do to protect your privacy is ensure that recordings are deleted regularly and, at the very least, mute Alexa so that your conversations are not recorded and analyzed by Amazon. We don’t have a clear understanding of exactly what Amazon does with the data beyond using it to personalize experiences for customers, but there is tremendous potential for the company to make assumptions about you based on the voiceprints and potential biometrics it is logging.

You are responsible for protecting your own privacy when it comes to smart devices like Alexa, so take advantage of the mechanisms that Amazon provides to increase your protection instead of sticking to the factory defaults.