Google Pixel Privacy Issues
By Anonymous | March 9, 2022

I have been a passionate Android and Google Pixel smartphone user for a decade. However, I have long been frustrated with Google’s privacy policy because I could not make sense of it. I did not have a mental model for analyzing the privacy of a Pixel device until I started thinking about the ethical, legal, and privacy implications of products and services at the University of California, Berkeley School of Information.

This public advocacy blog is meant to give the general public a point of view on the potential privacy and legal implications of the Google Pixel. There are some fundamental issues at play here [1]. The first issue is that Google allows third-party user-tracking companies to access an advertising ID for monetization without user consent. The advertising ID, referred to as AdId moving forward, is an identifier set uniquely on each device for the personalization of services.

The second issue is that apps running on Android can consume specific functionality of other Android apps, circumventing the permission mechanisms and leading to tighter integration, broader data sharing, and reduced data privacy among Android apps.

The third issue is that ad-tracking companies obtain users’ IP addresses – considered personal data under EU law – and use those addresses to track user behavior on the devices without user consent.

The fourth issue is that tracking companies are predominantly based in the US, China, and India, with very little presence in the EU, leading to violations of EU and UK data protection laws, which prohibit the transfer of personal data beyond the UK/EU without special safeguards.

The fifth issue is that Android apps leverage third-party tracking and share personally identifiable information from children’s apps, sharing more data than necessary and without an adequate level of data protection. Lastly, Google does not comply with data protection law that requires companies to disclose their data practices adequately.


Fig 1: Smartphones steal information without your consent

I’ll discuss the above issues based on the principles in the Belmont Report, Nissenbaum’s contextual privacy, and the GDPR. The Belmont Report [2] is based on three principles: Respect, Beneficence, and Justice. Respect for persons is defined in terms of the autonomy given to them in decision making and consent. The principle of Respect also includes respect for persons with diminished autonomy, such as children and minorities. The second principle, Beneficence, refers to the notion of maximizing benefits and minimizing harms from privacy violations. Finally, the third principle, Justice, refers to equity for each person according to individual effort, societal contribution, merit, and need. Nissenbaum’s contextual privacy [3], on the other hand, looks for analogues in the offline world in terms of social norms of privacy and expects digital corporations to apply similar contextual privacy to the online worlds of their users. For situations where there is no analogous offline social setting, common-sense privacy around a hypothetical offline social setting is recommended. The GDPR is best known to us as the set of data privacy principles defined by the EU.

Based on the first Belmont principle of Respect, one could argue that the Google Pixel does not ask for consent from users when sharing the AdId or IP addresses, as discussed in the first and third issues above. The same is true for children, given their diminished decision-making autonomy, as noted in the fifth issue: the Pixel does not ask children’s parents for consent to track them and serve ads or personalized services. The Google Pixel is also novel enough that there is perhaps no precedent for its data privacy in an offline social world. The second issue, tighter integration among Android apps, does not bode well for Beneficence. In fact, it does the opposite, amplifying harm through privacy violations.


Fig 2: Apps integrated with other Android apps amplifying data sharing

Now, if we analyze the Google Pixel through the lens of Nissenbaum’s contextual privacy, we realize that the context is novel and has no existing social standard in a comparable offline world. In such a case, we have to ask ourselves: Would it be OK for my neighbor, let alone a stranger or a business, to track what I read or say at home every moment and make notes? Would it be OK for a stranger to track my friends or relatives? Would it be OK for a stranger to track my belief systems, hobbies, interests, or political inclinations? In the current social setting in the USA, the answer is a categorical no; such tracking is downright creepy. Our personal lives are deeply personal. Applying Nissenbaum’s contextual privacy therefore makes us realize that data sharing without consent is a privacy violation. In fact, GDPR law does not allow the transfer of personal data outside the UK and EU for the sake of tracking. Yet most of the tracking companies are located in the USA, China, and India and are clearly violating GDPR regulations. Perhaps these tracking companies are getting away with it because they are not large, and it is practically impossible for regulators to track so many smaller companies.

Does that mean that Google and the like should completely disrupt their own business models? As part of Google’s corporate social responsibility, and for the long-term sustainability of its business model, every professional contributing to the design of the Pixel should be expected to insist on higher internal standards, irrespective of the regulations and laws policing the behavior of Google’s products and services.

References:
1. Kollnig, Konrad, Anastasia Shuba, Reuben Binns, Max Van Kleek, and Nigel Shadbolt. Proceedings on Privacy Enhancing Technologies. https://arxiv.org/pdf/2109.13722.pdf
2. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
3. Nissenbaum, Helen F. (2011). A Contextual Approach to Privacy Online. Daedalus 140:4 (Fall 2011), 32-48. https://ssrn.com/abstract=2567042

Amazon’s ADAPT and its Harm on Workers
By Alice Ye | March 2, 2022

Artificial intelligence is rapidly making its way into the workplace and being incorporated into all aspects of business. One of the greatest drivers of this widespread adoption is the potential of AI to increase employee productivity. Some companies have received positive feedback on using AI in this way. However, others have been criticized for treating workers like robots. Amazon in particular has received a lot of attention for using AI to monitor warehouse worker productivity.

What is ADAPT?

Amazon created an AI system called the Associate Development and Performance Tracker (ADAPT) that monitors each worker’s productivity and automatically fires underperforming workers. ADAPT operates based on Amazon’s proprietary productivity metric for measuring the productivity of each associate (Carey 2018). The most Amazon has shared about the metric is that it’s based on customer demand and warehouse location (Lecher 2019).

Amazon warehouse facilities hold weekly productivity performance reviews using ADAPT data. ADAPT automatically generates warnings or termination notices to the employee without input from managers. These notices can be overridden in certain scenarios, such as mechanical issues or peak operation levels (e.g., the Christmas holidays or Amazon Prime Day) (Carey 2018).

Amazon states that it is able to provide fast, quality service to customers because the system allows associates to be both detailed and efficient (Carey 2018). However, at what cost? With ADAPT in place, Amazon workers feel that their workday is managed down to the second. The combination of monitoring, automated supervision, and inflexible expectations leaves workers feeling that Amazon views them as robots instead of humans (Tangermann 2019). Many previous employees have shared that mental and physical health issues are common within the facility environments (Zahn 2019).

Why is ADAPT harming Amazon workers?

Two aspects, out of many, are at the core of Amazon’s unethical AI usage: the degree of monitoring and the power imbalance between the company and its employees.

ADAPT monitors workers to a level of detail that takes away their autonomy and violates their privacy. Amazon tracks a worker’s Time Off Task (TOT) by capturing gaps in activity, and workers are expected to explain each gap. If the explanation is deemed unreasonable, or the gap is too long, a warning is issued. If an unreasonable break is 2 hours or longer, automatic termination is issued (Carey 2018). This constant, detailed monitoring of employees, tied to heavy penalties, is a loss of autonomy. Some workers have felt that they can’t even properly use their breaks to go to the bathroom because the warehouses are so large (Kantor 2021). This level of surveillance can also be considered a harm to worker privacy (Solove 2006). Workers have lost control over their personal time because they are required to justify how they spent every minute of it. Not only is taking away autonomy an ethical problem (Belmont Report 1979) and a violation of privacy, but it can also hurt profits. Research shows that loss of employee autonomy leads to distrust and dissatisfaction, which in the long term contributes to high employee turnover, slower business growth, and lower profits (Matthews 2020).
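To make the rule concrete, here is a minimal sketch of the kind of threshold logic described above. It is illustrative only, not Amazon’s actual ADAPT code: the two-hour termination threshold comes from the filing cited above (Carey 2018), while the function name, the warning path, and the decision flow are assumptions.

```python
from datetime import timedelta

# Illustrative sketch only -- not Amazon's actual ADAPT logic.
# The 2-hour automatic-termination threshold is taken from Carey 2018;
# everything else (names, flow) is an assumption for illustration.
TERMINATION_THRESHOLD = timedelta(hours=2)

def review_time_off_task(gap: timedelta, explanation_accepted: bool) -> str:
    """Return the action the system would take for one activity gap."""
    if explanation_accepted:
        return "no action"
    if gap >= TERMINATION_THRESHOLD:
        return "automatic termination notice"
    return "warning issued"

# Example: a 2.5-hour gap with a rejected explanation triggers termination.
print(review_time_off_task(timedelta(hours=2, minutes=30), False))
```

Even this toy version shows the core problem: the worker’s side of the story enters only as a single accept/reject flag, after which the penalty is automatic.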


Figure 1: One of Amazon’s warehouses, JFK8, is the size of 15 football fields. Workers are expected to walk long distances within their short, timed breaks. Chang W. Lee/The New York Times

Another unethical aspect of ADAPT is the imbalance between company profits and employee welfare. This imbalance exists both in the AI system itself and in its implementation. First, a look inside ADAPT shows how it was built to focus on increasing profits. Amazon has said that its proprietary productivity metric is based on customer demand and warehouse location (Lecher 2019), both factors that align with business profits rather than employee development. Amazon uses the same productivity metric, with the same expectations, for all employees. No adjustments are made for employees who have special circumstances (e.g., medical issues) or who thrive under different measures (Carey 2018). This further emphasizes that the main purpose of ADAPT is to increase company profits, not to help employees.

Next, looking at how Amazon uses ADAPT shows that exceptions are made to benefit the company while remaining inflexible to employee needs. Amazon explicitly states that automated terminations can be overridden when warehouses are at peak operation levels, like Amazon Prime Day (Carey 2018). This gives allowances when Amazon needs employees the most, but employees are given no equal opportunity to dispute the automated decisions. For example, an employee could have a medical condition that consistently limits their speed in fulfilling orders, yet the ADAPT system doesn’t allow the employee to take longer breaks. This inflexibility, paired with intense scrutiny, has been cited as the cause of prevalent mental health issues among workers (Zahn 2019). If Amazon balanced the ADAPT system to distribute benefits between company profits and employee welfare, some of the negative consequences would be mitigated.


Figure 2: Amazon worker expressing his opinion that worker health needs to be taken more seriously by Amazon. Spencer Platt/Getty Images

How could Amazon improve their usage of AI on productivity?

There are a couple of ways Amazon could improve the way it uses AI for worker productivity. First, change the role of AI. Instead of using AI to measure how many tasks employees complete, AI can be used to make tasks easier to complete. For example, Amazon currently uses ADAPT to track how quickly workers can count and verify the number of items in an order. Instead, AI could count the items, and the worker could investigate packages where there are discrepancies. Worker time would then be spent on more stimulating, valuable tasks. Workers would be less bored and more satisfied with their work, which has been shown to increase productivity and reduce turnover rates (How 2021). Thus, focusing AI in a different way could still result in higher productivity without causing harm to employees.

Another way is to allow employees to customize the metrics they are evaluated against. In practice, Amazon could provide a range of performance metrics and have employees select the ones that suit their working style. Employees and managers could set thresholds and expectations based on personal circumstances. This would address the two unethical aspects of ADAPT discussed earlier: employees would gain some control over how ADAPT tracks their activity, and the change would bring a bit more balance of power between Amazon and its workers.

Final Thoughts

Many companies look to Amazon as a model of how to be a successful business. Thus, it’s vital that Amazon uses AI ethically to drive worker productivity. If Amazon is allowed to continue, other companies will do the same, resulting in a new norm. In fact, other companies have already started following in Amazon’s footsteps, like Cogito monitoring its customer service reps’ talking speed (Roose 2019) and Walmart eavesdropping on employee conversations (Woollacott 2020). As awareness of Amazon’s ADAPT system grows, I hope legislation is created to protect all types of workers, even contract workers.

References
* Carey, Crystal S. (September 4, 2018). “Case No. 05-CA-224856.” Philadelphia, PA: Morgan Lewis. Retrieved February 13, 2022, from https://cdn.vox-cdn.com/uploads/chorus_asset/file/16190209/amazon_terminations_documents.pdf
* Lecher, C. (2019, April 25). How Amazon automatically tracks and fires warehouse workers for ‘productivity.’ The Verge. Retrieved February 13, 2022, from https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations
* Tangermann, V. (2019, April 26). Amazon Used An AI to Automatically Fire Low-Productivity Workers. Futurism. Retrieved February 13, 2022, from https://futurism.com/amazon-ai-fire-workers
* Zahn, M., & Paget, S. (2019, May 9). ‘Colony of Hell’: 911 Calls From Inside Amazon Warehouses. The Daily Beast. Retrieved February 13, 2022, from https://www.thedailybeast.com/amazon-the-shocking-911-calls-from-inside-its-warehouses?ref=scroll
* Kantor, J., Weise, K., & Ashford, G. (2021, December 15). Inside Amazon’s Employment Machine. The New York Times. Retrieved February 13, 2022, from https://www.nytimes.com/interactive/2021/06/15/us/amazon-workers.html
* Solove, Daniel J. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, 154:3 (January 2006), p. 477. https://ssrn.com/abstract=667622
* The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
* Matthews, V. (2020, April 9). Productivity secrets: better leaders and working smarter. Raconteur. Retrieved February 13, 2022, from https://www.raconteur.net/business-strategy/productivity/productivity-secrets/
* Roose, K. (2019, June 24). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times. Retrieved February 13, 2022, from https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html
* Woollacott, E. (2020, February 10). Should you be monitoring your staff with AI? Raconteur. Retrieved February 13, 2022, from https://www.raconteur.net/technology/artificial-intelligence/ai-workplace-surveillance/
* How AI is increasing employee productivity. (2021, September 24). Memory. Retrieved February 13, 2022, from https://memory.ai/timely-blog/ai-increase-employee-productivity

Exploring Bias in AI Recruiting Systems
By Marque Green | March 2, 2022

Recruiting management systems are widely used today by most employers to automate several aspects of the recruiting process, particularly for medium- and highly skilled roles. In the 1980s and 1990s, digital job postings and resume-scanning software were originally created to make it easier for job seekers to apply for jobs and for employers to manage applications from a larger pool of applicants. Since then, AI recruiting and talent management tools have disrupted and reshaped job seeking and hiring, resulting in inefficiency and creating a class of “hidden workers” whose talent remains unrecognized. Automated tools spotlight formal education, employment continuity, and keywords at the expense of any attempt to recognize the humanity of the applicant.


Source: Streebo

Initially, both job applicants and employers felt these systems were largely beneficial. Hosting job postings online can make applying for jobs more accessible and equitable, as the postings are available to the public. Employers also saw an increase in the average number of applications per job posting. In the early 2010s, a job posting would yield about 120 applicants on average; by the end of the 2010s, with the spread of these recruiting systems, companies were seeing about 250 applicants per posting on average [3]. Because of this gain in applicants, companies found these technologies were producing larger and more diverse candidate pools. An additional benefit of these systems is that they allow employers to find candidates with highly specialized skills via keyword matching, which has led to more targeted candidates for highly skilled jobs. Finally, these systems have led to productivity gains within talent acquisition teams.

Unfortunately, recruiting management systems do lead to unintended harms for both employers and job applicants. Despite digital job postings attracting more applicants, very few (about 4 to 6 on average) make it past the AI resume-scanning software, which often discriminates via arbitrary selection and evaluation criteria. This software tends to focus on what candidates don’t have rather than the value and capacities they would bring to a company. For example, some hospitals only accepted candidates who listed “computer programming” on their CV, when all they needed were workers to enter patient data into a computer [3]. AI resume-scanning software oversimplifies what makes a candidate good or bad and can quickly reject candidates on biased criteria. Often candidates are rejected for failing to meet one specific criterion, without consideration of their other qualifications, as sketched below. The shortcomings of these systems are well known: among executives interviewed by Harvard for a study on common recruiting processes, 90% admitted they knew automated recruiting software was mistakenly filtering out viable candidates, but fixing these problems would require a complete overhaul of the recruiting and hiring process [4].
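To see how a rigid filter discards otherwise qualified people, consider this minimal sketch of keyword-based screening. It is purely illustrative, not any vendor’s real system; the required phrase mirrors the hospital example above, and the sample resumes are invented.

```python
# Illustrative sketch of rigid keyword screening -- not a real vendor's
# implementation. The required phrase echoes the hospital example above;
# the sample resumes are hypothetical.
REQUIRED_KEYWORDS = {"computer programming"}

def passes_screen(resume_text: str) -> bool:
    """Reject the resume unless every required phrase appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

candidates = {
    "A": "Five years of accurate patient data entry; fast, careful typist.",
    "B": "Hobbyist computer programming; no healthcare experience.",
}

for name, resume in candidates.items():
    print(name, "advances" if passes_screen(resume) else "rejected")
# Candidate A, well suited to data entry, is rejected; candidate B advances.
```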

Because recruiting management software rejects and excludes most candidates, a new class of worker has been created, known as the “hidden worker.” These are individuals who are able and willing to work but remain locked out by structural problems in the recruiting process, such as requirements for traditional qualifications and continuous employment.


Source: Next Avenue

Hidden workers tend to be caregivers, veterans, immigrants, people with physical disabilities, members of less-advantaged populations, or those who lack traditional qualifications. Resume-scanning software unfairly penalizes someone who has gaps in their employment or lacks traditional education credentials, without considering the applicant’s other qualifications. Consequently, millions of ready and able workers are unfairly excluded from the job market.

How can we strive to make recruiting management software fairer and more equitable? First, talent acquisition teams and hiring managers should refine job descriptions. Instead of searching for the “perfect candidate,” hiring teams should identify critical skills and create descriptions that accurately reflect what is needed to do the job. AI has the potential to help companies understand the backgrounds of their current employees and determine which variables correlate with their success. Additionally, AI systems should shift from negative to affirmative filters, focusing on an applicant’s strengths instead of highlighting weaknesses. As recruiting management systems continue to grow and evolve, employers have a responsibility to reduce systemic bias and create more equitable recruiting processes. By refocusing away from traditional measures of success toward evaluating desired skills and competencies, the recruiting and hiring process can become more equitable and efficient for job seekers and employers alike.

References:
[1] Automate and Manage a Recruitment Management Process for Your Organization. Streebo https://www.streebo.com/recruitment-management-system
[2] Kerry Hannon. What’s Keeping Family Caregivers and the Long-Term Unemployed from Getting Hired. Next Avenue https://www.nextavenue.org/hidden-workers-ai-getting-hired/
[3] James Vincent. Automated hiring software is mistakenly rejecting millions of viable job candidates. The Verge https://www.theverge.com/2021/9/6/22659225/automated-hiring-software-rejecting-viable-candidates-harvard-business-school
[4] Fuller, Raman, Sage-Gavin, Hines. Hidden Workers: Untapped Talent. Harvard Business Review https://www.hbs.edu/managing-the-future-of-work/Documents/research/hiddenworkers09032021.pdf

Why Explainable AI Matters
By Severin Perez | March 2, 2022

When justifying a decision or rule to their kids, parents sometimes resort to an old standby: “because I said so.” It’s an unsatisfying but effective response, especially in light of the uneven power dynamic between parents and children. In essence, many artificial intelligence (AI) systems, and the organizations that deploy them, are now using the same strategy. Why were you denied a bank loan? Because the AI said so. Why did the police arrest you? Because the AI said so. Why are you getting ads for extremist political causes? Because the AI said so.

We don’t accept the “because I said so” argument from judges, police officers, doctors, or executives, and we shouldn’t accept it from AI systems either. In order to verify that an AI is making a valid decision, it should be an explainable AI. If the AI is explainable, then we can ensure that its processes align with our laws, regulations, and social norms. Further, an explainable AI is one that we can interrogate to confirm that no privacy harms are inherent in the system itself, or the data that feeds it.

Explainable AI

“AI and Machine Learning” by [Mike MacKenzie](https://www.flickr.com/photos/152824664@N07/30212411048/) is licensed under CC-BY-2.0.

An explainable AI is one that behaves in a way that most users can understand. When the system produces an output, we should be able to say why and how. In order to be explainable, the AI must also be interpretable, meaning that we have an idea about how the internal technology operates. In other words, the system is not a “black box”, where data goes in and decisions come out, with no clear connection between the two. [1]

Consider an AI used by a bank to decide which customers qualify for a loan. If the AI is explainable, then we should be able to identify which variables, weights, and mechanisms it uses to make decisions. For example, the AI might take the customer’s savings account balance (SAB) and requested loan amount (RLA) as variables and make a decision based on the formula “if RLA is less than 2 * SAB, then approve the loan, otherwise deny the loan.”
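Expressed as code, the toy rule above might look like the following sketch. The function name and sample numbers are illustrative; the point is that every decision can be traced back to two inputs and one explicit threshold.

```python
def approve_loan(savings_balance: float, requested_amount: float) -> bool:
    """Toy, fully explainable rule from the example above: approve the loan
    if the requested amount is less than twice the savings balance."""
    return requested_amount < 2 * savings_balance

# Illustrative values only.
print(approve_loan(savings_balance=10_000, requested_amount=15_000))  # True
print(approve_loan(savings_balance=10_000, requested_amount=25_000))  # False
```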

Of course, the above example is a drastic oversimplification. Realistically, an AI for approving bank loans will consider a vast amount of data about bank customers, put the data through a neural network optimized for accuracy, and output decisions that even the system designers may not fully understand. This raises serious questions about fairness, bias, and transparency.

Potential for Harm

AI systems are now making decisions in a variety of fields that were previously the exclusive purview of expert human decision-makers, including justice, law enforcement, health, and commerce. This is problematic not only because we want such systems to be accurate, but also because they can have baked-in bias that perpetuates patterns of discrimination and inequity, even in cases where the designers and users are not themselves bad actors. [2]

The good news is that explainable AI can provide us with the means to identify the sources of bias in a system. Since an explainable AI system is transparent by nature, we can evaluate what data it is using, and how. This is an important mechanism for identifying privacy harms at the information processing stage of data usage. [3] Further, if we can see what data a system is using, we can ask follow-on questions about where the data came from, whether it’s accurate, and whether we feel it is sensitive and merits special protections. In other words, we can hold the AI and its designers accountable.

Never Trust, Always Verify

“Paying with your face” by [Eliza Galstyan](https://commons.wikimedia.org/wiki/File:Paying-with-your-face.jpg) is licensed under CC-BY-SA-4.0.

As much as we might like to live in a world where we can universally trust governments and corporations to use our data responsibly, that world doesn’t yet exist. Companies like Clearview AI are harvesting photos from social media and using them to feed a facial recognition system that is popular with law enforcement. [4] Similarly, in 2018 we learned that Cambridge Analytica had been improperly acquiring data from Facebook to build voter profiles for use in election campaigns. [5]

If Clearview AI and Cambridge Analytica had followed the principles of explainable AI, either by virtue of social norms or regulatory requirement, then we would have had earlier and more frequent opportunities to raise questions about possible abuse of our data. Not only could we have asked whether they had our consent to use our data, but we could also have evaluated the mechanisms their systems used to make decisions about us. As it stands now though, such companies are unaccountable to their data subjects.

In order to avoid such abuses, one argument is to employ robust privacy policies so that consumers can make informed choices about how and where they generate data. Although this is a worthy goal, it’s not enough on its own. In reality, regulations like the European Union’s General Data Protection Regulation (GDPR) have helped to drive an increase in the length and difficulty of privacy policies, making it even harder for the average consumer to understand them. [6] Explainable AI would provide additional insights into the ways in which systems are using our data, making it easier for us to identify and mitigate potential harms.

Trade Offs

Of course, explainable AI isn’t free; there are trade-offs we must consider, including in overall accuracy and performance. [1] “Black box” AI systems have become popular not because designers prefer opacity, but because they’re more effective than fully transparent systems. Somewhat paradoxically, explainable AI may also introduce new privacy risks by opening additional attack vectors for malicious actors to steal data from a system. [7]

As AI usage spreads to new areas of human life, it’s up to us to decide how such tools should be used. It may be that we don’t need to understand how many AI systems work, because the decisions they make may not be all that sensitive. In other cases though, as in justice and health, the trade-off between explainability and accuracy merits deeper consideration. The choices we make in these cases will have long-lasting implications, not only for our social values, but for how we live our lives.

Sources

1. Bundy, A., Crowcroft, J., Ghahramani, Z., Reid, N., Weller, A., McCarthy, N., & Montgomery, J. (2019). Explainable AI: the basics. The Royal Society. https://royalsociety.org/topics-policy/projects/explainable-ai/
2. Hoffman, Anna L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, Volume 22, No. 7, 900-915. https://doi.org/10.1080/1369118X.2019.1573912
3. Solove, Daniel. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477, January 2006, GWU Law School Public Law Research Paper No. 129. https://ssrn.com/abstract=667622
4. Hill, Kashmir. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
5. Confessore, Nicholas. (2018, April 4). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
6. Amos, R., Acar, G., Lucherini, E., Kshirsagar, M., Narayanan, A., & Mayer, J. (2020). Privacy Policies over Time: Curation and Analysis of a Million-Document Dataset. arXiv. https://arxiv.org/abs/2008.09159
7. Zhao, X., Zhang, W., Xiao, X., & Lim, B. (2021). Exploiting Explanations for Model Inversion Attacks. arXiv. https://arxiv.org/abs/2104.12669
8. Kerry, Cameron F. (2020). Protecting privacy in an AI-driven world. Brookings. https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/

A Current National Privacy Regulation in the U.S. – and Upcoming Plans to Improve It
By Anonymous | March 2, 2022

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) regulates the interactions between healthcare patients and healthcare providers (doctors, hospitals, etc.), and contains an important provision on patient privacy referred to as “The Privacy Rule.” The Department of Health and Human Services describes this rule as follows:

The Rule requires appropriate safeguards to protect the privacy of protected health information and sets limits and conditions on the uses and disclosures that may be made of such information without an individual’s authorization. The Rule also gives individuals rights over their protected health information, including rights to examine and obtain a copy of their health records, to direct a covered entity to transmit to a third party an electronic copy of their protected health information in an electronic health record, and to request corrections.

Rather than follow Europe’s approach of adopting a holistic method of regulating privacy via such an act as the General Data Protection Regulation (GDPR), the United States’ privacy regulatory landscape operates on a patchwork of regulations like the California Consumer Privacy Act (CCPA). In that sense, HIPAA stands as a notable exception to U.S. regulatory norms concerning privacy.


Source: Delphix.com

With the advent of digital health – an area of growth for the U.S. health industry, particularly as a result of the COVID-19 pandemic – at least one HIPAA blind spot has emerged. HIPAA does not currently regulate digital health companies’ ability to collect and utilize consumer health data. But that could change as a result of a new bipartisan Congressional bill, the Health Data Use and Privacy Commission Act. Sponsored by Senators Bill Cassidy (R-LA) and Tammy Baldwin (D-WI), the bill would create a commission to advise both Congress and President Joe Biden on how to modernize current health privacy laws. The commission would focus on “issues relating to the protection of individual privacy and the appropriate balance to be achieved between protecting individual privacy and allowing appropriate uses of personal health information” (Morgan Lewis).


Source: Pexels.com

Improvements to HIPAA could help address patient privacy concerns across a number of dimensions. Through the lens of legal scholar Daniel Solove’s Taxonomy of Privacy, it might appear that privacy issues for digital health offerings would emerge during the collection of patient data. But Eric Wicklund of HealthLeaders Media indicates that such offerings have “opened the door to new ways that such data can be misused”; in that sense, under the terms of Solove’s framework, information processing and dissemination pose greater risks to patients under the current iteration of HIPAA. It is not difficult to imagine how identification of individuals or breaches of confidentiality could occur for a given health consumer in this landscape.


Source: Pexels.com

Whether this bill will pass Congress is unclear. Ultimately, should it pass, consumers will likely be in a better position to enjoy the technical innovations and tantalizing potential benefits that digital health applications and other modern healthcare ventures have to offer, without having to sacrifice an undue level of personal privacy in return. As Helen Nissenbaum puts it in “A Contextual Approach to Privacy Online,” “contexts, not political economy, should determine constraints on the flow of information.” Because digital health applications are assuming a role similar to that of a doctor in society, they should be regulated under the same principles and penalties that have applied to doctors under HIPAA.

References
Hipaa explained. HIPAA Journal. (2021, June 14). Retrieved February 27, 2022, from https://www.hipaajournal.com/hipaa-explained/
Nissenbaum, H., A Contextual Approach to Privacy Online (2011). Daedalus 140 (4), Fall 2011: 32-48, Available at SSRN: https://ssrn.com/abstract=2567042
Office for Civil Rights. (2021, December 7). The HIPAA Privacy Rule. HHS.gov. Retrieved February 27, 2022, from https://www.hhs.gov/hipaa/for-professionals/privacy/index.html
Solove, D. A Taxonomy of Privacy. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477, January 2006, GWU Law School Public Law Research Paper No. 129, Available at SSRN: https://ssrn.com/abstract=667622
Swanson, S., & Hirsch, R. (n.d.). New legislation aims to upgrade HIPAA to account for New Healthcare Technologies. New Legislation Aims to Upgrade HIPAA to Account for New Healthcare Technologies – Health Law Scan – 02 | Morgan Lewis. Retrieved February 27, 2022, from https://www.morganlewis.com/blogs/healthlawscan/2022/02/new-legislation-aims-to-upgrade-hipaa-to-account-for-new-healthcare-technologies
Wicklund, E. (n.d.). New bill would update HIPAA to address new technology. HealthLeaders Media. Retrieved February 27, 2022, from https://www.healthleadersmedia.com/technology/new-bill-would-update-hipaa-address-new-technology

Alexa is always listening…
By Anonymous | March 2, 2022

We have seen a tremendous rise in the use of virtual personal assistants in the last few years. It is very common for families to have smart home devices installed throughout their homes, everything from doorbells to voice-activated lights. Coupled with this rapidly evolving era of technology is a new wave of concerns surrounding the privacy and data protection of its consumers.

Of course, to provide a personalized experience, virtual personal assistants are collecting and storing some information about how you interact with them.

Let’s focus on Amazon’s Alexa smart home ecosystem, which has come into the spotlight over the years with privacy violations and bugs that have resulted in a loss of privacy for users.

As Shannon Flynn reports, [“Alexa is an always-on device, so it’s constantly listening for the wake word and will record anything that comes after it”](https://www.lifewire.com/can-alexa-record-conversations-5205324). The light on Alexa devices turns blue to indicate when they are recording, but people aren’t necessarily looking for these visual cues. There is an inherent risk of Alexa recording conversations accidentally when it should not. Amazon’s explanation for these instances has always been that Alexa was activated by a word that sounded like its wake word “Alexa” and consequently began recording. In one extreme case, a couple in [Portland had a private conversation recorded and forwarded to one of their contacts. They only found out because the person called them to tell them they had received that message](https://www.theguardian.com/technology/2018/may/24/amazon-alexa-recorded-conversation).
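To make the mechanism concrete, the sketch below shows a wake-word-gated recording flow of the kind described above. It is hypothetical pseudologic, not Amazon’s implementation: the fuzzy string match and its threshold stand in for a real acoustic model, and they illustrate how a near-match like “Alexis” could start a recording.

```python
# Hypothetical sketch of a wake-word-gated recording flow -- not Amazon's
# actual code. The fuzzy-match threshold is an assumed stand-in for a real
# acoustic model; a looser threshold means more accidental "wakes."
import difflib

WAKE_WORD = "alexa"
SIMILARITY_THRESHOLD = 0.7  # assumed value

def sounds_like_wake_word(heard: str) -> bool:
    ratio = difflib.SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()
    return ratio >= SIMILARITY_THRESHOLD

def handle_utterance(words):
    for i, word in enumerate(words):
        if sounds_like_wake_word(word):
            recording = words[i + 1:]            # everything after the trigger
            return f"uploaded {len(recording)} words to the cloud"
    return None  # nothing matched; nothing is recorded or uploaded

# "Alexis" is close enough to trip this detector and record what follows.
print(handle_utterance(["Alexis", "can", "you", "call", "the", "doctor"]))
```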

These accidentally recorded conversations are also uploaded to Amazon’s servers immediately, which poses an even larger risk because sensitive and personally identifiable information is now being transmitted and stored by Amazon. While the driving force behind Alexa is artificial intelligence and machine learning, there is also a [“human quality-control team that reviews user recordings to ensure Alexa’s accuracy,” as reported by Robert Earl Wells III](https://www.lifewire.com/is-alexa-safe-to-use-4780145). So in addition to sensitive data being stored on Amazon’s servers, Amazon employees may end up with access to very precise information about specific users. Users have a way to delete this information if it is accidentally recorded, but they first have to be aware that something has actually happened, as in the case of the Portland couple.

Amazon has recently introduced new restrictions on these human reviews and given users the option to opt out completely. There are also ways to disable the “always listening” feature individually on each Alexa device, but this makes voice activation unusable: users have to manually activate the microphone for Alexa to listen and respond. While this is a safer option in terms of privacy, your speech and interaction data is still sent to Amazon’s servers.

At what point Alexa stops recording after being activated is not immediately clear. The intention behind recording conversations in the first place is so that Alexa can learn more about its users to provide more personalized experiences.

The most you can do to protect your privacy is to ensure that recordings are deleted regularly and, at the very least, to mute Alexa so that your conversations are not recorded and analyzed by Amazon. We don’t have a clear understanding of exactly what Amazon does with the data beyond using it to further personalize experiences for customers, but there is tremendous potential for the company to make assumptions about you based on the voiceprints and potential biometrics that it is logging.

You are responsible for protecting your own privacy when it comes to smart devices like Alexa, so take advantage of the mechanisms that Amazon provides to increase your protection instead of sticking to the factory defaults.

References:
https://www.forbes.com/sites/tjmccue/2019/04/19/alexa-is-listening-all-the-time-heres-how-to-stop-it/?sh=f9bc6395e2d2
https://www.lifewire.com/is-alexa-safe-to-use-4780145
https://www.lifewire.com/can-alexa-record-conversations-5205324
https://www.theguardian.com/technology/2018/may/24/amazon-alexa-recorded-conversation

Going Beyond Data Literacy
By Hassan Saad | March 2, 2022

Implementing a Data-Driven Approach

During the United States’ occupation of Afghanistan, several data-intensive tools were developed to address problems plaguing the US-Afghan military coalition. Many proved quite effective and promised to bring an increased level of sophistication to systems that had previously relied on somewhat antiquated technology. In August of 2021, however, Taliban forces overtook Afghanistan upon the US military’s exit, and the tools meant to protect its citizens became a potential source of harm instead.

Ghost Soldiers

One of the tools developed is a database called the Afghan Personnel and Pay System (APPS). It hosts information about every member of the Afghan National Army and Afghan National Police, collected on the first day of their enlistment. In an effort to fight the common problem of “ghost soldiers,” the US military helped fund the development of the APPS (it is important to note that it was not directly developed by a US-based organization or subcontractor) to ensure that Afghan military and police salaries were being legitimately paid rather than lining the pockets of corrupt officials.
Curiously, the data is not limited to features that are relevant in the context of a payroll system. Data points such as “favorite fruit” and “uncle’s name” are combined with about 40 other features, including salary, blood type, and address. The inclusion of unnecessary information makes it clear that limiting data collection was not a concern when the APPS was developed. Furthermore, it shows that no consideration was given to the risk of including secondary subjects within the dataset. Both elements highlight the inexperience with which the APPS was developed, despite the extremely sensitive nature of the underlying data.

The APPS is stored on an Afghan-managed database, which made it easier for the Taliban to access the information when they overran Kabul’s government buildings in 2021. To make matters worse, it is not clear whether the payroll system follows any deletion or data retention protocols, which means it could contain records spanning back to the system’s creation in 2016. The Taliban have said that they will not use the data in retaliation against active and former coalition forces; however, many subjects still fear retribution for themselves and for family members whose information also resides in the APPS.

Is Data Literacy Enough?

As the world becomes more reliant on data-driven solutions and strategies, the concept of data literacy has never been more important. But the Afghan Personnel and Pay System is one situation that makes data literacy seem inadequate on its own. The cost savings associated with implementing the APPS were clear, and the power of a data-driven solution was unquestionable. The resulting danger inflicted on the citizens of Afghanistan, however, raises the question of what the Afghan subcontractor could have done differently had the risks associated with a potential data breach been made more apparent upfront.

Currently, only 71% of the world’s nations have data privacy legislation in place, yet there is virtually no hesitation in adopting data-intensive applications regardless of whether official protections exist. As well-intentioned as it may be, there is a high degree of risk in promoting the use of data-dependent tools and leaving them in the hands of those who may have had fewer opportunities to think about the underlying privacy implications. For developed nations, as we collectively march deeper into the information age, there may be a responsibility to educate and protect others against the potential risks inherent in data science practices well before promoting the benefits.

References:

Gregg, Aaron. “U.S. Taxpayers Paid Millions for Afghan Payroll System That Doesn’t Work as Intended, DOD Audit Says.” Washington Post, 23 Aug. 2019, www.washingtonpost.com/business/2019/08/23/us-taxpayers-paid-million-afghan-payroll-system-that-doesnt-work-intended-dod-audit-says.

Guo, Eileen. “This Is the Real Story of the Afghan Biometric Databases Abandoned to the Taliban.” MIT Technology Review, 31 Aug. 2021, www.technologyreview.com/2021/08/30/1033941/afghanistan-biometric-databases-us-military-40-data-points.

Provost, Claire. “Poorer Countries Need Privacy Laws as They Adopt New Technologies.” The Guardian, 15 Oct. 2020, www.theguardian.com/global-development/2013/dec/04/poorer-countries-privacy-laws-new-technology.

“Afghan Troop Numbers Down With Purge of Ghost Soldiers.” The National, 5 July 2021, www.thenationalnews.com/world/asia/afghan-troop-numbers-down-with-purge-of-ghost-soldiers-1.893252.

United Nations Conference on Trade and Development. “Data and Privacy Unprotected in One Third of Countries, Despite Progress.” UNCTAD.Org, unctad.org/news/data-and-privacy-unprotected-one-third-countries-despite-progress.

United Nations Conference on Trade and Development. “Data Protection and Privacy Legislation Worldwide.” UNCTAD.Org, unctad.org/page/data-protection-and-privacy-legislation-worldwide.

Self-Driving Cars or Surveillance on Wheels?
By MaKenzie Muller | March 2, 2022

Through the years, automotive safety has vastly improved with the help of new technology such as back-up cameras, driver-assist functions, automatic lane detection, and self-driving modes. These new features require constant input from their surroundings, including the driver behind the wheel. From dash cams and 360-degree sensors to infrared scans of driver head movement, our cars may be gathering more data on us than we think.

New and Improved Features

In March of 2020, Tesla announced a software update that would begin the use of its driver-facing cameras in the Model Y and Model 3 vehicles. These rear-view mirror cameras had existed in the cars for almost three years without being used. While Elon Musk stated that the cameras were intended to prevent vandalism during Tesla’s taxi program, the release notes asked consumers to allow the camera to capture audio and video in order to “develop safety features and enhancements in the future”. While the software update and enabling the new camera were optional, the tactic of urging drivers to authorize camera use for research and development casts a shadow on how the information may be used for business purposes.

Keeping passengers safe or putting them at risk?

Driver monitoring systems aren’t limited to just one brand. Trusted makers such as Ford and BMW also deliver driver-assist features. In June 2020, Ford announced that its newest Mustang and F-150 trucks would be equipped with hands-free driving technology on pre-mapped North American highways. To further limit distracted driving, Beverly Bower of JD Power writes, “an infrared driver-facing camera will monitor head positioning and eye movement, even if the driver is wearing sunglasses.” Ford delineates the information it collects about drivers in its Connected Vehicle privacy policy; it gathers data about the car’s performance, driving behavior and patterns, audio and visual information, as well as media connections to the car itself. It does not specify how long recordings or other personal information may be stored. The policy specifically recommends that the driver “inform[s] passengers and other drivers of the vehicle that Connected Vehicle Information is being collected and used by us and our service providers.” The company also vaguely states that it retains data for as long as necessary to fulfill its services, essentially allowing it to keep the data as long as it is useful for the business. Suggesting that a Ford owner disclose this data collection to passengers invites a closer look at exactly what information is being gathered and why.

Second-hand cars and second-hand data

On the surface, it appears that companies are following privacy guidelines and requirements, but they do very little to ensure that consumers understand the impact of their decisions. Most of the driver-assist policies reviewed for this article reiterate that use of these features is optional and that driver data often does not leave the vehicle. Vehicle manufacturers elicit consent from buyers in order to use the services, much in the same manner websites and mobile apps do. The policies also include information about how data can be retained locally, for example on a SIM card in the console. To that end, owner-to-owner used car sales introduce a unique potential harm of inadvertently passing personal information on to the next buyer. Ford in particular recommends performing a master reset of the vehicle prior to selling it second-hand. Meanwhile, as cars become more and more advanced, it is becoming increasingly difficult to opt out of the many cutting-edge features. Paying a premium for the latest models only to leave these pricey features unused puts many buyers in a difficult spot.

References
https://www.enisa.europa.eu/news/enisa-news/cybersecurity-challenges-in-the-uptake-of-artificial-intelligence-in-autonomous-driving

https://www.jdpower.com/cars/shopping-guides/what-is-ford-active-drive-assist

Tesla releases new software update with bunch of new features

Optalert Drowsiness & Attentiveness Monitoring

https://news.ucar.edu/132828/favorable-weather-self-driving-vehicles

The Flawed Federal Expansion of Facial Recognition Software
By Amar Chatterjee | March 2, 2022

The past few months at the I.R.S. have been mired in controversy over its partnership with a facial recognition technology company called ID.me. In November 2021, the agency made a major decision requiring all citizens to create an ID.me account in order to access basic online services such as applying for a payment plan or checking the payment status of child tax credits. Citing a desire to improve user experience, the I.R.S. plowed forward with the rollout, clearly not having thought through possible side effects. The arduous 13-step registration process is not for the tech-illiterate, requiring photos of official documentation as well as a video selfie to be uploaded to the company’s servers for identity verification.

For the thousands of citizens who faced hurdles during the registration process due to inadequate technical skills, resources, or a myriad of other possible issues, the only option was to wait on hold for hours to speak with an ID.me “Trusted Referee”. For the average American balancing daily responsibilities, it would be easy to repeatedly abandon or postpone the registration process.

The use of facial recognition software by the government to this degree is also unprecedented, with far too much risk around how the data will be protected in the future. There are no federal regulations in existence today that govern facial recognition technology on a national scale, nor how that data might be shared externally. ID.me’s Privacy Terms do little to quell concerns about data usage and management, and registrants could easily find themselves victims of Big Brother government tactics. There have also been numerous issues with facial recognition inaccuracies that have disproportionately impacted certain communities, especially persons of color.

Finally, after months of horrible press due to the clunky registration process, poor customer service, and cries from civil rights groups to put a stop to the program, the I.R.S. finally walked back its strategy in early February 2022. A rare display of overwhelming bipartisan backlash put the final nail in the coffin, and the I.R.S. has stated they will “transition away” from using ID.me as an authentication service provider (Rappeport and Hill 2022).

So where do we go from here? For starters, let’s cease the use of facial recognition technology as a precursor to accessing essential services. Let’s also insist that our federal agencies think far more critically about these implementations and understand their impacts prior to going live. The sad reality is that the employees in charge of devising and spearheading such programs are rarely in a position to need to use them, hampering their ability to meaningfully consider all perspectives. Additionally, if the federal government is serious about combatting identity theft, it should invest appropriately in a robust government-sponsored program rather than a third-party, for-profit organization. It is worth noting that the $86 million contract awarded to ID.me by the Treasury Department was not its first governmental contract, as it maintains active partnerships with the Social Security Administration, the Department of Veterans Affairs, and many state agencies (Rappeport and Hill 2022). Senate Finance Committee Chair Ron Wyden (D-Oregon) has suggested that the I.R.S. simply leverage Login.gov, an existing authentication system that is already used by millions of Americans for some federal services (Chu 2022).

The jury is still out on how the biometric data of the millions who have already registered for an ID.me account will be managed, or better yet, purged. Earlier this month, the company did publish a statement that “it will let anyone who created an account through the company to delete their selfies starting March 1”, but that process remains to be seen (Picchi and Ivanova 2022). While the I.R.S. has committed to helping users get their data deleted, no further details have been provided on how that will be accomplished. This is an extremely fluid situation, with new information weekly, but hopefully we will see a swift and fair resolution soon.

In 1789, Ben Franklin famously said, “In this world, nothing is certain except death and taxes”. Let’s not add violations of privacy to that list.

References:

1. Chu, K. (2022, February 7). Wyden calls on IRS to end use of facial recognition for online accounts: The United States Senate Committee on Finance. United States Senate Committee On Finance. Retrieved February 18, 2022, from https://www.finance.senate.gov/chairmans-news/wyden-calls-on-irs-to-end-use-of-facial-recognition-for-online-accounts

2. Harwell, D. (2022, January 27). IRS plan to scan your face prompts anger in Congress, confusion among taxpayers. The Washington Post. Retrieved February 18, 2022, from https://www.washingtonpost.com/technology/2022/01/27/irs-face-scans/

3. Joshi, N. (2019, November 9). Six reasons you should be worried about facial recognition. Allerin. Retrieved February 18, 2022, from https://www.allerin.com/blog/six-reasons-you-should-be-worried-about-facial-recognition

4. Krebs, B. (2022, January 19). IRS will soon require selfies for online access. Krebs on Security. Retrieved February 18, 2022, from https://krebsonsecurity.com/2022/01/irs-will-soon-require-selfies-for-online-access/

5. Picchi, A., & Ivanova, I. (2022, February 9). ID.me says users can delete selfies following IRS backlash. CBS News. Retrieved February 18, 2022, from https://www.cbsnews.com/news/irs-id-me-delete-facial-recognition-tax-returns-backlash/

6. Rappeport, A., & Hill, K. (2022, February 7). I.R.S. to end use of facial recognition for identity verification. The New York Times. Retrieved February 8, 2022, from https://www.nytimes.com/2022/02/07/us/politics/irs-idme-facial-recognition.html

7. Roth, E. (2022, January 29). The IRS is reportedly looking for ID.ME alternatives amid privacy concerns. The Verge. Retrieved February 8, 2022, from https://www.theverge.com/2022/1/29/22907853/irs-idme-facial-recognition-alternatives-privacy-concerns

Data Privacy and Shopping
By Joseph Issa | February 23, 2022

Data plays an essential role in our daily lives in the digital age. People shop online and provide personal information such as their email, name, and address. To remain competitive in the data science world, we need to take a hard look at users’ data privacy: for example, training a model on sensitive patient data to predict diabetes while keeping patients anonymous. Online social media websites (Facebook, Twitter, and others) are accustomed to collecting and sharing users’ data. In 2018, the European Union introduced the General Data Protection Regulation (GDPR), a set of regulations to protect the data of European citizens; any online service handling the personal data of people in the EU must comply with it. Among its key provisions, the GDPR requires a data protection officer in companies that have more than 250 employees or that deal with sensitive data. Facebook has faced massive penalties for not complying with the GDPR.


Source: The Statesman

Everything about us, the users, is data: how we think, what we eat, how we dress, and what we own. Data protection laws are not going anywhere, and we will see more of them in the coming years. The key is how to preserve users’ privacy while training models on this sensitive data. For example, Apple has started rolling out privacy techniques in its operating systems that let it collect users’ data anonymously and train models to improve the user experience. Another example is Google, which collects anonymized data in Chrome and in Maps to help predict traffic jams. Numerai, for another, allows data scientists around the world to train their models on encrypted financial data, keeping client data private.
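One well-known family of techniques behind this kind of “anonymous” collection is local differential privacy, often implemented with randomized response. The sketch below is a simplified illustration of the idea, not Apple’s or Google’s production mechanism; the probability parameter and the simulated population are assumptions.

```python
# Simplified randomized-response sketch of local differential privacy.
# Illustration only -- not any company's production telemetry mechanism.
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Report the true bit with probability p_truth, otherwise a coin flip,
    so no single report reveals the user's real answer."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_rate(reports, p_truth: float = 0.75) -> float:
    """Debias the aggregate: observed = p_truth * true + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 users, 30% of whom truly have the sensitive attribute.
truth = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truth]
print(round(estimate_rate(reports), 3))  # close to 0.30 in aggregate
```

The aggregate statistic stays useful while any individual report is deniable, which is the general trade-off these "anonymous collection" schemes aim for.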

There are different techniques for developing prediction models while preserving users’ data privacy. But let’s first look at one of the most notorious examples of the potential of predictive analytics. It’s well known that every time you go shopping, retailers take note of what you buy and when you buy it. Your shopping habits are tracked and analyzed: what time you shop, whether you use digital or paper coupons, whether you buy brand-name or generic products, and so much more. Your data is stored in internal databases, where it is picked apart to find trends between your demographics and your buying habits.

Stores keep data on everything you buy; that is how retailers know which coupons to send to customers, and the shopping cart keeps a record of all the purchases made at a given shop. Target, for example, figured out that a teen was pregnant before her family even knew. Target’s prediction algorithms were able to guess that a shopper was pregnant based on a selection of about 25 items that pregnant women tend to buy, among them vitamins, zinc, magnesium, and extra-large clothing. Based on this data, Target could predict that a woman was pregnant before anyone close to her knew. Target began sending the young woman baby coupons at her home address, where she lived with her parents. Her father asked why his daughter was receiving baby coupons in the mail; it turned out that she was pregnant and had told no one about it. Target’s objective was to become the primary store for future moms, but in doing so it violated the privacy of its customers.
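A purchase-based prediction of this kind can be pictured as a simple weighted score over basket items. The sketch below is purely illustrative: the items echo those mentioned above, but the weights and the threshold are invented, and Target’s actual model is proprietary.

```python
# Illustrative purchase-scoring sketch. Items echo those mentioned above,
# but the weights and threshold are invented; Target's real model is proprietary.
PREGNANCY_SIGNAL_WEIGHTS = {
    "prenatal vitamins": 3.0,      # assumed weights
    "zinc supplement": 1.0,
    "magnesium supplement": 1.0,
    "extra-large clothing": 1.5,
}
SCORE_THRESHOLD = 4.0  # assumed

def pregnancy_score(basket):
    """Sum the weights of any signal items present in the basket."""
    return sum(PREGNANCY_SIGNAL_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["prenatal vitamins", "magnesium supplement", "extra-large clothing"]
score = pregnancy_score(basket)
print(score, "-> targeted with baby coupons" if score >= SCORE_THRESHOLD
      else "-> no action")
```

Even a toy score like this makes the privacy problem plain: a handful of ordinary purchases is enough to trigger a highly sensitive inference about the shopper.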


Source: Youtube.com

The bottom line is that Target wanted to figure out who was pregnant before they looked pregnant, when it is hard to distinguish them from other customers who are not pregnant. The reason is that expectant parents are a potential goldmine: they shop for new items they would not usually buy before the pregnancy. It is terrifying that a company can know what is going on inside your body or your house without you telling it. After the issue was broadcast in various news media, Target decided to shut down the program, including the pregnancy prediction algorithm.

Target could have camouflaged the coupons among other regular coupons, so it would not be obvious to the person receiving the mail that someone in the house was pregnant. For instance, it could include coupons or ads for wine or other grocery items alongside the baby-related coupons, purposely hiding them so that they slip into people’s homes without raising suspicion.

Another online shopping data exposure incident happened with Amazon. A technical error at Amazon accidentally exposed users’ data, including names, emails, addresses, and payment data. The company denied that the incident was a breach or a hack, even though the outcome for users was much the same.

Conclusion

In the digital economy, data is of strategic importance. With online activity spanning social, governmental, economic, and shopping contexts, the flow of personal data is expanding fast, raising issues of data privacy and protection. Legal frameworks that cover data protection, data gathering, and the use of data should be in place to protect users’ personal information and privacy.

Furthermore, companies should be held accountable when they fail to handle users’ data with confidentiality.

References:
Solove, Daniel J. 2006. “A Taxonomy of Privacy.” University of Pennsylvania Law Review 154:477–564. doi:10.2307/40041279
Castells, Manuel. 2010a. The Power of Identity, 2nd ed. Vol. 2, The Information Age: Economy, Society, and Culture. Malden, MA: Wiley-Blackwell.
Castells, Manuel. 2010b. The Rise of the Network Society, 2nd ed. Vol. 1, The Information Age: Economy, Society, and Culture. Malden, MA: Wiley-Blackwell.