Privacy Movements within the Tech Industry

By Jill Rosok | June 24, 2019

An increasing number of people have become fed up with major tech companies and are choosing to divest from companies that violate their ethical standards. There’s been #deleteuber, #deletefacebook and other similar boycotts of big tech companies that violated consumer trust.

In particular, five companies have an outsized influence on the technology industry and the economy in general: Amazon, Apple, Facebook, Google (now a unit of parent company Alphabet), and Microsoft. Among numerous scandals, Facebook insufficiently protected user data, enabling Russian election interference and the Cambridge Analytica controversy. Amazon has been chastised for unsafe working conditions in its warehouses, and Apple for conditions in its suppliers’ factories. Google contracts with the military and collects a massive amount of data on its users. Microsoft has repeatedly been the subject of antitrust suits. Those who have attempted to eliminate the big five tech companies from their lives have found it nearly impossible. It’s one thing to delete your Facebook and Instagram accounts and stop ordering packages from Amazon; actually eliminating all five companies from your life is far more complicated. The vast majority of smartphones run hardware and/or software built by Apple and Google, and Amazon’s services power the backend of a huge number of websites, so stepping away from these companies would essentially mean giving up the internet.

For a limited few, it might be possible to simply log off and never come back, but most people rely on tech companies in some capacity for basic access to work, connection to friends and family, and the internet in general. As the big five acquire more and more services that encompass the entirety of people’s lives, it becomes extremely difficult for an individual to participate in a meaningful boycott of all five companies.

In light of the dominance of these five companies, to what extent is the government responsible for some kind of intervention? And if the government were to intervene, what might that look like? Antitrust legislation is intended to protect consumers from monopoly power. Historically, the government’s focus has been ensuring that companies do not behave in ways that lead consumers to pay higher prices for goods and services. However, this standard offers no protection where no cash changes hands, as in the case of Facebook, a great example of the classic adage: if you’re not paying for the product, you are the product. Nor does it hold up where venture backing or other product lines enable a company to price below cost for years until all other competitors are wiped off the map. Senator and presidential candidate Elizabeth Warren recently proposed breaking up big tech. While her piece was received more as a symbolic statement than as a fully formed plan to regulate the tech industry, aspects of it appear to have resonated strongly with the general public. In particular, the idea that mergers and acquisitions by large companies should face much deeper scrutiny, or perhaps be banned entirely, was well received by analysts.

As with most complex problems in life, there are no easy solutions that simultaneously protect consumers and maximize technological innovation. However, it is vital not to become paralyzed by the scale of the problem. Rather, as individuals, we must remain informed and put pressure on our political leaders to enact meaningful legislation to ensure the tech industry does not violate the basic rights of consumers.

Breast Cancer, Genetic Testing, and Privacy

By Anna Jacobson | June 24, 2019

An estimated 5-10% of breast cancers are believed to be hereditary, meaning that they result directly from a genetic mutation passed on from a parent. The most common known cause of hereditary breast cancer is an inherited mutation in the BRCA1 or BRCA2 gene; about 70% of women with these mutations will develop breast cancer before the age of 80. Identification of these mutations can determine a breast cancer patient’s course of treatment and post-treatment monitoring, inform decisions about if and how she has children, and raise awareness in her family members of their potentially higher risk.

Because of this, newly diagnosed breast cancer patients may be referred for genetic risk evaluation if they meet criteria laid out in the National Comprehensive Cancer Network (NCCN) genetic testing guidelines, including family medical history, tumor pathology, ethnicity, and age. These at-risk patients typically undergo multi-gene panel testing that looks for BRCA1 and BRCA2 mutations, as well as a handful of other less common gene mutations, some of which are associated with inherited risk for other forms of cancer as well as breast cancer.

Genetic testing for breast cancer is a complex issue that raises many concerns. One concern is that not enough patients have access to the testing; some recent studies have shown that the genetic testing guidelines’ criteria are too restrictive, excluding many patients who in fact do carry hereditary gene mutations. Another concern is that the testing is not well understood; for example, patients and even doctors may not be aware that there are many BRCA mutations that are not detected by current tests, including ones that are more common than those that are currently tested. Yet another set of concerns revolves around the value of predictive genetic testing of family members who do not have a positive cancer diagnosis, and whether the benefit of the knowledge of possible risk outweighs the potential harms.

To help a patient navigate this complexity, genetic testing is ideally offered alongside professional genetic expertise for pre- and post-test counseling. However, following a 2013 Supreme Court ruling that declared genes are not patentable, companies like 23andMe now offer direct-to-consumer BRCA testing without professional medical involvement or oversight. And even at its best, genetic counseling comes at a time when breast cancer patients and their caregivers may be least able to comprehend it. They may be suffering from the shock of their recent diagnoses. They may be overwhelmed by the vast amount of information that comes with a newly diagnosed illness. Most of all, they may only be able to focus on the immediate and urgent need to take the steps required to treat their disease. To many, it is impossible to think about anything other than whether the test results are positive, and if they are, what to do.

But to a breast cancer survivor, other concerns about her genetic testing may arise months or years later. One such concern may be about privacy. Genetic testing for breast cancer is not anonymous; as with all medical testing, the patient’s name is on the test order and the results, which then become part of the patient’s medical record. All medical records, including genetic test results, are protected under HIPAA (Health Insurance Portability and Accountability Act of 1996). However, the recent proliferation of health data breaches from cyberattacks and ransomware has given rise to growing awareness that the confidentiality of medical records can be compromised. This in turn leads to fears that exposure of a positive genetic test result — one that suggests increased lifetime cancer risk — could lead to discrimination by employers, insurers, and others.

In the United States, citizens are protected against such discrimination by GINA (Genetic Information Nondiscrimination Act of 2008), which forbids most employers and health insurers from making decisions based on genetic information. However, GINA does not apply to small businesses (with fewer than 15 employees), federal and military health insurance, and other types of insurance, such as life, disability, and long-term care. It also does not address other settings of potential discrimination, such as in housing, social services, education, financial services and lending, elections, and legal disputes. Furthermore, in practice it could be very difficult to prove that discrimination prohibited by GINA took place, particularly in the context of hiring, in which it is not required that an employer give complete or truthful reasons – or sometimes, any reasons at all – to a prospective employee for why they were not hired. And perhaps the greatest weakness of GINA, from the standpoint of a breast cancer survivor, is that it only prohibits discrimination based on genetic information about someone who has not yet been diagnosed with a disease.

Though not protected by GINA, cancer survivors are protected by the Americans with Disabilities Act (ADA), which prohibits discrimination in employment, public services, accommodations, and communications based on a disability. In 1995, the Equal Employment Opportunity Commission (EEOC) issued an interpretation that discrimination based on genetic information relating to illness, disease, or other disorders is prohibited by the ADA. In 2000, the EEOC Commissioner testified before the Senate that the ADA “can be interpreted to prohibit employment discrimination based on genetic information.” However, these EEOC opinions are not legally binding, and whether the ADA protects against genetic discrimination in the workplace has never been tested in court.

Perhaps more than any other health data, and well beyond existing legislative and legal frameworks, genetic data may have implications in the future of which we have no conception today. The field of genomics is rapidly evolving; it is possible that a genetic mutation currently tested because it signals an increased risk for ovarian cancer might in the future be shown to signal something completely different and possibly more sensitive. And unlike many medical tests, which are relevant at the time of the test but decreasingly so over time, genetic test results are eternal, as true on the day of birth as on the day of death. Moreover, an individual’s genetic test results can provide information about their entire family, including family members who never consented to the testing and family members who did not even exist at the time the test was done.

The promise of genetic testing is that it will become a powerful tool for doctors to use in the future for so-called “precision prevention”, as well as personalized, targeted treatment. However, in our eagerness to prevent and cure cancer, we must remember to consider that as the area of our knowledge grows, so too grows its vulnerable perimeter – and so must our defenses against those who might wish to misuse it.


  • “Genetic Testing and Privacy.” 28 Sept. 2016.
  • “Genetic Testing Guidelines for Breast Cancer Need Overhaul.” 24 Aug. 2018.
  • “Genetic Information Privacy.”
  • “Genetic Discrimination.”
  • “NCCN Guidelines Version 3.2019.”
  • “Understanding Genetic Testing for Cancer.”

Maintaining Data Integrity in an Enterprise

By Keith Wertsching | June 21, 2019

Everyone suffers when an enterprise fails to maintain the integrity of its data yet its leaders employ that data to make important decisions. Many roles are involved in mitigating the risk of poor data integrity, which Digital Guardian defines as “the accuracy and consistency (validity) of data over its lifecycle.” But who should be responsible for making sure that the integrity of the data is preserved throughout collection, extraction, and use by the data consumers?
The agent who maintains data accuracy should ideally be someone who:

  • Understands where the data is collected from and how it is collected
  • Understands where and how the data is stored
  • Understands who is accessing the data and how they are accessing it
  • Has the ability to recognize when that data is not accurate and understands the steps required to correct it

Too often, the person responsible for maintaining data integrity is focused primarily on the second bullet point, with a casual understanding of the first and third bullet points. Take this job description for a data integrity analyst from Investopedia:
“The primary responsibility of a data integrity analyst is to manage a company’s computer data by way of monitoring its security…the data integrity analyst tracks records indicating who is accessing what information held by company computer systems at specific times.”

The job description demonstrates that someone working in data integrity should be an expert on where and how the data is stored, and be familiar with who should be accessing that information in order to make sure that company data is not stolen or used inappropriately. But who is ultimately responsible for making sure that the information is accurate in the first place, and for making sure that any changes needed are done in a timely fashion and tracked for future records?

In today’s world of enterprise database administrators, there is often a distinct separation between the person or team that understands how the data is stored and maintained and the person or team that has the ability to recognize when the data is not accurate. Let’s take the example of a configuration management database (CMDB) to highlight the potential issues from separation of data integrity responsibility. SearchDataCenter defines a CMDB as “a database that contains all relevant information about the hardware and software components used in an organization’s IT services and the relationships between those components.” The information stored in the CMDB is important because it allows the entire organization to refer to technical components in the same manner. In a larger organization, the team that is responsible for provisioning hardware and software components will often be responsible for also making sure that any information related to newly provisioned components makes its way into the CMDB. There is often an administrator or set of administrators that will maintain the information in the CMDB. The data will then be consumed by a large number of teams, including IT Support, Project Teams, and Finance.
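
As a concrete illustration of the kind of structure a CMDB holds (the class, field, and component names below are hypothetical, not drawn from any particular CMDB product), each configuration item records what it is, who owns it, and which other components it depends on. That dependency data is exactly what lets different teams speak the same language about impact:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """One hardware or software component tracked in the CMDB."""
    ci_id: str
    name: str
    ci_type: str          # e.g. "server", "database", "application"
    owner_team: str
    depends_on: list = field(default_factory=list)  # ci_ids of upstream components

# A web application that depends on a database, which runs on a server
cmdb = {
    "srv-001": ConfigurationItem("srv-001", "prod-db-host", "server", "Infrastructure"),
    "db-001": ConfigurationItem("db-001", "orders-db", "database", "DBA",
                                depends_on=["srv-001"]),
    "app-001": ConfigurationItem("app-001", "storefront", "application", "Web Team",
                                 depends_on=["db-001"]),
}

def impacted_by(ci_id, cmdb):
    """Return the ids of components that directly or indirectly depend on ci_id."""
    impacted = set()
    for item in cmdb.values():
        if ci_id in item.depends_on:
            impacted.add(item.ci_id)
            impacted |= impacted_by(item.ci_id, cmdb)
    return impacted
```

Here `impacted_by("srv-001", cmdb)` walks the dependency chain and reports that both the database and the storefront application would be affected by an outage of that server, which is the sort of question IT Support, Project Teams, and Finance all need answered consistently.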

When the data is incomplete or inaccurate, the teams consuming it lose the ability to speak the same language regarding IT components. The Finance Team may allocate dollars based on the number of components or the breakdown of component types; without adequate information, it may fail to allocate the right budget for project teams to complete their work on time. A mismatched understanding of enterprise components may also delay assistance from the IT Support organization, which can push out timelines and delay projects.

One potential solution to this issue: make one team responsible for maintaining the accuracy of the data from collection to consumption. As mentioned before, this team needs to understand where the data comes from, how it is stored, and how it is consumed, and must be able to recognize when the data is not accurate and know the steps required to correct it. The data integrity team must be accessible to the rest of the organization to correct data accuracy problems when they arise. As the team grows and matures, it should develop proactive measures to test that data is accurate and complete, solving data integrity issues before they impact users. By assigning specific ownership over the entire data lifecycle to one team, the organization can enforce accountability, preserve integrity, and mitigate the risk that leaders make poor decisions based on false information.
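
Such proactive measures often take the form of automated validation rules run against the data before consumers see it. A minimal sketch, with field names and rules invented for illustration (a real CMDB would check far more):

```python
def validate_records(records, required_fields, valid_types):
    """Return a list of (record_index, problem) pairs describing integrity issues."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty
        for f in required_fields:
            if not rec.get(f):
                problems.append((i, f"missing field: {f}"))
        # Validity: the component type must come from the agreed vocabulary
        if rec.get("ci_type") not in valid_types:
            problems.append((i, f"unknown type: {rec.get('ci_type')}"))
        # Consistency: ids must be unique across the dataset
        if rec.get("ci_id") in seen_ids:
            problems.append((i, f"duplicate id: {rec['ci_id']}"))
        seen_ids.add(rec.get("ci_id"))
    return problems

records = [
    {"ci_id": "srv-001", "name": "prod-db-host", "ci_type": "server"},
    {"ci_id": "srv-001", "name": "", "ci_type": "toaster"},  # duplicate id, empty name, bad type
]
issues = validate_records(records,
                          required_fields=["ci_id", "name"],
                          valid_types={"server", "database"})
```

Run on a schedule, a check like this surfaces bad records to the data integrity team before a consumer such as Finance builds a budget on top of them.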


[1] Digital Guardian:
[2] Investopedia:
[3] SearchDataCenter:

Using Social Media to Screen Job Candidates: Ethical and Future Implications

By Anonymous | June 24, 2019


Hiring qualified people is hard. Most of the time, a hiring manager’s decision is built on a one-page resume, a biased reference or two (sometimes none), and a few hours of interviews with the candidate on their best behavior.

It’s no surprise that around 70% of employers have admitted to snooping on personal social media sites as a method for screening candidates [1]. Since hiring someone who isn’t the right fit can be expensive, it’s only natural for companies to turn to Facebook, Twitter, Instagram, or other social media sites to get a deeper glimpse into the personality they’re hiring. Unfortunately, the ethical implications mean there’s a lot that can go wrong for all parties involved.

What could go wrong?

Using social media to screen candidates doesn’t just weed out people who are vocal online about their criminal or illegal behavior; it can also lead hiring managers to screen out perfectly qualified candidates.

Recently, CIPD (an employee advocate group based in London) wrote a comprehensive pre-employment guide for organizations to follow, including a section on using social media for job screening [2]. It outlined the risks of this practice, including a case study about a company that decided not to hire a transgender candidate even after indicating, prior to the social media check, that the individual was suitable for the job. This was considered an act of direct discrimination based on a protected characteristic, brought on by the company’s use of social media to get more information on the candidate.

It doesn’t stop there. For some people, it’s common sense that employers review social media profiles, and they know how to keep their private thoughts secured. However, not everybody is a social media expert, and deciphering exactly what is and isn’t private can be unwieldy. Many people are not aware that they are consenting to disclose posts from 5+ years ago to potential employers. When companies don’t directly disclose that all content from personal social media sites is subject to review, this could be considered a breach of privacy for individuals who are unaware.

The Future of Social Media Screening

Manually reading through social media sites for potential issues with a candidate is time-consuming. Why can’t someone just create an algorithm that parses through social media content when it’s available and labels attributes of your employees for you?


With the massive influx of artificial intelligence being leveraged within the job-hunting industry, it’s surprising that this isn’t already an industry norm. However, there are a myriad of potential ethical concerns around creating algorithms to do this.

It’s entirely possible for job candidates to fall victim to algorithmic bias and be categorized as something they’re not because of an imperfect algorithm. If someone new to social media undergoes a screening like this, the result may find no positive traits at all, and the company may reject the candidate based solely on the algorithm’s decision.
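
To see how that failure mode arises, consider a deliberately naive keyword-based scorer (entirely hypothetical; no real screening product is implied). A candidate with no posts produces no positive signal at all, which a careless pipeline could read as a low score rather than as missing data:

```python
POSITIVE = {"volunteer", "award", "mentor"}
NEGATIVE = {"fight", "arrest"}

def screen(posts):
    """Score a candidate's posts: +1 per positive keyword, -1 per negative keyword."""
    score = 0
    for post in posts:
        words = post.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    return score

# An active poster generates positive signal...
active = screen(["Proud to volunteer as a mentor this year"])
# ...but someone with no social media presence scores zero, indistinguishable
# from a candidate whose negative posts exactly offset their positive ones.
absent = screen([])
```

If the hiring pipeline simply ranks candidates by this score, the candidate who is new to social media is penalized for absence of data, not for anything they did, which is precisely the bias the paragraph above describes.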

Between the start-ups that continue to sprout up to mine data for valuable insights on individuals and the “Social Credit Score” going live in China in 2020 [3], it’s hard to discount the possibility that algorithmic social media screenings scoring how “hirable” a candidate is will become prevalent. Because of this, all aspects of the hiring process should continually be subject to ethical laws and frameworks that protect job candidates from unfair discrimination.