AI technology in my doctor’s office: Have I given consent?


By Anonymous | October 28, 2022

Should patients have the right to know what AI technologies are being used to provide their health care and what their potential risks are? Should they be able to opt out of these technologies without being denied health care services? These questions will become increasingly critical over the next decade as policy makers begin regulating this technology.

Have you ever wondered how doctors and nurses spend less and less time with you in the examination room these days yet still make a diagnosis faster than ever before? You can thank the proliferation of AI technology in the healthcare industry. To clarify, we are not referring to the Alexa or Google Home device your doctor may use to set their reminders. We mean the wide array of AI tools and technologies that power every aspect of the healthcare system, be it diagnosis and treatment, routine admin tasks, or insurance claims processing.

Image Source: Adobe Stock Image

As explained by Davenport et al. in their Future Healthcare Journal publication, AI technologies in healthcare have the potential to bring tremendous benefits to the healthcare community and patients over the next decade. They can not only automate several tasks that previously required the intervention of trained medical staff but also significantly reduce costs. The technology clearly has enormous potential and can be extremely beneficial to patients. But what are its risks? Will it be equally fair to all patients, or will it benefit some groups over others? In other words, is the AI technology unbiased and fair?

Even though full-scale adoption of AI in healthcare is expected to be 5-10 years away, there is already ample documented evidence of a lack of fairness in healthcare AI. According to this Positively Aware article, a 2019 study showed that an algorithm used by hospitals and insurers to manage 200 million people in the US was less likely to refer Black patients for extra care than white patients. Similarly, a Harvard School of Public Health article on algorithmic bias describes how the Framingham Heart Study performed much better for Caucasian patients than for Black patients when calculating cardiovascular risk scores.

Unsurprisingly, there is very little regulation or oversight of these AI tools in healthcare. A recently published ACLU article on how algorithmic decision making in healthcare can deepen racial bias states that unlike medical devices, which are regulated by the FDA, “algorithmic decision-making tools used in clinical, administrative, and public health settings — such as those that predict risk of mortality, likelihood of readmission, and in-home care needs — are not required to be reviewed or regulated by the FDA or any regulatory body.”


Image Source: Adobe Stock Image

While we trust that AI researchers and the healthcare community will continue to reduce algorithmic bias and strive to make healthcare AI fair for all, the question we ask ourselves today is: should patients have the right to consent to, contest, or opt out of AI technologies in their medical care? Given how little regulation and oversight these technologies face, we strongly believe the right way forward is to create policies that treat the use of AI in healthcare the same way as a surgical procedure or drug treatment. Educating patients about the risks of these technologies and obtaining their informed consent is critical. Healthcare providers should clearly explain to patients what AI technologies are being used as part of their care and what their potential risks and drawbacks are, and seek informed consent from patients to opt in. Specific data on algorithmic accuracy across demographic groups should also be made available to patients so that they can clearly assess the risk applicable to them. If patients are not comfortable with these technologies, they should have the right to opt out and still be entitled to an alternate course of treatment when possible.


Credit Scoring Apartheid in Post-Apartheid South Africa


By Anonymous | October 28, 2022

More than 20 years after the end of apartheid in South Africa, access to credit is still skewed by race toward the country's white community. Apartheid was a policy of segregation and discrimination under which the ruling white minority enjoyed greater access to resources at the expense of the Black majority in South Africa prior to 1994. Nonwhites included Black, Indian, and coloured South Africans, with Black Africans forming the majority. Black South Africans were excluded from certain educational institutions, neighborhoods, and types of education. As a result, Africans earned less than a third of what white South Africans earned.

Financial institutions in South Africa now use Big Data to develop credit models, or algorithms, to make credit decisions. It is argued that the use of Big Data and related credit models eliminates human bias in these decisions. This matters in South Africa because such decisions are more likely to be made by a white person, so any system that purports to eliminate human bias will be viewed favorably by Africans. Yet the South African credit market shows that Africans are either denied mortgages or receive unfavorable terms such as higher interest rates or lower mortgage amounts. The same is true for businesses owned by Black South Africans, which have access to fewer and smaller loans.

Access to credit enables investment in human capital and businesses and has the potential to reduce inequality in society and drive economic growth. To support growth for all, South Africa's Bill of Rights enshrines the constitutional goals of achieving human dignity, equality for all races, and freedom. Several laws have been passed to give effect to these constitutional goals and to enforce equality and prevent discrimination, such as the Employment Equity Act and the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA). These laws do not, however, apply to the data used in credit scoring, leaving a gap in the regulation of Big Data credit decisioning.

These credit scoring models are built using either conventional statistical analysis or machine learning. The models are essentially based on historical credit data to determine the creditworthiness of individuals, covering aspects like the amount of credit that can be extended to a person, the period for which a loan can be granted, the interest rate payable, and the type and amount of collateral required. The models place significant value on traits in the available Big Data universe using weights. These traits include factors like education, level and source of income, employment type (formal or informal), and assets owned, but all of this rests on a data set made up predominantly of white people, given the history of the country. By implication, the modelled traits resemble the way of life of white people and how they have structurally benefited from apartheid. Credit models used for all races will therefore largely be a byproduct of apartheid.
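To make the weighting concrete, here is a minimal, hypothetical scorecard sketch. The trait names, weights, and base score are all invented for illustration and are not taken from any real lender's model; they simply show how a score built as a weighted sum of traits fitted to historical data rewards exactly the things (property ownership, formal credit history, tertiary education) that apartheid systematically denied to Black South Africans:

```python
# Toy scorecard: credit score as a weighted sum of applicant traits.
# Weights are hypothetical; real lenders fit them to historical repayment
# data, so traits common in the historical (mostly white) applicant pool
# end up carrying the most weight.

WEIGHTS = {
    "years_formal_employment": 8.0,
    "has_credit_history": 120.0,
    "owns_property": 90.0,
    "tertiary_education": 60.0,
}
BASE_SCORE = 300.0

def credit_score(applicant: dict) -> float:
    """Score = base + sum of (weight * trait value), missing traits count as 0."""
    return BASE_SCORE + sum(w * applicant.get(trait, 0) for trait, w in WEIGHTS.items())

# Two applicants with identical work histories and repayment behavior:
historically_advantaged = {"years_formal_employment": 10, "has_credit_history": 1,
                           "owns_property": 1, "tertiary_education": 1}
historically_excluded = {"years_formal_employment": 10, "has_credit_history": 0,
                         "owns_property": 0, "tertiary_education": 0}

print(credit_score(historically_advantaged))  # 650.0
print(credit_score(historically_excluded))    # 380.0
```

The gap between the two scores comes entirely from traits whose historical distribution was shaped by apartheid-era exclusion, not from any difference in the applicants' own conduct.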

The table below illustrates the change in labour force participation by race since 1993, the year before apartheid was formally abolished. There is a significant increase in labour force participation for Africans (Blacks) from 1993 to 2008.

The table below shows average income by race, and it is clear that Africans continued to receive less than a third of the income received by white people from 1993 to 2008, even though their incomes increased.

The racial implications of credit apartheid are observed in several areas of everyday life. White South Africans constitute less than 10% of the population but own more than 72% of the available land and 37% of residential property. 69% of white people have internet access, compared to just 48% of Black people. 53% of entrepreneurs are white, while only 38% are African. Black South Africans own only 28% of available listed shares.

To reduce this bias, part of the solution is to build credit scoring models based on data from digital platforms and applications that reflect all consumers' behavior. This can be done by combining data from multiple sources, like airtime usage, mobile money usage, geolocation, bill payment history, and social media usage. This could help eliminate Big Data bias by race, as it would result in new models that are more inclusive. The models could also apply different weights to different population groups and income-level categories, improving the fairness of credit scoring. It is also an opportunity for regulators to look closely at these models to reduce the racial bias that perpetuates apartheid and, by implication, racial inequality.

Such changes would ensure that credit scoring models are not based only on data from credit bureaus, which mainly covers white South Africans who have always had access to credit.


I’m a Black South African raised in a low-income home. At the beginning of my career, I was awarded only a 50% mortgage because I had no credit history and probably because of my racial classification.




In a Digital Playground, Who Has Access To Our Children?

By Anonymous | October 27, 2022

On one particularly lazy Saturday afternoon, I heard my then-8-year-old daughter call, “Hey dad! Someone’s talking to me on my iPad. Can I talk back?” Taking a look, I was curious to see that an unknown user’s avatar was initiating a freeform text chat in a popular educational app. I didn’t realize it even had a chat feature. My dad-spidey-sense kicked in and I asked my daughter what she thought. She replied, “I don’t know this person, they’re a stranger, I’ll block them.”


Most parents these days have a good general idea of how to moderate “pull content” (where users choose content to retrieve) for their kids as they use net-connected tablets at home, but for me this situation represented a different kind of concern: how apps may allow the public at large to communicate live and directly, at any time, with our kids at home without our knowledge, whether through text-based chat or, more concerning, video or audio.

Solove’s Privacy Taxonomy

In 2006, privacy law professor Daniel Solove published a taxonomy to help illustrate harms from privacy violations, modeled across a complete lifecycle flowing from information collection, to processing and dissemination, to invasion. My concern lies in “intrusion” within the invasion stage, which he describes as “invasive acts that disturb one’s tranquility or solitude”.


This intrusion makes children, already vulnerable due to lack of maturity and autonomy, even more vulnerable to a myriad of both online and physical harms, such as scams, phishing attacks, child predation, kidnapping, cyberbullying, and identity theft.

Fourth Amendment Rights

The Fourth Amendment constitutionally protects “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures”. The framers of the constitution, of course, didn’t have the Internet in mind in the 18th century; however, a landmark 1967 case, Katz v. United States, extended the interpretation of the Fourth Amendment to locations where a person would have a “reasonable expectation of privacy”. I would argue that the right to defend one’s household, and the family within, from such intrusion should apply to the digital realm as well.

Cleveland State University’s Online Test Practices

A recent example of the Fourth Amendment being applied in the digital realm involves a ruling by a federal judge against Cleveland State University. While administering a remote online test to student Aaron Ogletree, the university required Aaron to provide test proctors with live webcam footage of a wide scanning view of his bedroom. The judge ruled that the university violated Aaron’s constitutional right against unreasonable search and seizure due to Aaron’s lack of choice in the matter. Similarly, younger kids subjected to invasive app access by default may not know how to prevent this access.

Children’s Online Privacy Protection Act (COPPA)

The COPPA rule became effective in 2000, in order to further protect the online privacy of children under the age of 13 by requiring verifiable parental consent (VPC) before any collection of personally identifiable information (PII) can take place. Enforced primarily by the FTC, this legislation is important in providing parents with a basic standard of protection and a workflow to proactively opt in and control the online experience. Despite the best efforts of app developers to follow COPPA and build safe communities, anyone could still be on the other end of unsolicited communication if bad actors falsify sign-up information or if accounts get hacked.

Internet Crimes Against Children Task Force

In 1998, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) created the Internet Crimes Against Children Task Force Program (ICAC), in part due to “heightened online activity by predators seeking unsupervised contact with potential underage victims”. Their website shares, “Since 1998, ICAC Task Forces have reviewed more than 1,452,040 reports of online child exploitation, resulting in the arrest of more than 123,790 suspects.”


Fast forward to today: my younger child is now 6 and has asked for the same educational app. I’ve done enough due diligence to be comfortable installing it for him, but only because the app gives us the option to fully disable the “push chat” feature until he’s old enough to use it safely. If he’s like his older sister, he’ll outgrow the educational content well before then.




Is Immigration and Customs Enforcement above the law?


By Anonymous | October 28, 2022

ICE’s surveillance of the American population has flown under the radar and been kept secret.

Immigration and Customs Enforcement, or ICE, is a government agency that might know more about you than you think. When the topic of government surveillance comes up, ICE might not be the first agency that comes to mind for the general population, but it is the first that comes to mind in the undocumented immigrant community. ICE may have access to more information on the population than any other government agency.

ICE was founded in 2003, and since then it has gathered more and more information on the population living in the United States in order to drive deportations. This gathering of information has amounted to mass surveillance. ICE has accessed data sets containing information on the vast majority of the US population. These records come from DMVs in states that allow undocumented immigrants to hold a driver's license. ICE has also accessed records from agencies that provide essential services like electricity, gas, and water, and it has been able to access and purchase these data sets seemingly without any other government or public oversight. Another key source is information gathered from unaccompanied minors trying to reach the US; the information provided by these children is then used to drive deportations of their families.

Immigration and Customs Enforcement officer

This is not only a legal matter but also an ethical issue. There are privacy laws in place that should protect not only citizens but everyone living in the US, yet ICE seems to operate above those laws while gathering information on the entire population. There does not seem to be any enforcement holding ICE, or the agencies providing it data, accountable for the data they have accessed. How can these laws be enforced when ICE is able to conduct all of its surveillance in secret? States, and especially government agencies, should have the data they store strictly regulated. These are agencies the population trusts to keep their data safe, and this data is being shared without consent or notification.

ICE has access to driver's license data on roughly 75% of the US population. This means that ICE's surveillance is not limited to undocumented immigrants but also extends to American citizens. ICE is also able to locate 75% of adults in the US based on utility records alone, and it has been using facial recognition technology since as early as 2008, when it accessed the Rhode Island DMV database to run facial recognition on those records.

ICE takes advantage of the basic necessities that a person living in the US needs in order to access data on the US population, with deportation as the leading purpose for that data. The state governments that allow undocumented immigrants to obtain a driver's license should also have a responsibility to protect the information of the individuals who have entrusted it to them. Access to basic necessities like utilities should not come at the cost of having your data shared with government agencies without your knowledge or consent.

ICE, and the data ICE has access to, should be more strictly regulated to comply with state and federal privacy laws. While this is already a concern in the undocumented community, it should be a greater concern for the entire population, because ICE has been able to access information on everyone, not just undocumented immigrants.


Nina Wang, Allison McDonald, Daniel Bateyko & Emily Tucker, American Dragnet: Data-Driven Deportation in the 21st Century, Center on Privacy & Technology at Georgetown Law (2022).

My Browser is a Psychic


By Savita Chari | October 28, 2022

The other day I accompanied a friend to the local drug store. While she was shopping, I was browsing. I was tempted to buy my favorite chocolate, but the calorie count made me put it back. The next day, as I opened the Facebook app, an ad for that very brand of chocolate popped up with a coupon. Was it just a coincidence, or can Facebook read my mind?

Then, one day I started seeing email deals and ads promoting cruise ship adventures. I was dumbfounded because the previous night at the dinner table my family had been discussing our next vacation. Is my home bugged? How did my browser know that I wanted a cruise vacation? We had not yet started searching on the internet. This left me very uncomfortable.

Like all smart people, I decided to check whether others had had similar experiences. I whipped out my phone and asked my trusted ‘Hey Google’ to search. Then it struck me: if Google can listen to my question, can it also listen to my other conversations? Could it be that, without my knowledge, all my conversations were being tracked by my Android devices? It turns out I am not the only one experiencing this. Forums such as Reddit and Quora have many people describing their own experiences of seeing ads based on private in-person or online conversations. Google vehemently denies that it eavesdrops and states that, according to its content policy for app developers, apps cannot listen unless specifically allowed to do so and in active use.

As reported by the BBC, two cybersecurity experts, Ken Munro and David Lodge of Pen Test Partners, performed an experiment to see if it was physically possible for an Android phone to listen to conversation when not in use. They used APIs provided by the Android platform to write a small prototype app with the mic enabled. Lo and behold, it started recording entire conversations within the vicinity of the phone, even though the experimental app was not in use. There was no spike in data usage because wi-fi was enabled, and the battery did not drain enough to raise any alarms.

A Bloomberg report cited people with inside knowledge that Facebook and Google hired many contractors to transcribe audio clips of their users.
An article by USA Today reported an investigation by the Dutch broadcaster VRT, which showed how Google smart home devices eavesdrop on and record private conversations happening in our homes. Google contractors listen to these audio recordings and use them to enhance the company's offerings (aka serve targeted ads).
Yes, our homes are bugged!

Serving ads online can only go so far to entice us. Companies know that to increase revenue, ads should be served at the moment we are shopping. This is made possible by the advent of Bluetooth beacons. These small Bluetooth devices are strategically placed in various locations in a store or an enclosed mall and communicate with our smartphones. Because they use Bluetooth, they know our precise location and can serve us ads or promotional deals in real time. No wonder all stores these days encourage us to download their apps. The beacons communicate with the app and send information such as what we are browsing and where we are lingering. Armed with this knowledge, the apps send us notifications about promotions on those items, encouraging us to buy them while we are in the store and more likely to spend extra.

Even if we don't have the store's app installed, there is a stealth industry of third-party location-marketing companies that collect data based on where we pause in an aisle to examine a product, which signals our possible interest. These companies take their beacon-tracking code and bundle it into a toolkit used by other app developers, who are paid in cash or in kind (free reports) to combine these toolkits with their apps. The idea is to reach as many mobile phones as possible. Mostly these third-party companies are very small; they go bankrupt at the drop of a lawsuit, and another one crops up in their place. The collected data is sold to big consumers like Facebook and Google, who already have a plethora of data on us from various sources. In their privacy statements, they mention these “third-party” partners who help them serve targeted ads and promotions.

Picture Credit: NYTimes
Now I know how I got the ad for that chocolate.
So much for clairvoyance!

Games are Dangerous! – Robinhood and Youth Gambling


By John Mah | October 28, 2022

Disclaimer: This post mentions an instance of suicide.

Learning to invest and understanding the stock market are fundamental to wealth building. Unfortunately, only 58% of Americans reported owning stock in 2022 [1]. This is not surprising, considering how terrifying it can seem to enter a stock market world of indecipherable indexes, mind-boggling charts, and confusing financial jargon. With gamification, however, all these barriers to entry are thrown out the window thanks to the addictive psychological mechanism of variable rewards [2]. Teenagers and young adults are especially susceptible: they are entering the financial world without proper risk control and developing an addiction, akin to gambling, to excessive trading for quick profits. Robinhood, a commission-free trading platform, is leading the charge in this abuse and making a conscious decision to disregard the well-being of the youth for its own gain.


What is Gamification?

Gamification refers to the development of application features that make the user experience more visually appealing and intuitive. For Robinhood, the purpose is to make stock trading more fun for the average consumer: it is like playing a video game, where fun visual graphics reward every action you take [3]. These techniques are dangerous because they go beyond reinforcing a “fun” action to take; they create addictive habits in users.

So why are technology companies like Robinhood using these techniques? It’s simple – to retain users on their platform, which leads to increased revenue and opportunities to procure data. These techniques ultimately tie into the “hooked model” [4], a pattern that lets companies embed their applications as habits in users’ daily lives rather than as tools. The hooked model isn’t just apparent in Robinhood; it is deeply embedded in many of today’s most popular applications, like Instagram and Uber.


So Why Target the Youth?

Gamification techniques are particularly effective on the youth because they are more familiar with game-like interfaces. As such, it’s no surprise that roughly 40% of Robinhood’s user base is between the ages of 19 and 34; in fact, the median age of its users is 31 [3]. It’s important to understand that Robinhood is a broker that makes money from trades executed on its platform. In essence, the more trades done on the platform, the more money it makes. Robinhood’s use of gamification is now very clear. Teenagers and young adults have little to no experience with investing. When you combine this lack of knowledge with the extremely high risk tolerance that comes from youthful brashness and years of playing video games, you get the perfect targets for brokers. Robinhood is not in the business of education; it violates the principle of justice specified in the Belmont Report by exploiting the youth for profit.


Inadequate Disclaimers and Disclosures

The financial sector is without a doubt extremely regulated, and much of that regulation concerns the information and disclosures providers must make transparent to customers. Unfortunately, in an application as heavily gamified as Robinhood, these disclaimers and disclosures are extremely easy to overlook. The game-like designs leave little room for warnings about the risks users take when placing trades. Users essentially lose their ability to make rational decisions, and we can argue their ability to “consent” sensibly is taken away. There are clear dangers to this. One example is the case of Alexander Kearns, a 20-year-old from Naperville, Illinois, who committed suicide after mistakenly believing he had lost $730,000 [5]. His lack of knowledge of how options worked made him believe he had lost the money and, in the end, he took his own life. It’s not farfetched to say this will not be a rare occurrence in this age of technology.


Addiction is a real problem. As companies continue to develop new applications, it’s only logical that they will want to rope in as many users as they can, and gamified features fit perfectly. The youth are the biggest victims, as they grew up with technology. So the next time you open your favorite application, try to see if you’ve been hooked into a gamified model.



[1] Dhawan, S. (2022, August 11). Financialexpress. The Financial Express Stories. Retrieved October 28, 2022, from

[2] Eyal, N. (2022, August 25). Variable rewards: Want to hook users? Drive them crazy. Nir and Far. Retrieved October 28, 2022, from

[3] Gallo, N. (2022, June 29). Robinhood and the gamification of investing. FinMasters. Retrieved October 28, 2022, from

[4] Eyal, N. (2022, September 11). The hooked model: How to manufacture desire in 4 steps. Nir and Far. Retrieved October 28, 2022, from

[5] CNBC. (2021, February 9). Robinhood sued by family of 20-year-old trader who killed himself after believing he racked up huge losses. CNBC. Retrieved October 28, 2022, from



Data Privacy in the Metaverse: Real Threats in a Virtual World

By Alexa Coughlin | October 24, 2022

The metaverse promises to revolutionize the way we interact with each other and the world – but at what cost?

Imagine having the ability to connect, work, play, learn, and even shop all from the comfort of your own home. Now imagine doing it all in 3D with a giant pair of goggles strapped to your face. Welcome to the metaverse.

Dubbed ‘the successor to the mobile internet’, the metaverse promises to revolutionize the way we engage with the world around us through a network of 3D virtual worlds. While Meta (formerly Facebook) is leading the charge into this wild west of virtual reality, plenty of other companies are along for the ride. Everyone from tech giants like Microsoft and NVIDIA to retailers like Nike and Ralph Lauren is eager to try their hand at navigating cyberspace.

Boundless as the promise of this new technology may seem, there’s no such thing as a free lunch when it comes to an industry where people (and their data) have historically been the product. The metaverse is no exception.

Through the core metaverse technologies of VR and AR headsets, users will provide companies like Meta with access to new categories of personal information that have historically been extremely difficult, if not impossible, to track. Purveyors of this cyberworld will have extremely sensitive biometric data at their fingertips: facial expressions, eye movements, even a person’s gait. Medical conditions, emotions, preferences, mental states, and even subconscious thoughts – it’s all there, all inferences waiting to be extracted and monetized. Meta recently lost $10 billion in ad revenue to changes in Apple’s App Tracking Transparency feature, so it’s not hard to fathom what plans a company like Meta might have in store for this new treasure trove of prized data.

Given the extremely personal and potentially compromising nature of this data, some users have already started thinking about how they might combat this invasion of privacy. Some have chosen to rely on established data privacy concepts like differential privacy (i.e. adding noise to different VR tracking measures) to obfuscate their identities in the metaverse. Others still have turned to physical means of intervention, like privacy shields, to prevent eye movement tracking.
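As a rough illustration of the noise-adding idea behind such defenses (a sketch only: the epsilon value, gaze coordinates, and sensitivity are made up for illustration, and real VR privacy mechanisms are considerably more involved), Laplace noise calibrated to a privacy budget can be added to each tracked coordinate:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_gaze(x: float, y: float, epsilon: float, sensitivity: float = 1.0):
    """Perturb a gaze coordinate pair; smaller epsilon = more privacy, more noise."""
    scale = sensitivity / epsilon
    return (x + laplace_noise(scale), y + laplace_noise(scale))

# A raw gaze point becomes a slightly different point each frame,
# masking the stable patterns that could identify a user.
print(noisy_gaze(0.42, 0.77, epsilon=0.5))
```

The trade-off is the usual one in differential privacy: enough noise to hide the individual signature, but not so much that the tracking data becomes useless to the application.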

Creative as these approaches might be, users should not have to rely on the equivalent of statistical party tricks or glorified sunglasses to protect themselves from exploitation in the metaverse. That said, given the extreme variance in the robustness of data regulations across the world, glorified sunglasses may not in fact be the worst option for some. For example, while most of this biometrically derived data may be classified as ‘special category’ under the broad definitions of the EU’s GDPR, it may not warrant special protection under the narrower definition in Illinois’ BIPA (Biometric Information Privacy Act). And that’s to say nothing of the 44 U.S. states with no active data protection laws at all.

With the metaverse still in its fledgling stages, awaiting mass market adoption, it is crucial that regulators take this opportunity to develop new laws that will protect users from these novel and specific threats to their personal data and safety. Until then, consumer education on data practices in the metaverse remains hugely important. It’s essential that users stay informed about what’s at stake for them in the real world when they enter the virtual one.

Do the financial markets care about data privacy?

By Finnian Meagher | October 24, 2022

While the long-term outperformance of large tech companies may indicate that privacy policy mishaps aren’t priced into stock prices, recent stock performance and relevant research suggest otherwise – companies may be forced by the demands of the market and shareholders to adopt more rigid privacy practices.
The data privacy practices and policies of companies, especially large tech and social media organizations, are often scrutinized by the public, academics, and government regulators. However, as these companies have seen years of significant outperformance over the rest of the market, as exemplified by the below chart of the FAANG ETF (Facebook, Apple, Amazon, Netflix, and Google), it raises the question: do the financial markets care, and if so, what is the impact of negative press about privacy on market sentiment and stock prices?

[Chart: FAANG ETF performance vs. the broader market]

Paul Bischoff of Comparitech finds a relationship between data breaches and falls in company share prices: on average, share prices fall 3.5% and underperform the NASDAQ by 3.5% in the months following a breach. Bischoff observes that “in the long term, breached companies underperformed the market,” with share prices down 8.6% on average one year after a data breach. He also notes that more severe breaches (i.e., leaks of highly sensitive information) see “more immediate drops in share price performance” (Bischoff).
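Bischoff’s underperformance figures are simple event-study arithmetic: the stock’s return over a window minus the index’s return over the same window. A minimal sketch with invented prices (all numbers below are hypothetical, not Bischoff’s data):

```python
# Toy event-study sketch of "underperformance vs. the NASDAQ".
# All prices here are invented for illustration.

def pct_change(prices):
    """Percent change from the first to the last price in the series."""
    return (prices[-1] - prices[0]) / prices[0] * 100

def underperformance(stock_prices, index_prices):
    """Stock's return minus the index's return over the same window."""
    return pct_change(stock_prices) - pct_change(index_prices)

# Hypothetical closing prices: day of breach, then one month later.
stock = [100.0, 96.5]    # stock fell 3.5%
nasdaq = [100.0, 101.0]  # index rose 1.0%

print(round(pct_change(stock), 1))                # -3.5
print(round(underperformance(stock, nasdaq), 1))  # -4.5
```

The same calculation, run over real closing prices for many breached companies, is how averages like Bischoff’s are produced.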

However, not every data breach is punished equally. In one deviation from Bischoff’s observations, Facebook outperformed the NASDAQ after its April 2019 breach exposing the data of over 533 million people:

[Chart: Facebook share price vs. NASDAQ following the breach]

Similarly, Harvard Business Review found that “a good corporate privacy policy can shield firms from the financial harm posed by a data breach… while a flawed policy can exacerbate the problems caused by a breach” (Martin et al.). HBR notes that firms with high data control and transparency are “buffered from stock price damage during data breaches,” but “only 10% of Fortune 500 firms fit this profile” (Martin et al.). The authors also observe that a data breach at a neighboring company in the same industry can depress a firm’s own stock price, though firms with strong control and transparency weather those storms better than others.

More recently, Apple announced that its privacy features could amount to billions of dollars in costs for other firms, and the effects rippled out to companies such as Twitter, Meta, Pinterest, and Snapchat. Meta’s Mark Zuckerberg noted that Apple’s newly introduced privacy features “could cost… $10 billion in lost sales [to Meta] this year” (Conger and Chen), a disclosure that contributed to a 26% drop in the company’s stock. Given that “people can’t really be targeted the way they were before” (Conger and Chen), companies like Meta face a comprehensive rebuild of their business model, which could cause financial distress and, ultimately, falling share prices. While many factors in the broader macroeconomic environment have contributed to this year’s pullback in large tech stocks, the implications of evolving privacy policies and practices have contributed to a sharp decline in the sector:

[Chart: 2022 pullback in large-cap tech share prices]

Looking only at the long-term outperformance and growth of large tech companies’ financials and stock prices might lead one to a cynical, ‘profit over anything’ view, even at the expense of user privacy. But research in the space, and the recent drop in stock prices amid Apple’s privacy changes, shows that these tech giants may not be immune. Taking the other view, since ‘money makes the world go around,’ if markets signal the need for stricter privacy practices and policies, then companies could be pushed toward a more comprehensive adoption of data privacy principles even without regulatory pressure.

Works Cited:

  • Bischoff, Paul. “How Data Breaches Affect Stock Market Share Prices.” Comparitech, 25 June 2021,
  • Conger, Kate, and Brian X. Chen. “A Change by Apple Is Tormenting Internet Companies, Especially Meta.” The New York Times, 3 Feb. 2022,
  • “FAANG Portfolio.” PortfoliosLab,
  • Martin, Kelly D., et al. “Research: A Strong Privacy Policy Can Save Your Company Millions.” Harvard Business Review, 30 Aug. 2021,

Is the New Internet Prone to Old School Hacks?

By Sean Lo | October 20, 2022

The blockchain is commonly heralded as the future of the internet; however, OpenSea’s email phishing incident in June 2022 proved that we may still be far from true online safety. We are in the midst of one of the biggest technological shifts in decades, what many people refer to as Web3.0, with blockchain as the main technology building the future of the internet. Underlying this shift is the idea that the new internet will be decentralized. In other words, the internet should be owned by the collective group of people who actually use it, versus the centralized model we have today.

In June 2022, OpenSea, one of the largest non-fungible token (NFT) marketplaces, was hacked, and 17 of its customers lost their entire NFT collections. The combined value of the stolen assets was reported to be north of $2 million. The question, then, is how the blockchain could possibly be hacked. Wasn’t the security of the blockchain the main feature promised by the new internet? These are two very valid questions, as a flaw in the blockchain would ultimately highlight the risks of using the blockchain at all. What was interesting about this incident was that the hack was a simple phishing scam, a type of scam that has existed since the beginning of email. OpenSea reported that an employee at the email automation and lifecycle marketing platform it uses downloaded its customer email database and used it to send a phishing email. The image below shows the email that was sent to OpenSea customers. Timed around the long-awaited Ethereum merge, the email used the occasion to trick some users into signing a malicious contract.

Opensea phishing email

As mentioned before, social engineering and phishing hacks have always been part of the internet. In fact, the “Nigerian prince” email scam still rakes in roughly $700K a year. What makes this phishing incident so interesting is that it was carried out against a Web3.0-native company, and the funds were stolen directly on the blockchain. By pretending to be OpenSea, the attackers got customers to sign a smart contract that proceeded to drain the signer’s digital wallet. For context, smart contracts are sets of instructions enforced by the blockchain; think of them as a program the network runs automatically. Smart contracts are typically written in a programming language called Solidity, so unless you can read that language, it’s highly likely you aren’t aware of what you are signing.

Fake smart contract message
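The mechanics of such a drain can be sketched in plain Python. This toy (with invented names and assets, and none of the real Solidity or EVM semantics) only illustrates why signing instructions you cannot read is dangerous:

```python
# Toy illustration (NOT real Solidity or the actual OpenSea exploit):
# names, assets, and logic below are invented for demonstration.

class Wallet:
    def __init__(self, nfts):
        self.nfts = list(nfts)

def signed_contract(wallet, attacker_wallet):
    """Advertised to the victim as: 'migrate your listings for the merge'."""
    # What the instructions actually do once signed: transfer everything.
    attacker_wallet.nfts.extend(wallet.nfts)
    wallet.nfts.clear()

victim = Wallet(["NFT #1", "NFT #2"])
attacker = Wallet([])

signed_contract(victim, attacker)  # the victim "signs" without reading
print(victim.nfts)    # []
print(attacker.nfts)  # ['NFT #1', 'NFT #2']
```

The gap between what the contract is advertised to do and what its instructions actually do is the entire scam; on-chain, the only defense is reading (or trusting an audit of) the code before signing.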

As we venture into a Web3.0 world where blockchain underpins many kinds of online transactions, the question arises of how liability and security should be governed. We are still in the early innings of Web3.0 adoption; I believe true mass adoption is still at least half a decade away. On top of all the existing Web2.0 regulations that companies must follow, governments must also step up and create new laws to protect ordinary citizens from malicious online acts. The anonymity of the blockchain poses real risks to the entire ecosystem, which is why I believe federal laws around the technology are needed to push us toward mass adoption. It is really a matter of when rather than if, as uses of the technology are clearly increasing across the entire tech industry.

Anonymity as a Means of Abusing Privacy

By Mima Mirkovic | October 20, 2022

It’s spooky season, and what’s spookier than the Dark Web?

Where’d All the Web Go?
Traditional search engines like Google, Yahoo, and Bing index only the “surface web,” which makes up about 4% of the internet… so where’s the remaining 96%?

At the intersection of anonymity and privacy sits the Dark Web, an elusive section of the internet that is not indexed by web crawlers and is home to roughly 3,000 hidden sites. Accounting for about 6% of the internet, it serves as a secret marketplace notorious for illicit drugs, arms dealing, human trafficking, major fraud, and more.

This brings me to an ethical, head-scratching conundrum that I’ve been mulling over for years: how is any of this legal?

It isn’t, but it is.

When the concept of privacy took shape in the 14th century, no one could have anticipated the internet. The internet’s arrival mutated our common definitions of privacy, but the arrival of the Dark Web obliterated them entirely, because it offered a means through which privacy could be abused: anonymity.

Dark Web Origins

Time for a history lesson!

In the late 1990s, the US Department of Defense developed an encrypted network using “onion routing” to protect sensitive communications between US spies. This network was intended to protect dissidents, whistleblowers, journalists, and advocates for democracy in authoritarian states.

In the early 2000s, a group of computer scientists used onion routing to develop the Tor (“The Onion Router”) Project, a nonprofit software organization whose mission is to “advance human rights and defend your privacy online through free software and open networks”. By simply downloading the Tor browser, anyone – ANYONE – can access the dark web. The Tor browser works to anonymize your location and protect your data from hackers and web trackers.
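The layered design behind the name can be sketched in a few lines. This toy uses XOR in place of real encryption and is only a conceptual model, not Tor’s actual protocol: the sender wraps the message in one layer per relay, and each relay can peel exactly one layer, never seeing the plaintext until the final hop.

```python
# Conceptual sketch of onion routing with a toy XOR "cipher"
# (NOT real cryptography, and not Tor's actual protocol).
import json

def xor(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: str, route):
    """Encrypt the message in layers, innermost (exit) layer first."""
    payload = message.encode()
    for relay, key in reversed(route):
        layer = json.dumps({"relay": relay, "data": payload.hex()})
        payload = xor(layer.encode(), key)
    return payload

def peel(payload: bytes, key: bytes):
    """A relay strips its own layer, revealing only the inner payload."""
    layer = json.loads(xor(payload, key))
    return layer["relay"], bytes.fromhex(layer["data"])

route = [("relay-A", b"key-a"), ("relay-B", b"key-b"), ("relay-C", b"key-c")]
onion = wrap("hello, hidden service", route)
for _, key in route:
    relay, onion = peel(onion, key)

print(onion.decode())  # hello, hidden service
```

Because each relay holds only its own key, no single relay can link the sender to the message, which is precisely the anonymity property the rest of this post wrestles with.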

In short, the Tor browser offers users an unmatched level of security and protects your human right to privacy via anonymity, but not all who lurk in the shadows are saints.

Ethics, Schmethics
Privacy is malleable. Its definition is groundless. As Solove would say, “privacy suffers from an embarrassment of meanings”. Privacy is bound to whichever context it is placed in, which, conjoined with anonymity, invites the opportunity for violation.

Through a critical multi-dimensional analytic lens, privacy suffers from its own internal complexity. In the context of onion routing, the malleable nature of privacy allows for it to be used for harm despite its objectives, justifications, and applications being intended for good:

  • Objective – Provide an encrypted, anonymized network
  • Justification – Privacy is a human right
  • Application – A secure network for individuals to avoid censorship and scrutiny from their authoritarian regimes

From the “good guy” perspective, the Tor Project was created to uphold an entity we value the most. You could even argue that it was an ethical approach to protecting privacy. In fact, the Tor Project upholds the central tenets of The Belmont Report: users are given full autonomy over their own decisions, users are free from obstruction or legal harm, and every user is given access to the same degree of privacy.

On the flip side, the “bad guys” quickly learned that their malicious actions online could be done without trace or consequence. Take these stats for example: 50,000 terrorist groups operate on the dark web, 8.1% of listings on darknet marketplaces are for illicit drugs, and illegal financing takes up around 6.3% of all dark web markets. You can purchase someone’s credit card number for as little as $9 on the dark web – how is any of this respectful, just, or fair?

Think about it this way…

In 2021, a hacker posted 700M LinkedIn records on the dark web, exposing 92% of LinkedIn users. Your data, which you work hard to protect, was probably (if not almost certainly) exposed in that breach. That means your phone number, geolocation, and connected social media accounts were posted for sale by hackers on the dark web. The “bad guys” saw an opportunity to exploit your privacy, my privacy, your friends’ privacy, and your family’s privacy in exchange for a profit, yet their actions were permissible under the guise of privacy and anonymity.
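The 92% figure is a back-of-the-envelope ratio against LinkedIn’s user base at the time; the total below is an assumed round number used only for illustration, not a sourced figure:

```python
# Sanity check on the "92% of LinkedIn users" claim.
# total_users is an assumed round figure for LinkedIn's 2021 user base.
exposed = 700_000_000
total_users = 760_000_000
share = exposed / total_users * 100
print(f"{share:.0f}% of users exposed")  # 92% of users exposed
```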

Let’s look at this example through the lens of the Belmont Report:

  • Respect for Persons – Hacking is clearly detrimental to innocent users of the web, yet it is a repeatable offense and difficult to prevent
  • Beneficence – Hackers don’t weigh the risks that would befall innocent people, only the benefits they stand to gain from exposing these accounts
  • Justice – 700M records were unfairly exposed, the repercussions were not evenly distributed, and there was no appropriate remediation

There are thousands more examples (some much more horrifying) where we could apply these frameworks to show how anonymity enables and promotes the abuse of our human right to privacy. The main takeaway is that no, these actions do not reflect a respect-for-persons approach, they are not just in nature, and they are certainly not fair.

Privacy is a fundamental part of our existence and it deserves to be protected – to an extent. The Tor browser originally presented itself as a morally righteous platform for users to evade censorship, but the dark deeds that occur on darknets nowadays defeat the purpose of privacy entirely. With that in mind, the Belmont Report is a wonderful framework for assessing data protection, but I believe it requires some (major) tweaks to encompass more extreme scenarios.

At the end of the day, your privacy is not nearly as protected as the privacy of criminals on the dark web. Criminals are kept safe because privacy is a human right, yet they are permitted to abuse this privacy in a way that exploits innocent people, harms society, and provides a hub for lawbreaking of the highest degree. At the same time, the law enforcement and government agencies that work to uphold privacy are the same ones breaking this human right in order to catch these “bad guys”. If you ever find yourself scouring through the dark web, proceed with caution, because even in the most private of locations, you’re always being watched!

Like I said earlier – an ethical, head-scratching conundrum that I will continue to mull over for years.


[1] Dark Web and Its Impact in Online Anonymity and Privacy: A Critical Analysis and Review
[2] How Much of the Internet is the Dark Web in 2022?
[3] The Truth About The Dark Web – IMF F&D.
[4] Taking on the Dark Web: Law Enforcement Experts ID Investigative Needs | National Institute of Justice