AI technology in my doctor’s office: Have I given consent?

By Anonymous | October 28, 2022

Should patients have the right to know what AI technologies are being used to provide their health care and what their potential risks are? Should they be able to opt out of these technologies without being denied health care services? These questions will become critically important over the next decade as policy makers begin regulating this technology.

Have you ever wondered why doctors and nurses spend less and less time with you in the examination room these days, yet can still make a diagnosis faster than ever before? You can thank the proliferation of AI technology in the healthcare industry. To clarify, we are not referring to the Alexa or Google Home device your doctor may use to set reminders. We mean the wide array of AI tools and technologies that power every aspect of the healthcare system, whether for diagnosis and treatment, routine admin tasks, or insurance claims processing.


Image Source: Adobe Stock Image

As explained by Davenport et al. in their Future Healthcare Journal publication, AI technologies in healthcare have the potential to bring tremendous benefits to the healthcare community and to patients over the next decade. They can not only automate several tasks that previously needed the intervention of trained medical staff but also significantly reduce costs. The technology clearly has enormous potential and can be extremely beneficial to patients. But what are its risks? Will it be equally fair to all patients, or will it benefit some groups of people over others? In other words, is the AI technology unbiased and fair?

Even though full-scale adoption of AI in healthcare is expected to be 5-10 years away, there is already ample documented evidence of a lack of fairness in healthcare AI. According to this Positively Aware article, a 2019 study showed that an algorithm used by hospitals and insurers to manage care for 200 million people in the US was less likely to refer Black patients for extra care than white patients. Similarly, a Harvard School of Public Health article on algorithmic bias describes how the Framingham Heart Study performed much better for Caucasian patients than for Black patients when calculating cardiovascular risk scores.

Unsurprisingly, there is very little regulation or oversight of these AI tools in healthcare. A recently published ACLU article on how algorithmic decision-making in healthcare can deepen racial bias notes that, unlike medical devices regulated by the FDA, “algorithmic decision-making tools used in clinical, administrative, and public health settings — such as those that predict risk of mortality, likelihood of readmission, and in-home care needs — are not required to be reviewed or regulated by the FDA or any regulatory body.”

Image Source: Adobe Stock Image

While we trust that AI researchers and the healthcare community will continue to reduce algorithmic bias and strive to make healthcare AI fair for all, the question we ask today is: should patients have the right to consent to, contest, or opt out of AI technologies in their medical care? Given how little regulation and oversight these technologies receive, we strongly believe the right way forward is to create policies that treat the use of AI in healthcare the same way as a surgical procedure or drug treatment. Educating patients about the risks of these technologies and getting their informed consent is critical. Healthcare providers should clearly explain to patients what AI technologies are being used as part of their care and what the potential risks and drawbacks are, and should seek patients’ informed consent to opt in. Data on algorithmic accuracy across demographic groups should also be made available to patients so they can assess the risk that applies to them. If a patient is not comfortable with these technologies, they should have the right to opt out and still be entitled to an alternative course of treatment when possible.

 

Credit Scoring Apartheid in Post-Apartheid South Africa

By Anonymous | October 28, 2022

More than 20 years after the end of apartheid in South Africa, access to credit is still skewed by race toward the white community. Apartheid was a policy and system of segregation and discrimination under which the ruling white minority enjoyed greater access to resources at the expense of the Black majority in South Africa prior to 1994. Nonwhites included Blacks, Indians, and Coloureds, with Blacks (Africans) forming the majority. Black South Africans were excluded from certain educational institutions, neighborhoods, and types of education. As a result, Africans earned less than a third of what white South Africans earned.

Financial institutions in South Africa now use Big Data to develop credit models, or algorithms, to make credit decisions. It is argued that the use of Big Data and related credit models eliminates human bias from these decisions. This matters in South Africa because such decisions are more likely to be made by a white person, so any system that purports to eliminate human bias will be seen in a good light by Africans. Yet the South African credit market shows that Africans are either denied mortgages or receive unfavorable terms, such as high interest rates or lower mortgage amounts. The same holds for businesses owned by Black South Africans, which have access to fewer loans and lower loan amounts.

Access to credit enables investment in human capital and businesses, and it has the potential to reduce inequality in society and drive economic growth. To support growth for all, South Africa, through its Bill of Rights, enshrines the constitutional goals of human dignity, equality for all races, and freedom. Several laws have been passed to give effect to these goals and to enforce equality and prevent discrimination, such as the Employment Equity Act and the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA). These laws, however, do not apply to the data used in credit scoring, leaving a gap in the regulation of Big Data credit decisioning.

These credit scoring models are built using either generic statistical analysis or machine learning. The models are essentially based on historical credit-related data to determine the creditworthiness of individuals: the amount of credit that can be extended to a person, the period for which a loan can be granted, the interest rate payable, and the type of collateral required. The models place significant value on traits in the available Big Data universe through weights. These traits include factors like education, level and source of income, employment type (formal or informal), and assets owned, but all of this is learned from a data set made up predominantly of white people, given the history of the country. By implication, the modelled traits resemble the way of life of white people and the ways they have structurally benefited from apartheid. Credit models used for all races are therefore largely a byproduct of apartheid.
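
To make this concrete, here is a minimal sketch of how such a weighted scoring model works. The feature names, weights, and applicants are invented for illustration only; this is not any lender’s actual model, just the general shape of one.

```python
import math

# Hypothetical trait weights, illustrating how a scoring model encodes
# historical patterns. All names and values here are invented.
WEIGHTS = {
    "years_formal_employment": 0.40,
    "monthly_income_thousands": 0.25,
    "tertiary_education": 0.90,
    "owns_property": 1.10,
    "prior_credit_history_years": 0.55,
}
BIAS = -4.0  # baseline log-odds of repayment

def credit_score(applicant: dict) -> float:
    """Return an approval probability from a weighted sum of traits."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link: map log-odds to [0, 1]

# An applicant whose profile matches the historically advantaged group
# scores far higher than one excluded from property ownership and formal
# employment, even if both would repay equally well.
advantaged = {"years_formal_employment": 10, "monthly_income_thousands": 30,
              "tertiary_education": 1, "owns_property": 1,
              "prior_credit_history_years": 8}
excluded = {"years_formal_employment": 2, "monthly_income_thousands": 9,
            "tertiary_education": 0, "owns_property": 0,
            "prior_credit_history_years": 0}

print(f"advantaged: {credit_score(advantaged):.2f}")  # ~1.00
print(f"excluded:   {credit_score(excluded):.2f}")    # ~0.28
```

The point of the sketch is that the bias lives in the weights and the data they were fitted on, not in any explicit race variable.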

The table below illustrates the change in labour force participation by race since 1993, the year before apartheid was formally abolished. There was a significant increase in labour force participation among Africans (Blacks) from 1993 to 2008.

The table below shows average income by race, and it is clear that Africans continued to receive less than a third of the income received by white people from 1993 to 2008, even though their incomes increased.

The racial implications of credit apartheid are observed in several areas of everyday life. White South Africans constitute less than 10% of the population but own more than 72% of the available land and 37% of residential property. 69% of white people have internet access, compared to just 48% of Black people. 53% of entrepreneurs are white, while only 38% are African. Black South Africans own only 28% of available listed shares.

To reduce this bias, part of the solution is to build credit scoring models on data from digital platforms and applications that reflect the behavior of all consumers. This can be done by combining data from multiple sources, such as airtime usage, mobile money usage, geolocation, bill payment history, and social media usage. This could help eliminate Big Data bias by race, as it would result in new models that are more inclusive. The models could also apply different weights to different population groups and income-level categories, as sketched below, which would improve the fairness of the credit scoring models. It is also an opportunity for regulators to look closely at these models to reduce the racial bias that perpetuates apartheid and, by implication, racial inequality.
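
One simple way to read “different weights for different groups” is as post-processing: calibrate the decision cutoff per group so that a score distribution depressed by sparse credit history does not translate directly into denials. The sketch below is illustrative only; the group names and scores are invented, and equalizing approval rates is just one contested fairness criterion among several.

```python
def group_thresholds(scores_by_group: dict[str, list[float]],
                     approval_rate: float = 0.6) -> dict[str, float]:
    """Pick a per-group cutoff so each group is approved at the same
    target rate, instead of applying one cutoff tuned on historically
    white-dominated bureau data."""
    cutoffs = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores)
        k = int(len(ranked) * (1 - approval_rate))  # index of the cutoff score
        cutoffs[group] = ranked[k]
    return cutoffs

scores = {
    "group_a": [0.92, 0.88, 0.81, 0.77, 0.70],  # scores inflated by legacy data
    "group_b": [0.55, 0.48, 0.44, 0.39, 0.31],  # scores depressed by thin files
}
print(group_thresholds(scores))  # {'group_a': 0.81, 'group_b': 0.44}
```

A single cutoff of, say, 0.6 would approve all of group_a and none of group_b; per-group calibration is one blunt but transparent way for regulators to probe that gap.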

Such changes would ensure that credit scoring models are not based solely on data from credit bureaus, which mainly reflects white South Africans, who have always had access to credit.

Positionality

I’m a Black South African raised in a low-income home. At the beginning of my career, I was awarded only a 50% mortgage because I had no credit history, and probably because of my racial classification.


References

  • https://www.oecd.org/employment/emp/45282868.pdf
  • https://www.marklives.com/2017/09/media-future-internet-access-in-south-africa-has-many-divides/#:~:text=%E2%80%9COver%20two%2Dthirds%20of%20white,%2C8%25%20of%20coloured%20users.
  • https://www.gov.za/sites/default/files/gcis_document/201903/national-action-plan.pdf
  • https://mg.co.za/article/2011-12-09-who-owns-what-by-race/
  • https://www.news24.com/citypress/personal-finance/does-your-race-affect-your-interest-rate-20190315

In a Digital Playground, Who Has Access To Our Children?
By Anonymous | October 27, 2022

On one particularly lazy Saturday afternoon, I heard my then-8 year old daughter call, “Hey dad! Someone’s talking to me on my iPad. Can I talk back?” Taking a look, I was curious to see that another unknown user’s avatar was initiating a freeform text chat in a popular educational app. I didn’t realize it even had a chat feature. My dad-spidey-sense kicked in and I asked my daughter what she thought. She replied, “I don’t know this person, they’re a stranger, I’ll block them.”


Source: https://www.adventureacademy.com

Most parents these days have a good general idea of how to moderate “pull content” (where users choose the content to retrieve) for their kids as they use net-connected tablets at home. But for me, this situation represented a different kind of concern: how apps may allow the public at large to communicate live and directly, at any time, with our kids at home without our knowledge, whether through text-based chat or, more concerning, video or audio.

Solove’s Privacy Taxonomy

In 2006, privacy law professor Daniel Solove published a taxonomy to help illustrate harms from privacy violations, modeled across a complete lifecycle flowing from information collection, to processing and dissemination, to invasion. My concern lies in “intrusion” within the invasion stage, which he describes as “invasive acts that disturb one’s tranquility or solitude”.


Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=667622

This intrusion makes children, already vulnerable due to lack of maturity and autonomy, even more vulnerable to a myriad of both online and physical harms, such as scams, phishing attacks, child predation, kidnapping, cyberbullying, and identity theft.

Fourth Amendment Rights

The Fourth Amendment constitutionally protects “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures”. The framers of the Constitution, of course, didn’t have the Internet in mind in the 18th century; however, a landmark 1967 case, Katz v. United States, extended the interpretation of the Fourth Amendment to locations where a person would have a “reasonable expectation of privacy”. I would argue that the right to defend one’s household, and the family within it, from such intrusion should apply to the digital realm as well.

Cleveland State University’s Online Test Practices

A recent example of the Fourth Amendment being applied within the digital realm involves a ruling by a federal judge against Cleveland State University. While administering a remote online test to student Aaron Ogletree, the university required Aaron to provide test proctors with live webcam footage giving a wide scanning view of his bedroom. The judge ruled that the university violated Aaron’s constitutional right against unreasonable search and seizure due to Aaron’s lack of choice in the matter. Similarly, younger kids subjected to invasive app access by default may not know how to prevent this access.

Children’s Online Privacy Protection Act (COPPA)

The COPPA Rule became effective in 2000, in order to protect the online privacy of children under the age of 13 by requiring verifiable parental consent (VPC) before any collection of personally identifiable information (PII) can take place. Enforced primarily by the FTC, this legislation is important in providing parents with a basic standard of protection and a workflow to proactively opt in and control the online experience. Despite the best efforts of app developers to follow COPPA and build safe communities, anyone could still be on the other end of unsolicited communication if bad actors falsify sign-up information or if accounts get hacked.
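
The core of that workflow can be pictured as a gate in front of every PII write. The sketch below is a minimal, hypothetical model of a COPPA-style consent check, not any real app’s implementation; the field names and the verification step are invented for illustration.

```python
from dataclasses import dataclass, field

COPPA_AGE = 13  # the under-13 threshold set by the COPPA Rule

@dataclass
class ChildAccount:
    """Hypothetical account model for a COPPA-style consent gate."""
    age: int
    parental_consent_verified: bool = False
    pii: dict = field(default_factory=dict)

def collect_pii(account: ChildAccount, key: str, value: str) -> bool:
    """Refuse to store PII for under-13 users until verifiable
    parental consent (VPC) has been recorded."""
    if account.age < COPPA_AGE and not account.parental_consent_verified:
        return False  # block collection; prompt the VPC workflow instead
    account.pii[key] = value
    return True

kid = ChildAccount(age=8)
assert collect_pii(kid, "email", "kid@example.com") is False  # blocked
kid.parental_consent_verified = True  # e.g., after a verified payment-card check
assert collect_pii(kid, "email", "kid@example.com") is True
```

Of course, the gate is only as strong as the declared age and the verification behind it, which is exactly the loophole that falsified sign-ups exploit.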

Internet Crimes Against Children Task Force

In 1998, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) created the Internet Crimes Against Children Task Force Program (ICAC), in part due to “heightened online activity by predators seeking unsupervised contact with potential underage victims”. Their website shares, “Since 1998, ICAC Task Forces have reviewed more than 1,452,040 reports of online child exploitation, resulting in the arrest of more than 123,790 suspects.”

Conclusion

Fast forward to today, my younger child is now 6 and has asked for the same educational app. I’ve done enough due diligence to be comfortable installing it for him, but only because the app gives us the option to fully disable the “push chat” feature until he’s old enough to safely use it. If he’s like his older sister, he’ll outgrow the educational content well before then.


Source: https://pbs.twimg.com/media/FSV4TiPWYAEdWXx?format=jpg

References

[1] https://www.techopedia.com/definition/21548/push-media
[2] https://constitution.congress.gov/constitution/amendment-4/
[3] https://www.pewresearch.org/internet/2020/07/28/parenting-approaches-and-concerns-related-to-digital-devices/
[4] https://ssrn.com/abstract=667622
[5] https://www.americanbar.org/groups/judicial/publications/judges_journal/2016/spring/telephone_technology_versus_the_fourth_amendment/
[6] https://www.law.cornell.edu/constitution-conan/amendment-4/fourth-amendment-historical-background
[7] https://www.theverge.com/2022/8/23/23318067/cleveland-state-university-online-proctoring-decision-room-scan
[8] https://fpf.org/verifiable-parental-consent-the-state-of-play/
[9] https://ojjdp.ojp.gov/programs/internet-crimes-against-children-task-force-program

Is Immigration and Customs Enforcement above the law?

By Anonymous | October 28, 2022

ICE’s surveillance of the American population has flown under the radar and been kept secret.

Immigration and Customs Enforcement, or ICE, is a government agency that might know more about you than you think. When the topic of government surveillance comes up, ICE might not be the first agency that comes to mind for the general population, but it is the first that comes to mind in the undocumented immigrant community. ICE may have access to more information on the population than any other government agency.

ICE was founded in 2003, and since then it has gathered more and more information on the population living in the United States in the name of driving deportations. This gathering of information has led to mass surveillance of the population. ICE has accessed data sets containing information on the vast majority of people in the US. These records come from DMVs in states that allow undocumented immigrants to hold a driver’s license. ICE has also accessed records from agencies that provide essential services like electricity, gas, and water, and it has been able to access and purchase these data sets seemingly without any other government or public oversight. Another key source is the information gathered from unaccompanied minors trying to reach the US; the information provided by these children is then used to drive deportations of their families.

Immigration and Customs Enforcement officer

This is not only a legal matter but also an ethical one. There are privacy laws in place that should protect not only citizens but everyone living in the US, yet ICE seems to operate above those laws while gathering information on the entire population. There does not seem to be any enforcement holding ICE, or the agencies providing it data, accountable for the data they have accessed. How can these laws be enforced when ICE is able to conduct all of its surveillance in secret? The data that states, and especially government agencies, store should be strictly regulated. These are government agencies that the population trusts to keep their data safe, and this data is being shared without consent or notification.

ICE has access to driver’s license data on roughly 75% of the US population. This means that ICE’s surveillance is not limited to undocumented immigrants but extends to American citizens. ICE is also able to locate 75% of adults in the US based on utility records alone, and it has been using facial recognition technology since as early as 2008, when it accessed the Rhode Island DMV database to run facial recognition searches.

ICE takes advantage of the basic necessities that a person living in the US cannot do without in order to gather data on the US population, with deportation as the leading purpose. The state governments that allow undocumented immigrants to obtain a driver’s license should also have a responsibility to protect the information of the individuals who entrusted it to them. Access to basic necessities like utilities should not come at the cost of having your data shared with government agencies without your knowledge or consent.

ICE, and the data it has access to, should be more strictly regulated to comply with state and federal privacy laws. While this is already a concern in the undocumented community, it should be a greater concern for the entire population, because ICE has been able to access information on everyone, not just undocumented immigrants.

Citation:

Nina Wang, Allison McDonald, Daniel Bateyko & Emily Tucker, American Dragnet: Data-Driven Deportation in the 21st Century, Center on Privacy & Technology at Georgetown Law (2022).

My Browser is a Psychic

By Savita Chari | October 28, 2022

The other day I accompanied a friend to the local drug store. While she was shopping, I was browsing. I was tempted to buy my favorite chocolate, but the calorie count made me put it back. The next day, as I opened the Facebook app, an ad for that very brand of chocolate popped up with a coupon. Was it just a coincidence, or can Facebook read my mind?

Then, one day I started seeing email deals and ads promoting cruise ship adventures. I was dumbfounded, because the previous night at the dinner table my family had been discussing our next vacation. Is my home bugged? How did my browser know that I wanted a cruise vacation? We had not even started searching on the internet. This left me very uncomfortable.

Like all smart people, I decided to check whether other people had had similar experiences. I whipped out my phone and asked my trusted ‘Hey Google’ to search. Then it struck me: if Google can listen to my question, can it also listen to my other conversations? Could it be possible that, without my knowledge, all my conversations were being tracked by my Android devices? It turns out I am not the only one experiencing this. Forums such as Reddit and Quora have many people describing their own experiences of seeing ads based on private in-person or online conversations. Google vehemently denies that it eavesdrops and states that, according to its content policy for app developers, apps cannot listen unless specifically allowed to do so and in active use.

As reported by the BBC, two cybersecurity experts, Ken Munro and David Lodge of Pen Test Partners, performed an experiment [1] to see if it was physically possible for an Android phone to listen to conversation when not in use. They used APIs provided by the Android platform to write a small prototype app with the mic enabled. Lo and behold, it started recording entire conversations within the vicinity of the phone, even though the experimental app was not in use. There was no spike in data usage because wi-fi was enabled, and the battery did not drain enough to raise any alarms.

A Bloomberg report [5] cited people with inside knowledge saying that Facebook and Google hired many contractors to transcribe audio clips of their users. An article by USA Today [6] covered an investigation by the Dutch broadcaster VRT that showed how Google smart home devices eavesdrop on and record private conversations happening in our homes. Google contractors listen to these audio recordings and use them to enhance Google’s offerings (aka serve targeted ads).
Yes, our homes are bugged!

Serving ads online can only go so far to entice us. Companies know that to increase revenue, ads should be served at the moment we are shopping. This has become possible with the advent of the Bluetooth beacon. These small Bluetooth devices are strategically placed throughout a store or an enclosed mall and communicate with our smartphones via Bluetooth. Because of this, they know our precise location and can serve us ads or promotional deals in real time. No wonder stores these days encourage us to download their apps. The beacons communicate with the app and report information like what we are browsing and where we linger. Armed with this knowledge, the apps send us notifications about promotions on those items, encouraging us to buy while we are in the store and more likely to spend extra.
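
Under the hood there is no magic: a beacon just broadcasts a tiny identifier packet that an app looks up in a database of stores and aisles. The sketch below parses the published Apple iBeacon frame layout; the example frame and the store/aisle meaning of major/minor are invented for illustration, and a real app would receive this payload through the platform’s Bluetooth APIs rather than as raw bytes.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple iBeacon manufacturer data: a 16-byte UUID identifying
    the deployment, plus major/minor integers that an app can map to a
    particular store and a spot inside it."""
    if len(mfg_data) < 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack(">b", mfg_data[24:25])[0]  # calibrated RSSI at 1 m,
    return beacon_uuid, major, minor, tx_power          # used to estimate distance

# A fabricated frame: hypothetically, major = store #7, minor = aisle 42.
frame = (b"\x4c\x00\x02\x15" + uuid.UUID(int=0xABCD).bytes
         + struct.pack(">HHb", 7, 42, -59))
print(parse_ibeacon(frame))
```

The beacon itself knows nothing about us; the tracking comes from the app reporting which identifiers it saw, and when, back to a server.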
Even if we don’t have the store’s app installed, there is a stealth industry of third-party location-marketing companies that collect data based on where we pause in an aisle to examine a product, which signals our possible interest. These companies bundle their beacon-tracking code into toolkits used by other app developers, who are paid in cash or in kind (free reports) to include them; the goal is to reach as many mobile phones as possible. These third-party companies are mostly very small; they go bankrupt at the drop of a lawsuit, and another crops up in their place. The collected data is sold to big consumers like Facebook and Google, which already hold a plethora of data on us from various sources. In their privacy statements, they mention these “third-party” partners who help them serve targeted ads and promotions.

Picture Credit: NYTimes
Now I know how I got the ad for that chocolate.
So much for clairvoyance!

Games are Dangerous! – Robinhood and Youth Gambling

By John Mah | October 28, 2022

Disclaimer: This post mentions an instance of suicide.

Learning to invest and understanding the stock market are fundamental to wealth building. Unfortunately, only 58% of Americans reported owning stock in 2022 [1]. That is not surprising, considering how terrifying the stock market can seem, with its indecipherable indexes, mind-boggling charts, and confusing financial jargon. With gamification, however, all these barriers to entry are thrown out the window thanks to the addictive psychological mechanism of variable rewards [2]. Teenagers and young adults are especially susceptible; they are entering the financial world without proper risk control and developing an addiction, akin to gambling, to excessive trading for quick profits. Robinhood, a commission-free trading platform, is leading the charge in this abuse and making a conscious decision to disregard the well-being of the youth for its own gain.

 

What is Gamification?

Gamification refers to the development of features that make an application’s user experience more visually appealing and intuitive. For Robinhood, the purpose is to make stock trading more fun for the average consumer; it is like playing a video game, where it’s appealing to see fun visual graphics when performing an action [3]. These techniques are dangerous because they go beyond reinforcing a “fun” action; instead, they create addictive habits in users.

So why are technology companies like Robinhood using these techniques? It’s simple: to retain users on their platforms, which leads to increased revenue and opportunities to procure data. These techniques ultimately tie into the “hooked model” [4], a pattern that lets companies embed their applications in users’ daily lives as habits rather than tools. The hooked model isn’t unique to Robinhood; it is deeply embedded in many of today’s most popular applications, like Instagram and Uber.
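
The engine of the hooked model is the variable reward: the payoff for an action is unpredictable, which is the reinforcement schedule behavioral psychology associates with the most persistent habits. Here is a toy simulation of that schedule; the probability and the “confetti” framing are invented for illustration and have nothing to do with Robinhood’s actual code.

```python
import random

def session_rewards(actions: int, reward_prob: float = 0.3,
                    seed: int = 0) -> list[bool]:
    """Toy model of a variable-ratio reward schedule: each action
    (a trade, a feed refresh) pays off unpredictably."""
    rng = random.Random(seed)
    return [rng.random() < reward_prob for _ in range(actions)]

rewards = session_rewards(actions=20)
# Unpredictable wins keep the user pulling the lever: you never know
# which tap triggers the celebration animation or the winning trade.
print("".join("*" if r else "." for r in rewards))
```

A fixed reward (every 10th trade wins) quickly becomes boring; it is the randomness that keeps users checking back, which is exactly what a broker paid per trade wants.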

 [6]

So Why Target the Youth?

Gamification techniques are particularly effective on the youth because they are more familiar with game-like interfaces. It is no surprise, then, that roughly 40% of Robinhood’s user base is between the ages of 19 and 34; in fact, the median age of its users is 31 [3]. It’s important to understand that Robinhood is a broker that makes money from trades executed on its platform. In essence, the more trades done on the platform, the more money it makes. Robinhood’s use of gamification is now very clear. Teenagers and young adults have little to no experience with investing. When you combine this lack of knowledge with the extremely high risk tolerance that comes from youthful brashness and years of playing video games, you get the perfect targets for brokers. Robinhood is not in the business of education; by exploiting the youth for profit, it violates the principle of justice articulated in the Belmont Report.

 [7]

Inadequate Disclaimers and Disclosures

Without a doubt, the financial sector is heavily regulated, and much of that regulation concerns the information that providers must disclose to customers. Unfortunately, in an application like Robinhood, with its heavily gamified features, these disclaimers and disclosures are extremely easy to overlook. The game-like designs leave little room for any disclaimers or disclosures about the risks users take when performing trades. Users essentially lose their ability to make rational decisions, and we can argue that their ability to “consent” sensibly is taken away. There are clear dangers to this. One example is the case of Alexander Kearns, a 20-year-old from Naperville, Illinois, who took his own life after mistakenly believing he had lost $730,000 [5]. His lack of knowledge about how options worked led him to believe the loss was real. It’s not farfetched to say this will no longer be a rare occurrence in this age of technology.

Conclusion

Addiction is a real problem. As companies continue to develop new applications, it’s only logical that they will want to rope in as many users as they can, and gamified features fit that goal perfectly. The youth are the biggest victims, as they grew up with technology. So the next time you open your favorite application, check whether you’ve been hooked into a gamified model.

 

Sources

[1] Dhawan, S. (2022, August 11). Financialexpress. The Financial Express Stories. Retrieved October 28, 2022, from https://www.financialexpress.com/investing-abroad/featured-stories/58-of-americans-reported-owning-stock-in-april-2022-with-an-ownership-rate-of-89-for-adults-earning-100000-or-more/2626249/

[2] Eyal, N. (2022, August 25). Variable rewards: Want to hook users? Drive them crazy. Nir and Far. Retrieved October 28, 2022, from https://www.nirandfar.com/want-to-hook-your-users-drive-them-crazy/

[3] Gallo, N. (2022, June 29). Robinhood and the gamification of investing. FinMasters. Retrieved October 28, 2022, from https://finmasters.com/gamification-of-investing/

[4] Eyal, N. (2022, September 11). The hooked model: How to manufacture desire in 4 steps. Nir and Far. Retrieved October 28, 2022, from https://www.nirandfar.com/how-to-manufacture-desire/

[5] CNBC. (2021, February 9). Robinhood sued by family of 20-year-old trader who killed himself after believing he racked up huge losses. CNBC. Retrieved October 28, 2022, from https://www.cnbc.com/2021/02/08/robinhood-sued-by-family-of-alex-kearns-20-year-old-trader-who-killed-himself-.html

[6] https://edtechnology.co.uk/dashboard2/wp-content/uploads/2021/09/3726585-scaled.jpg

[7] https://forexillustrated.com/wp-content/uploads/2022/03/Biggest-trading-losses-from-Wallstreetbets-800×533.png