Neurotechnologies, Privacy, and the Need for a Revised Belmont Report

By Filippos Papapolyzos | June 18, 2021

Ever since Elon Musk launched his futuristic venture Neuralink in 2016, the popularity of neurotechnology – a field at the intersection of neuroscience and engineering – has skyrocketed. Neuralink’s goal has been to develop skull-implantable chips that interface a person’s brain with their devices, a technology the founder claimed, in 2017, was 8 to 10 years away. Such technology would have tremendous potential for individuals with disabilities and brain-related disorders, allowing speech-impaired individuals to regain their voice, for instance, or paraplegics to control prosthetic limbs. The company’s founder, however, has largely focused on advertising a more commercial version of the technology that would allow everyday users to control their smartphones, or even communicate with each other, using just their thoughts. To this day, the project remains very far from reality: MIT Technology Review has dubbed it “neuroscience theater” aimed at stirring excitement and attracting engineers, while other experts have called it “bad science fiction.” Regardless of whether brain-interfacing technologies eventually succeed, the truth remains that we are deeply underprepared for them from a legal and ethical standpoint, given their immense privacy implications as well as the philosophical questions they pose.

The Link

Implications for Privacy

Given that neurotechnologies would have direct, real-time access to one’s most internal thoughts, many consider them the last frontier of privacy. All our thoughts, including our deepest secrets and perhaps even ideas we are not consciously aware of, would be digitized and transmitted to our smart devices or the cloud for processing. Not unlike other data we casually share today, this data could be passed on to third parties such as advertisers and law enforcement. Processing and storing this information in the cloud would expose it to all sorts of cybersecurity risks, putting individuals’ most personal information – and even their dignity – at risk. If data breaches today expose proxies of our thoughts – i.e., the data we produce – breaches of neural data would expose our innermost selves. Law enforcement could surveil and arrest individuals simply for thinking of committing a crime, and malicious hackers could make us think and do things against our will, or extort money for our thoughts.

A counterargument often made regarding such data sharing is that, in some ways, it already happens. Smartphones already function as extensions of our cognition and, through them, we disclose all sorts of information, whether through social media posts or apps that monitor our fitness. A key difference between neurotechnologies and smartphones, however, is the voluntariness of our data sharing today. A social media post, for instance, constitutes an action – i.e., a thought that has been manifested – and is both consensual and voluntary in the clear majority of cases. Instagram may process our every like and engagement, but we still retain the option of not performing that action. Neuralink would be tapping into thoughts, which have not yet been manifested into action because we have not yet applied our decision-making skills to judge the appropriateness of performing said action. Another key difference is the precision of these technologies: neurotechnologies would not be humanity’s first attempt at influencing a person’s course of action, but they would certainly be the most refined. Lastly, the mechanics of the human brain remain vastly unexplored, and what we call consciousness may be merely the tip of the iceberg. If Neuralink were to expose what lies underneath, the surprise would likely not be a pleasant one.

Brain Privacy

Challenges to The Belmont Report

Since its publication in 1979, the Belmont Report has been a milestone for research ethics and is still widely consulted by ethics committees, such as the Institutional Review Boards that oversee academic research, having established the principles of Respect for Persons, Beneficence, and Justice. With the rise of neurotechnologies, it is becoming evident that these principles are no longer sufficient. The challenges posed by brain-interfacing have led many experts to call for a globally coordinated effort to draft a new set of rules governing neurotechnologies.

The principle of Respect for Persons is rooted in the idea that individuals act as autonomous agents, which is an essential requirement for informed consent. Individuals with diminished autonomy, such as children or prisoners, are entitled to special protections to ensure they are not taken advantage of. Neurotechnologies could potentially influence one’s thoughts and actions, thereby undermining one’s agency and, as a result, one’s autonomy. The authenticity of any consent given after an individual has interfaced with a neurotechnology would be subject to doubt, as we would be unable to judge the extent to which third parties might have participated in that individual’s decision making.

It follows that the principle of Beneficence could not be guaranteed either. Human decision-making is not inherently rational and may often work to one’s detriment, but when a decision is voluntary, its consequences are seen as part of one’s self. When harm is not voluntary, it is inflicted; when consent is contested, autonomy is put into question. Any potential harm suffered by a person using brain-interfacing technologies could therefore be seen as a product of said technologies, and thus as a form of inflicted harm. Since Beneficence is founded on the “do not harm” maxim, neurotechnologies pose a serious challenge to this principle.

Lastly, given that a private company would realistically sell the technology at a high price point, it would be accessible only to those who can afford it. If the device were to augment users’ mental or physical capacities, it would severely magnify pre-existing inequalities, on the basis of income as well as willingness to use the technology. This poses a challenge to the Justice principle, as non-users could bear an asymmetric burden as a result of the benefits received by users.

In his frequently quoted masterpiece 1984, George Orwell describes a dystopian future where “nothing was your own except the few cubic centimetres inside your skull”. Will this future soon be past?

References

Can privacy coexist with technology that reads and changes brain activity? (Science News)

https://news.columbia.edu/content/experts-call-ethics-rules-protect-privacy-free-will-brain-implants-and-ai-merge

https://www.cell.com/cell/fulltext/S0092-8674(16)31449-0

https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html

https://www.csoonline.com/article/3429361/what-are-the-security-implications-of-elon-musks-neuralink.html

https://www.georgetowntech.org/publications-blog/2017/5/24/elon-musks-neuralink-privacy-protections-for-merged-biological-machine-intelligence-sa3lw

https://www.technologyreview.com/2020/08/30/1007786/elon-musks-neuralink-demo-update-neuroscience-theater/

https://www.sciencetimes.com/articles/31428/20210528/neuralink-brain-chip-will-end-language-five-10-years-elon.htm

https://www.theverge.com/2017/4/21/15370376/elon-musk-neuralink-brain-computer-ai-implant-neuroscience

https://www.inverse.com/science/neuralink-bad-sci-fi