Brain-Machine Interfaces and Neuralink: privacy and ethical concerns
By Anonymous | July 9, 2021

Brain-Machine Interfaces

As microchips continue to advance and our understanding of neuroscience deepens, seamless brain-machine interfaces, devices that decode signals from the user’s brain to perform functions, are becoming more of a reality. Various forms of these technologies already exist, but recent advances have made implantable and portable devices possible. Imagine a future where humans no longer need to talk to each other, but can instead transmit their thoughts directly to another person. This is the eventual goal of Elon Musk, the founder of Neuralink, currently one of the main companies advancing this type of technology. Analyzing Neuralink’s technology and its overall mission statement provides interesting insight into the future of this kind of human-computer interface and the potential privacy and ethical concerns it raises.

[Diagram of a brain-computer interface]

Brain-machine interfaces have actually existed for over 50 years; research on them began in the 1970s at UCLA. With recent developments in wireless technology, implantable devices, computational power, and electrode design, however, a world where an implanted device can read the motor signals of a brain is now possible. In fact, Neuralink has already achieved this in a macaque monkey, which the company enabled to control a game of Pong with its mind. Neuralink’s current goal is to advance the prosthetics space by allowing prosthetic devices to read input directly from the user’s motor cortex. The applications of this technology are vast, however, and Musk has floated other ideas, such as downloading languages into the brain, essentially allowing the device to write onto the brain. For now this remains out of reach, as our current understanding of the brain is insufficiently advanced, yet we are making progress in this direction every year. A paper recently published in Nature demonstrated high-performance decoding of motor cortex signals into handwriting using a recurrent neural network.

[Picture from Neuralink of a monkey controlling a game of Pong]
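
To give a sense of what “decoding motor cortex signals with a recurrent neural network” looks like in practice, here is a minimal, illustrative sketch in Python (PyTorch). The channel count, bin size, character set, and network shape are assumptions made for illustration; they are not details of the Nature paper or of Neuralink’s system.

```python
# Illustrative sketch of RNN-based neural decoding: binned spike counts in,
# per-timestep character predictions out. All shapes are hypothetical.
import torch
import torch.nn as nn

class HandwritingDecoder(nn.Module):
    def __init__(self, n_channels=192, hidden_size=256, n_chars=31):
        super().__init__()
        # Recurrent layer integrates neural activity over time.
        self.rnn = nn.GRU(n_channels, hidden_size, num_layers=2, batch_first=True)
        # Linear readout maps each hidden state to character logits.
        self.readout = nn.Linear(hidden_size, n_chars)

    def forward(self, spikes):
        # spikes: (batch, time_bins, n_channels) of binned firing rates
        hidden, _ = self.rnn(spikes)
        return self.readout(hidden)  # (batch, time_bins, n_chars)

# Toy usage: decode one second of simulated activity (50 bins of 20 ms).
model = HandwritingDecoder()
fake_activity = torch.randn(1, 50, 192)
char_logits = model(fake_activity)
predicted = char_logits.argmax(dim=-1)  # most likely character per time bin
print(predicted.shape)  # torch.Size([1, 50])
```

In a real system, a decoder like this would be trained on recorded neural activity paired with attempted movements; the sketch only shows the inference path.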

Privacy

As this technology develops further, several privacy and ethical concerns come into question. Using Solove’s Taxonomy as a privacy framework reveals many areas of potential harm.

In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions, and this information would need to be transmitted to another device for processing. Collection of this information by companies such as advertisers would represent a major breach of privacy.

There is also risk to the user from information processing. These devices must work concurrently with other devices, often wirelessly, and given the widespread importance of cloud computing in today’s technology, offloading information from these devices to the cloud would be likely. Storing the data in a database puts the user at risk of secondary use if proper privacy policies are not implemented. The information collected from the brain is a vast trove, and these datasets could be combined with existing databases, such as Google browsing history, to provide third parties with unimaginable context on individuals.

There is also risk of information dissemination, more specifically exposure. The information collected and processed by these devices would need to be stored digitally, and keeping such private information, even if anonymized, carries a huge potential for harm, as the contents of the information may in themselves be re-identifiable to a specific individual.

Lastly, there is risk of invasions such as decisional interference. Brain-machine interfaces would not only be able to read information from the brain but also write information to it. This would allow the device to make emotional changes in its users, which would be a major example of decisional interference. Similar capabilities are already present in devices that implant electrodes for deep brain stimulation to treat major depression.
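
To make the re-identification risk concrete, here is a toy, entirely hypothetical sketch in Python (pandas) of a linkage attack: an “anonymized” export of decoded brain data is joined with an auxiliary dataset on shared quasi-identifiers. The column names, records, and the scenario itself are invented for illustration only.

```python
# Toy linkage-attack illustration with invented data: joining "anonymous"
# neural records to a named profile via quasi-identifiers re-identifies users.
import pandas as pd

# Anonymized records exported from a hypothetical BMI cloud service.
neural_log = pd.DataFrame({
    "zip": ["94704", "10001"],
    "session_start": ["2021-07-01 09:00", "2021-07-01 09:00"],
    "decoded_emotion": ["anxious", "calm"],
})

# Auxiliary data a third party might already hold (e.g., from browsing history).
ad_profile = pd.DataFrame({
    "zip": ["94704", "10001"],
    "session_start": ["2021-07-01 09:00", "2021-07-01 09:00"],
    "name": ["Alice", "Bob"],
})

# Joining on quasi-identifiers links the "anonymous" brain data back to names.
reidentified = neural_log.merge(ad_profile, on=["zip", "session_start"])
print(reidentified[["name", "decoded_emotion"]])
```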

Ethics

One of the most common frameworks for guiding ethical behavior in research and science is the Belmont Principles, which include respect for persons, beneficence, and justice. Future brain-machine interfaces present challenges to all three. To uphold respect for persons, people’s autonomy must be respected; with these devices, however, users’ emotions could be physically altered by the device itself, undermining their autonomy. Beneficence involves doing no harm to participants, yet, as the privacy harms above suggest, early adopters of the technology are likely to be harmed. With regard to justice, these devices may also fall short. The first iterations of the devices will be extremely expensive and unattainable for most people, while the potential cognitive benefits of such devices would be vast. This could further widen the already large wealth inequality gap: the benefits of such a device would not be spread fairly across all participants and would mostly accrue to those who could afford it.

According to other respected neuroscientists working on brain-machine interfaces, devices with the capabilities Musk describes are still quite far away; we still lack fundamental knowledge about the brain and its inner workings. Yet our existing guidelines for privacy and ethics fail to encompass the potential of such advances in brain-machine interfaces, which is why further thought is needed to develop the policies and frameworks that will properly guide this technology.