
L&E Analysis: What is Neural Privacy, and Why is it Important?

More US States Are Regulating It

By Christina Catenacci

Mar 14, 2025

Legal & Ethical Analysis: Issue 1


Key Points 


  • Neural data is very sensitive and can reveal a great deal about a person 


  • The law is starting to catch up to the tech and the ethicists’ concerns  


  • In North America, California and Colorado are leading the way when it comes to creating mental privacy protections in relation to neurotechnology 


This is a hot topic, but what is it? Generally speaking, neural data is information generated by measuring activity in a person's central or peripheral nervous system, including brain activity (captured by EEGs, fMRIs, or implanted devices); signals from the peripheral nervous system (the nerves that extend from the brain and spine); and data that can be used to infer mental states, emotions, and cognitive processes. Interestingly, this data has been used to create artificial neural networks. For instance, machine vision can be used to identify a person's emotions by analyzing their facial expressions. 


Some may be surprised to learn that many types of neurotechnology (neurotech) exist today. But what is it? Neurotechnology bridges the gap between neuroscience, the scientific study of the nervous system, and technology. The goal of neurotech is to understand how the brain can be enhanced by technological advancements and to create applications that improve both brain function and overall human health. In fact, some may characterize this growing area as “a thrilling glimpse into the potential of human ingenuity to transform lives”. Others have noted that neurotechnology, combined with the explosion of AI, opens up a world of infinite possibilities. 


One simple way of explaining neurotech is to divide it into two categories: invasive (such as implants) and non-invasive (such as wearables). Invasive neurotech is mostly used in medicine to treat neurological disorders such as Parkinson’s disease. 


Neural privacy has to do with being confident that we control access to our own neural data and to information about our mental processes. This article delves into the law of neural privacy and the ethics of neurotech. 


The Law Involving Neural Privacy 


In the United States, there has been a flurry of activity in this regard. Why is neural privacy important? Essentially, neural data is highly sensitive personal data: it can reveal thoughts, emotions, and intentions. Certain parties have a lot to gain if they are privy to this information (think of employers, insurers, or law enforcement), and access to it could affect how workers do their jobs, how individuals apply for insurance coverage, and how citizens engage with their societies. Another aspect is data ownership: who owns one’s thoughts? Some may believe that this question is for the distant future, but it is worth mentioning that Neuralink has already had its first human patient use a brain implant to play online chess. 


It is here already! This may be why the UN Special Rapporteur on the right to privacy recently set out foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy. More precisely, the UN Report deals with key definitions and establishes fundamental principles to guide regulation in this area, including the protection of human dignity, the safeguarding of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for the processing of this data. Emphasis is placed on embedding ethical values and the protection of human rights in the design of neurotechnologies.  


While Canada has not yet legislated on mental privacy, the following US jurisdictions have: 


  • California: the California Consumer Privacy Act (CCPA) was amended by SB 1223, which added “neural data” to the definition of sensitive personal information and defined it as “information that is generated by measuring the activity of a consumer’s central or peripheral nervous system, and that is not inferred from nonneural information”. Governor Newsom has already approved this amendment. California also has two new bills: SB-44 (Neural Data Protection Act), which would deal with brain-computer interfaces and govern the disclosure of medical information by an employer, a provider of health care, a health care service plan, or a contractor, adding new protections for neural data; and SB-7 (Automated Decision Systems (ADS) in the Workplace), which would require an employer, or a vendor engaged by the employer, to provide written notice that an ADS is in use at the workplace for the purpose of making employment-related decisions to all workers who will be directly or indirectly affected by it 

 

  • Colorado: the Colorado Privacy Act was amended by HB 24-1058, which defines “neural data” as “information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device” and adds neural data to the definitions of biological data and sensitive data. This has already been signed into law  

 

  • Connecticut: SB 1356 (An Act Concerning Data Privacy, Online Monitoring, Social Media, and Data Brokers) is a bill that would amend the Connecticut Data Privacy Act, define “neural data” as “any information that is generated by measuring the activity of an individual's central or peripheral nervous system”, and include it in the definition of sensitive data   

 

  • Illinois: HB 2984 is a bill that would amend the Biometric Information Privacy Act, define “neural data” as “information that is generated by the measurement of activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information”, and add neural data to the definition of biometric identifier  

 

  • Massachusetts: HD 4127 (Neural Data Privacy Protection Act) is a bill that would define “neural data” as “information that is generated by measuring the activity of an individual’s central or peripheral nervous system, and that is not inferred from non-neural information” and include it in the definition of sensitive covered data. This is a significant step, since Massachusetts has no comprehensive consumer privacy law at this point 

 

  • Minnesota: SF 1240 is a bill that would not amend consumer privacy legislation but would rather stand alone, establishing a right to mental data and setting out neurotech rights concerning brain-computer interfaces. It would begin to apply in August 2025 

 

  • Vermont: there are three bills involving neural data protection: H210 (Age-Appropriate Design Code Act), H208 (Data Privacy and Online Surveillance Act), and H366 (An Act Relating to Neurological Rights). In a nutshell, these bills would define “neural data” as “information that is collected through biosensors and that could be processed to infer or predict mental states”, provide individuals with the right to mental or neural data privacy, protect minors specifically, and create a comprehensive consumer privacy bill that includes specific protections for neural data  


Clearly, it is becoming more important to enact mental or neurological privacy protections when it comes to neurotech and automated decision-making systems. These states are leading the way in North America, whether by adding provisions to their consumer privacy legislation or by creating standalone statutes, and they could influence the direction of legislation in both Canada and the United States as a whole. 


Ethics of Neurotechnology 


Let us begin this discussion with the question: why is neural data unique? Simply put, neural data is not just a phone number or a person’s age. It is very sensitive and can reveal much more about a person. This is why Cooley lawyers refer to it as a kind of digital “source code” for an individual, potentially uncovering thoughts, emotions, and even intentions: 


“From EEG readings to fMRI scans, neural data allows insights into neural activity that could, in the future, decode neural data into speech, detect truthfulness or create a digital clone of an individual’s personality” 


Several thinkers have asked what needs to be protected. For example, the Neurorights Foundation tackles the issue of human rights for the age of neurotech. It advocates for promoting innovation, protecting human rights, and ensuring the ethical development of neurotech. 


The foundation has created a number of research reports, including Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies, which analyzed the data practices and user rights of consumer neurotechnology products. The report identified several areas of concern: access to information, data collection and storage, data sharing, user rights, and data safety and security. 


The conclusion was that the consumer neurotechnology space is growing at a rate that has outpaced research and regulation. Further, most existing neurotechnology companies do not adequately inform consumers or protect their neural data from misuse and abuse. The report was created so that companies and investors can appreciate the specific further measures needed to responsibly expand neurotechnology into the consumer sphere. 


Additionally, UNESCO has pointed out that several innovative neurotechnologies, such as brain stimulation and neuroimaging, have changed the face of our understanding of the nervous system. Neurotechnology has helped us address many challenges, especially in the context of neurological disorders; however, there are also ethical issues and problems, particularly with non-invasive interventions. For example, neurotechnology can directly access, manipulate, and emulate the structure of the brain: it can produce information about our identities, our emotions, and our fears. Combined with AI, this neurotech can threaten notions of human identity, human dignity, freedom of thought, autonomy, (mental) privacy, and well-being. 

UNESCO states that the fast-developing field of neurotechnology is promising, but we need a solid governance framework: 


“Combined with artificial intelligence, these techniques can enable developers, public or private, to abuse of cognitive biases and trigger reactions and emotions without consent. Consequently, this is not a technological debate, but a societal one. We need to react and tackle this together, now!” 


In fact, UNESCO has drafted a Working Document on the Ethics of Neurotechnology, which discusses the following ethical principles and human rights: 


  • Beneficence, proportionality, and do no harm: Neurotechnology should promote health, awareness, and well-being, empower individuals to make informed decisions about their brain and mental health, and foster a better understanding of themselves. That said, restrictions on human rights need to adhere to legal principles, including legality, legitimate aim, necessity, and proportionality 

 

  • Self-determination and freedom of thought: Throughout the entire lifecycle of neurotechnology, the protection and promotion of freedom of thought, mental self-determination, and mental privacy must be secured. That is, neurotechnology should never be used to exert undue influence or manipulation, whether through force, coercion, or other means that compromise cognitive liberty 

 

  • Mental privacy and the protection of neural data: With neural data, there is a risk of stigmatization or discrimination, and of revealing neurobiological correlates of diseases, disorders, or general mental states without the authorization of the person from whom the data is collected. Mental privacy is fundamental for the protection of human dignity, personal identity, and agency. The collection, processing, and sharing of neural data must be conducted with free and informed consent, in ways that respect the ethical and human rights principles outlined by UNESCO, including safeguarding against the misuse or unauthorized access of neural and cognitive biometric data, particularly in contexts where such data might be aggregated with other sources 

 

  • Trustworthiness: Neurotechnology systems for human use should always ensure trustworthiness across their entire lifecycle to guarantee the respect, promotion, and protection of human rights and fundamental freedoms. This requires that these systems do not replicate or amplify biases; are transparent, traceable, and explainable; are grounded in solid scientific evidence; and define clear conditions for responsibility and accountability 

 

  • Epistemic and global justice: Public awareness of brain and mental health and understanding of neurotechnology and the importance of neural data should be promoted through open and accessible education, public engagement, training, capacity-building, and science communication 

 

  • Best interests of the child and protection of future generations: It is important to balance the potential benefits of enhancing cognitive function through neurotechnology for early diagnosis, instruction, education, and continuous learning with a commitment to the holistic development of the child. This includes nurturing their social life, fostering meaningful relationships, and promoting a healthy lifestyle encompassing nutrition and physical activity 

 

  • Enjoying the benefits of scientific-technological progress and its applications: Access to neurotechnology that contributes to human health and wellbeing should be equitable. The benefits of these technologies should be fairly distributed across individuals and communities globally 


The document also touches on areas outside health, such as employment. For instance, as neurotechnology evolves and converges with other technologies in the workplace, it presents unique opportunities and risks in labour settings. Policy frameworks are needed that protect employees’ mental privacy and right to self-determination while also promoting their health and wellbeing, balancing the potential for human flourishing against the imperative to safeguard against practices that could infringe on mental privacy and dignity. 


In Four ethical priorities for neurotechnologies and AI, the authors discussed AI and brain-computer interfaces and explored four ethical concerns with respect to neurotech: 


  1. Privacy and consent: an extraordinary level of personal information can already be obtained from people's data trails, and this is how the concern is framed. The authors stress that individuals should have the ability and the right to keep their neural data private; the default choice needs to be “opt out” 

 

  2. Agency and identity: the authors assert that as neurotechnologies develop and corporations, governments, and others start striving to endow people with new capabilities, individual identity (bodily and mental integrity) and agency (the ability to choose our actions) must be protected as basic human rights 

 

  3. Augmentation: there will be pressure to adopt enhancing neurotechnologies, such as those that allow people to radically expand their endurance or their sensory or mental capacities. This will likely change societal norms, raise issues of equitable access, and generate new forms of discrimination. The authors note that outright bans of certain technologies could simply push them underground; thus, decisions must be made within a culture-specific context, while respecting universal rights and global guidelines 

 

  4. Bias: a major concern is that biases could become embedded in neural devices. It is therefore necessary to develop countermeasures to combat bias and to include probable user groups (especially those who are already marginalized) in the design of algorithms and devices, so that biases are addressed from the first stages of technology development 


The paper also touched on the need for responsible neuroengineering: there was a call for industry and academic researchers to take on the responsibilities that came with devising these devices and systems. The authors suggested that researchers draw on existing frameworks that have been developed for responsible innovation. 


In Philosophical foundation of the right to mental integrity in the age of neurotechnologies, the author equated neurorights such as mental privacy, freedom of thought, and mental integrity with basic human rights, and created a philosophical foundation for a specific right, the right to mental integrity. That foundation included both the classical concepts of privacy and non-interference in our mind/brain. 


In addition, the author considered a philosophical foundation grounded in certain features of the mind that cannot be reached directly from the outside: intentionality, the first-person perspective, personal autonomy in moral choices and in the construction of one's narrative, and relational identity. The author asserted that a variety of neurotechnologies or other tools, including AI, alone or in combination, could, by their very availability, threaten our mental integrity. 


To that end, the author proposed philosophical foundations for a right to mental integrity that encompassed both privacy and protection from direct interference in mind/brain states and processes. Such foundations focused on aspects that are well known within the philosophy of mind but not commonly considered in the literature on neurotechnology and neurorights. Intentionality, the first-person perspective, moral choice, and the construction of one’s identity were concepts and processes that needed as precise a theoretical definition as possible. The author stated: 


“In our perspective, such a right should not be understood as a guarantee against malicious uses of technologies, but as a general warning against the availability of means that potentially endanger a fundamental dimension of the human being. Therefore, the recognition of the existence of the right to mental integrity takes the form of a necessary first step, even prior to its potential specific applications” 


In Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought, the author stated that progress in neurotechnology and AI provided unprecedented insights into the human brain. Likewise, there were increasing opportunities to influence and measure brain activity. These developments raised several legal and ethical questions. 


The author argued that the right to freedom of thought could be coherently interpreted as providing comprehensive protection of mental processes and brain data, which could offer a normative basis regarding the use of neurotechnologies. Moreover, an evolving interpretation of the right to freedom of thought was more convincing than introducing a new human right to mental self-determination. 


What Can We Take from These Developments? 


Undoubtedly, ethicists have spent a considerable amount of time thinking about mental privacy in the age of neurotech, and exactly what is at risk if there are no privacy and AI protections in place. 


Fortunately, the law is starting to catch up to the tech and the ethicists’ concerns. For example, California and Colorado have already enacted amendments to their consumer privacy statutes, and more bills have been introduced to address these issues. 
