We will soon need ‘neuro-rights’ to protect our brains and thoughts from technology

In today’s digital world, nothing you do and nothing you say is private. Not only do the walls have ears, but they are also connected to the internet. There is only one space in the world that is truly private to you, and that is your mind. But even that won’t be true for long.

Elon Musk’s Neuralink may seem like it borders on science fiction. But the day is not far off when a machine will be able to read, and perhaps even change, a person’s mind. Some advocates of neurorights, or human rights aimed specifically at protecting the brain, want to introduce regulations before this becomes a reality.

Jack Gallant, a cognitive scientist at UC Berkeley, and his fellow researchers published a paper back in 2011 detailing a rudimentary way to “read minds.” Volunteers in the study were asked to watch hours of video clips while their heads were inside an fMRI machine. The researchers then trained a model on a dataset that linked the recorded brain activity to each corresponding video frame. Next, the volunteers watched new videos while their brain activity was still being recorded, and this data was fed into the model trained earlier. The model was able to generate very vague but identifiable reconstructions of some of the images the volunteers saw.
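To get a feel for the general shape of such a decoding pipeline, here is a minimal sketch in Python. It is emphatically not the Gallant lab’s actual method: it simply learns a linear mapping from synthetic fMRI voxel activations to a low-dimensional image representation, then applies that mapping to unseen recordings. Every name and dimension here is illustrative.

    # Illustrative sketch only -- not the actual method from the 2011 paper.
    # Learns a linear map from simulated fMRI voxel activity to image
    # features, then "reconstructs" features for unseen brain recordings.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    n_train, n_test = 1000, 50   # number of fMRI scans (hypothetical)
    n_voxels = 5000              # flattened voxel activations per scan
    n_features = 128             # low-dimensional image representation

    # Hidden "true" relationship between brain activity and image features,
    # used here only to generate plausible synthetic training data.
    true_map = rng.normal(size=(n_voxels, n_features))

    brain_train = rng.normal(size=(n_train, n_voxels))
    image_train = brain_train @ true_map + rng.normal(scale=5.0, size=(n_train, n_features))

    # Step 1: fit a regularised linear decoder on (brain, image) pairs.
    decoder = Ridge(alpha=1.0)
    decoder.fit(brain_train, image_train)

    # Step 2: feed brain activity from *new* viewing sessions into it.
    brain_test = rng.normal(size=(n_test, n_voxels))
    reconstructed = decoder.predict(brain_test)
    print(reconstructed.shape)  # (50, 128): one rough estimate per scan

The real study was far more sophisticated, but the two-phase structure is the same: learn the brain-to-image mapping on one set of viewings, then invert new brain activity through it.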

In 2021, Chile’s Senate approved a bill to amend the constitution to protect “neuro-rights,” or brain rights, making Chile the first country in the world to enshrine neuro-rights in its constitution. But did the South American country jump in too soon?

Guido Girardi, a former Chilean senator who played an important role in the legislation, compared neurotechnology to something else lawmakers might have been slow to respond to — social media. Chile did not want to be late again. Neurotechnology, when it spreads more widely, may have greater implications for society than social media. The argument here is that it might be wise to get ahead of the technology for once.

But going early can also have its drawbacks, especially when we are not quite sure what the technology will be capable of in the future.

“It’s quite difficult to regulate now, and the reason for that is that it’s not entirely clear what the most widespread applications will be. On the one hand, you can’t wait too long, because then the technology takes off too quickly. There will be problems and nobody will have thought about them and it will be too late. On the other hand, going too soon can create its own problems,” Allan McCay, a prominent neurorights advocate, told indianexpress.com. McCay is Deputy Director of the Sydney Institute of Criminology and an Academic Fellow at the University of Sydney’s Law School.

According to McCay, legal systems around the world must strike a delicate balance. They must not let the horse bolt and then try to close the stable door afterwards. But on the flip side, they shouldn’t regulate the technology so tightly that they destroy its chances of doing any good. And neurotechnology has great potential to do good.


From paralysis to opportunity and back again

Ian Burkhart suffered a spinal cord injury when he was 19, leaving him a quadriplegic, unable to move his legs or arms. In 2014, he signed up for a groundbreaking trial testing a brain-computer interface designed to control muscle stimulation. He received an implant in his brain that transmitted movement signals to a sleeve of electrodes worn on his arm. It meant he could move his fingers just by thinking about it.
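In broad strokes, a system like this senses brain activity, decodes the intended movement, and drives the matching stimulation pattern in the electrode sleeve. The following hypothetical sketch shows that control loop; none of the function names, channel counts or signal formats come from the actual trial.

    # Hypothetical brain-to-muscle control loop. Every function is a
    # stand-in; the trial's real decoding software is not public in
    # this form.
    import random
    import time

    GESTURES = ["rest", "open_hand", "close_hand", "pinch"]

    def read_neural_window():
        """Pretend to sample a short window of signals from the implant."""
        return [random.gauss(0.0, 1.0) for _ in range(96)]  # e.g. 96 channels

    def decode_intent(window):
        """Stand-in for a trained classifier mapping activity to a gesture."""
        return random.choice(GESTURES)

    def stimulate_sleeve(gesture):
        """Stand-in for driving the sleeve's electrical stimulation pattern."""
        print(f"stimulating forearm muscles for: {gesture}")

    for _ in range(50):                  # run the loop for a short demo
        window = read_neural_window()    # 1. sense brain activity
        gesture = decode_intent(window)  # 2. decode the intended movement
        if gesture != "rest":
            stimulate_sleeve(gesture)    # 3. electrically drive the muscles
        time.sleep(0.1)                  # ~10 Hz update rate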

But he eventually had to have the device removed in 2021, long after the trial had ended. “When I first had my spinal cord injury, everyone said, ‘You’ll never be able to move anything from your shoulders down again,’” Burkhart told MIT Technology Review. “I was able to recover that function and then lose it again. It was tough.”

Therapeutic use is just one of the potentially positive applications of brain-computer interfaces and other neurotechnology. Many companies are working on technology that can help treat conditions ranging from paralysis to epilepsy. California-based NeuroPace, for example, has an FDA-approved epilepsy device. Its RNS (responsive neurostimulation) device detects unusual electrical activity in the brain and responds with electrical impulses to stop an epileptic seizure before it happens.
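The closed-loop idea behind responsive neurostimulation can be caricatured in a few lines of Python. The sketch below is purely illustrative, with an invented detection score and threshold; it is not NeuroPace’s actual algorithm. It only shows the sense-detect-respond cycle such implants run continuously.

    # Illustrative closed-loop responsive-stimulation logic. The signal
    # source, feature and threshold are invented for this example.
    import math
    import random

    THRESHOLD = 2.5  # hypothetical cut-off for "abnormal" activity

    def read_ecog_window():
        """Pretend to read a short window of cortical activity."""
        abnormal = random.random() < 0.05   # occasionally simulate a spike
        scale = 4.0 if abnormal else 1.0
        return [random.gauss(0.0, scale) for _ in range(256)]

    def activity_score(samples):
        """Toy feature: root-mean-square amplitude of the window."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def deliver_stimulation():
        """Stand-in for the brief counter-pulse the implant would deliver."""
        print("abnormal activity detected -> delivering stimulation")

    for _ in range(200):                        # bounded monitoring loop
        window = read_ecog_window()             # 1. sense brain activity
        if activity_score(window) > THRESHOLD:  # 2. detect abnormal pattern
            deliver_stimulation()               # 3. respond before a seizure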

The danger to guard against

Neurotechnology is promising and the possibilities it offers are truly astounding. To crack down on it too hard and risk taking away the wonders of this technology from people like Burkhart would be cruel. However, the technology presents some human rights challenges.

McCay is particularly concerned about the potential misuse of technology in the criminal justice system. Remember, for example, the device that can treat epilepsy? Imagine if someone used similar neurotechnology to develop a device aimed at preventing convicted criminals from committing crimes by “predicting criminal behavior” and applying some form of stimulation to the brain.

The possibility of someone developing such dystopian technology exists outside of Black Mirror episodes. Massachusetts-based company Brainwave Science already advertises that its “iCognative” product can “reveal hidden information in a suspect’s mind.” This is a passage from their website: “This cutting-edge technology can reveal a person’s plans and intentions, as well as all past actions related to national security, criminal activities such as fraud and corporate theft, giving an investigative and intelligence-gathering edge as never before.”

Also, many parts of the world already use technologies such as electronic ankle bracelets to limit the mobility of prisoners. Once the technology is available, it is not a far-fetched leap to imagine that criminal justice systems around the world will attempt to monitor the minds of convicted criminals using neurotechnology.

But neurotechnology will not remain confined to therapeutic use (and eventual criminal justice use) for long. It seems all but inevitable that brain-computer interfaces will become mass-market devices. Elon Musk has previously said that the goal of Neuralink is to “merge humans with AI.”

In short, a technology that can read and maybe even change your thoughts could find its way into therapeutic medicine, the criminal justice system, and the world at large.


Privacy and transparency

It is still difficult to predict exactly what the future of neurotechnology will look like, but many have an idea of what neurorights should be based on. McCay believes that neurorights should ensure both privacy and transparency—privacy for the user and transparency about how the technology works.

“Neurotech is increasingly a subset of artificial intelligence, almost. Kind of like humans merging with AI. There have already been extensive discussions about AI ethics and questions about the opacity of black box systems and how things like bias can creep in,” McCay explained.

A study published in October by Stanford HAI (Human-Centered Artificial Intelligence) found that foundation models like those built by OpenAI, Google, Meta and others are becoming less and less transparent. This lack of transparency is nothing new in the tech industry. From opaque content moderation systems on social media platforms to misleading ads to murky pay practices in aggregator apps, transparency issues have long been a mainstay of tech companies. And soon, some of these companies will be able to read your brain and maybe even control it.

Neuro-rights in India

The concerns are real. The future can be scary. But for now, no new legislation may be needed to control or prevent some of the dangerous scenarios we touched upon. Technically speaking, some existing legal provisions in India already protect citizens from certain neurotechnology hazards.


“Following the Supreme Court’s decisions in Puttaswamy (2017) and Selvi (2010), it is fair to say that the right to privacy in one’s thoughts is protected under the Indian Constitution. In the Selvi case, the court specifically found that a coercive intervention in a person’s mental processes is a violation of freedom. It held that drug testing, polygraph examination and similar techniques could not be administered by force, as this would violate individuals’ right to privacy, and specifically in the criminal context, their right against self-incrimination,” technology lawyer Jaideep Reddy told indianexpress.com in an email interview.

According to Reddy, who focuses on the interaction between law and disruptive technologies, the Digital Personal Data Protection Act of 2023 could also play an important role in helping to manage such technology once it comes into force. “Under this law, personal data can generally only be collected or processed based on consent or other specifically permitted legitimate use. While the state is given quite a lot of leeway under this law, any neural interference by private parties would be subject to multiple safeguards,” Reddy added.

Perhaps existing legislation and the country’s common law system can protect Indian citizens from the dangers posed by neurotechnology. But even if that is the case, it is important that all stakeholders, namely citizens, regulators and lawmakers, have a conversation about the technology and whether our laws are enough to control it.