Philippine Canadian Inquirer (National)

Our neurodata can reveal our most private selves. As brain implants become common, how will it be protected?

- BY CHRISTINA MAHER, University of Sydney

“Hello world!”

In December 2021, these were the first words tweeted by a paralysed man using only his thoughts and a brain-computer interface (BCI) implanted by the company Synchron.

For millions living with paralysis, epilepsy and neuromuscular conditions, BCIs offer restored movement and, more recently, thought-to-text capabilities.

So far, few invasive (implanted) versions of the technology have been commercialised. But a number of companies are determined to change this.

Synchron is joined by Elon Musk’s Neuralink, which has documented a monkey playing the computer game Pong using its BCI – as well as the newer Precision Neuroscience, which recently raised US$41 million towards building a reversible implant thinner than a human hair.

Eventually, BCIs will allow people to carry out a range of tasks using their thoughts. But is this terrific, or terrifying?

How do BCIs work?

BCIs can be non-invasive (wearable) or invasive (implanted). Electrical activity is the most commonly captured “neurodata”, with invasive BCIs providing better signal quality than non-invasive ones.

The functionality of most BCIs can be summarised as passive, active and reactive. All BCIs use signal processing to filter brain signals. After processing, active and reactive BCIs can return outputs in response to a user’s voluntary brain activity.

Signals from specific brain regions are considered a combination of many tiny signals from multiple regions. So BCIs use pattern recognition algorithms to decipher a signal’s potential origins and link it to an intentional event, such as a task or thought.
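For the technically minded, the sketch below illustrates this two-step idea – filter, then classify – in Python. It runs on synthetic data, and the filter band, features and classifier are illustrative assumptions, not any company’s actual decoding method.

```python
# A minimal, hypothetical BCI decoding pipeline:
# 1) band-pass filter raw signals, 2) extract band-power features,
# 3) train a classifier to map feature patterns to an intended event.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 256                                    # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 120, 8, fs  # 1-second trials

# Synthetic "neurodata": noise, with extra 12 Hz rhythm added to half
# the trials to stand in for a voluntary event (e.g. imagined movement).
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)            # 0 = rest, 1 = intended event
t = np.arange(n_samples) / fs
X_raw[y == 1] += 0.8 * np.sin(2 * np.pi * 12 * t)

# Step 1: signal processing - keep only the 8-30 Hz band where the
# pattern of interest lives, discarding drift and high-frequency noise.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X_raw, axis=-1)

# Step 2: features - log band power per channel, one row per trial.
features = np.log(np.mean(X_filt ** 2, axis=-1))

# Step 3: pattern recognition - learn which feature patterns correspond
# to the intentional event, then decode unseen trials.
split = n_trials // 2
clf = LinearDiscriminantAnalysis().fit(features[:split], y[:split])
print("held-out decoding accuracy:", clf.score(features[split:], y[split:]))
```

Real systems differ mainly in scale: far more channels, far more training data, and far more sophisticated pattern recognition.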

One of the first implanted BCIs treated drug-resistant seizures in some of the 50 million people with epilepsy. And ongoing clinical trials signal a new era for neurologically and physically impaired people.

Outside the clinical realm, however, neurodata exist in a largely unregulated space.

An unknown middleman

In human interaction, thoughts are interpreted by the person experiencing and communicating them, and separately by the person receiving the communication. In this sense, allowing algorithms to interpret our thoughts could be likened to another entity “speaking” for us.

This could raise issues in a future where thought-to-text is widespread. For example, a BCI may generate the output “I’m good”, when the user intended it to be “I’m great”. These are similar, but they aren’t the same. It’s easy enough for an able-bodied person to physically correct the mistake – but for people who can only communicate through BCIs, there’s a risk of being misinterpreted.

Moreover, implanted BCIs can provide rich access to all brain signals; there is no option to pick and choose which signals are shared.

Brain data are arguably our most private data because of what can be inferred regarding our identity and mental state. Yet private BCI companies may not need to inform users about what data are used to train algorithms, or how the data are linked to interpretations that lead to outputs.

In Australia, strict data storage rules require that all BCI-related patient data are stored on secure servers in a de-identified form, which helps protect patient privacy. But requirements outside of a research context are unclear.
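As a loose illustration of what de-identification can involve, here is a hypothetical Python sketch; the record fields and the salted-hash pseudonym scheme are assumptions for demonstration, not a description of how any actual research server operates.

```python
# Hypothetical de-identification step applied before storage: direct
# identifiers are replaced with a one-way pseudonym, and name/contact
# details are dropped. Real systems follow formal standards and audits.
import hashlib, os, json

SALT = os.urandom(16)  # secret salt, kept separate from the stored data

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifiers pseudonymised."""
    pseudonym = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    return {
        "subject": pseudonym[:12],    # stable but non-reversible ID
        "signals": record["signals"],  # neurodata retained for research
        # name, date of birth, address etc. are dropped entirely
    }

record = {"patient_id": "PATIENT-0042", "name": "Jane Doe",
          "signals": [0.12, -0.03, 0.08]}
print(json.dumps(deidentify(record), indent=2))
```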

What’s at risk if neurodata aren’t protected?

BCIs are unlikely to launch us into a dystopian world – in part due to current computational constraints. After all, there’s a leap between a BCI sending a short text and interpreting one’s entire stream of consciousness.

That said, making this leap largely comes down to how well we can train algorithms, which requires more data and computing power. The rise of quantum computing – whenever that may be – could provide these additional computational resources.

Cathy O’Neil’s 2016 book, Weapons of Math Destruction, highlights how algorithms that measure complex concepts such as human qualities could let predatory entities make important decisions for the most vulnerable people.

Here are some hypothetical worst-case scenarios.

1. Third-party companies might buy neurodata from BCI companies and use it to make decisions, such as whether someone is granted a loan or access to health care.

2. Courts might be allowed to order neuromonitoring of individuals with the potential to commit crimes, based on their previous history or socio-demographic environment.

3. BCIs specialised for “neuroenhancement” could be made a condition of employment, such as in the military. This would blur the boundaries between human reasoning and algorithmic influence.

4. As with all industries where data privacy is critical, there is a genuine risk of neurodata hacking, where cybercriminals access and exploit brain data.

Then there are subtler examples, including the potential for bias. In the future, bias may be introduced into BCI technologies in a number of ways, including through:

• the selection of homogeneous training data

• a lack of diversity among clinical trial participants (especially in control groups)

• a lack of diversity in the teams that design the algorithms and software.

If BCIs are to cater to diverse users, then diversity will need to be factored into every stage of development.

How can we protect neurodata?

The vision for “neurorights” is an evolving space. The ethical challenges lie in the balance between choosing what is best for individuals and what is best for society at large.

For instance, should individuals in the military be equipped with neuroenhancing devices so they can better serve their country and protect themselves on the front lines, or would that compromise their individual identity and privacy? And which legislation should capture neurorights: data protection law, health law, consumer law, or criminal law?

In a world first, Chile passed a neurorights law in 2021 to protect mental privacy, explicitly classifying mental data and brain activity as legally protected human rights. Though a step in the right direction, it remains unclear how such a law would be enforced.

One US-based patient group is taking matters into its own hands. The BCI Pioneers is an advocacy group ensuring the conversation around neuroethics is patient-led.

Other efforts include the Neurorights Foundation, and the proposal of a “technocratic oath” modelled on the Hippocratic oath taken by medical doctors. An International Organisation for Standardisation committee for BCI standards is also under way. ■

This article is republished from The Conversation under a Creative Commons license.
