Gulf News

The brain implants that could change humanity



Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr Gallant worked for years to improve our understanding of how brains encode information - what regions become active, for example, when a person sees a plane or an apple or a dog - and how that activity represents the object being viewed.

By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up - a human face, say, or a cat. But Dr Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)

One day, Dr Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualise, solely from brain activity, what a person was seeing.

The first phase of the project was to train the AI. For hours, Dr Gallant and his colleagues showed movie clips to volunteers lying in fMRI machines.

By matching the patterns of brain activation to the moving images that prompted them, the AI built a model of how the volunteers' visual cortex worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what it thought they might be looking at.

The experiment focused on just a subsection of the visual cortex. It didn't capture what was happening elsewhere in the brain - how a person might feel about what she was seeing, for example, or what she might be fantasising about as she watched. The endeavour was, in Dr Gallant's words, a primitive proof of concept.

And yet the results, published in 2011, are remarkable.

The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people - a machine that can read the brain.

Dr Gallant was thrilled. Imagine the possibilities when better brain-reading technology became available.

Imagine the people suffering from locked-in syndrome or Lou Gehrig's disease, or those incapacitated by strokes, who could benefit from a machine that helped them interact with the world.

He was also scared, because the experiment showed that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Dr Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people's memories?

“That’s a real sobering thought that now you have to take seriously,” he said.

The ‘Google Cap’

For decades, we've communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens.

The next step, one that scientists around the world are pursuing, is technology that allows people to control computers - and everything connected to them, including cars, robotic arms and drones - merely by thinking.

Dr Gallant jokingly calls the imagined piece of hardware that would do this a “Google cap”: a hat that could sense silent commands and prompt computers to respond accordingl­y.

The problem is that, to work, that cap would need to be able to see, with some detail, what’s happening in the nearly 100 billion neurons that make up the brain.

Technology that can easily peer through the skull, like the MRI machine, is far too unwieldy to mount on your head. Less bulky technology, like electroencephalography, or EEG, which measures the brain's electrical activity through electrodes attached to the scalp, doesn't provide nearly the same clarity. One scientist compares it to looking for the surface ripples made by a fish swimming underwater while a storm roils the lake.

Other methods of “seeing” into the brain might include magnetoencephalography, or MEG, or the use of infrared light.

What the future holds

What technologies will power the brain-computer interface of the future is still unclear. And if it’s unclear how we’ll “read” the brain, it’s even less clear how we’ll “write” to it.

Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk, and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says.

“Forget about the Covid crisis,” Dr Yuste told me. “What’s coming with this new tech can change humanity.”

Who knows how soon versions of this technology will be available for kids who want to think-move avatars in video games or think-surf the web. People can already fly drones with their brain signals, so maybe crude consumer versions will appear in coming years.

What if every time your mind wandered off while writing an article, you could, with the aid of your concentration implant, prod it back to the task at hand, finally completing those life-changing projects you’ve never gotten around to finishing?

These applications remain fantasies, of course. But the mere fact that such a thing may be possible is partly what prompts Dr Yuste to worry about how this technology could blur the boundaries of what we consider to be our personalities.


