The brain implants that could change humanity
BRAINS ARE TALKING TO COMPUTERS, AND COMPUTERS TO BRAINS. ARE OUR DAYDREAMS SAFE?
Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr Gallant worked for years to improve our understanding of how brains encode information - what regions become active, for example, when a person sees a plane or an apple or a dog - and how that activity represents the object being viewed.
By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up - a human face, say, or a cat. But Dr Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)
One day, Dr Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualise, solely from brain activity, what a person was seeing.
The first phase of the project was to train the AI. For hours, Dr Gallant and his colleagues showed volunteers in fMRI machines movie clips.
By matching the moving images with the patterns of brain activation they prompted, the AI built a model of how the volunteers’ visual cortex worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what it thought they might be looking at.
The experiment focused just on a subsection of the visual cortex. It didn’t capture what was happening elsewhere in the brain - how a person might feel about what she was seeing, for example, or what she might be fantasising about as she watched. The endeavour was, in Dr Gallant’s words, a primitive proof of concept.
And yet the results, published in 2011, are remarkable.
The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people - a machine that can read the brain.
Dr Gallant was thrilled. Imagine the possibilities once better brain-reading technology became available.
Imagine the people suffering from locked-in syndrome or Lou Gehrig’s disease, or the people incapacitated by strokes, who could benefit from a machine that helped them interact with the world.
He was also scared, because the experiment showed that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Dr Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people’s memories?
“That’s a real sobering thought that now you have to take seriously,” he said.
The ‘Google Cap’
For decades, we’ve communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens.
The next step, one that scientists around the world are pursuing, is technology that allows people to control computers - and everything connected to them, including cars, robotic arms and drones - merely by thinking.
Dr Gallant jokingly calls the imagined piece of hardware that would do this a “Google cap”: a hat that could sense silent commands and prompt computers to respond accordingly.
The problem is that, to work, that cap would need to be able to see, with some detail, what’s happening in the nearly 100 billion neurons that make up the brain.
Technology that can easily peer through the skull, like the MRI machine, is far too unwieldy to mount on your head. Less bulky technology, like electroencephalography, or EEG, which measures the brain’s electrical activity through electrodes attached to the scalp, doesn’t provide nearly the same clarity. One scientist compares it to looking for the surface ripples made by a fish swimming underwater while a storm roils the lake.
Other methods of “seeing” into the brain might include magnetoencephalography, or MEG, or the use of infrared light.
What the future holds
What technologies will power the brain-computer interface of the future is still unclear. And if it’s unclear how we’ll “read” the brain, it’s even less clear how we’ll “write” to it.
Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk, and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says.
“Forget about the Covid crisis,” Dr Yuste told me. “What’s coming with this new tech can change humanity.”
Who knows how soon versions of this technology will be available for kids who want to think-move avatars in video games or think-surf the web. People can already fly drones with their brain signals, so maybe crude consumer versions will appear in coming years.
What if every time your mind wandered off while writing an article, you could, with the aid of your concentration implant, prod it back to the task at hand, finally completing those life-changing projects you’ve never gotten around to finishing?
These applications remain fantasies, of course. But the mere fact that such a thing may be possible is partly what prompts Dr Yuste to worry about how this technology could blur the boundaries of what we consider to be our personalities.