Baltimore Sun Sunday

Device taps brain waves to help paralyzed man communicate

By Lauran Neergaard

In a medical first, researchers harnessed the brain waves of a paralyzed man unable to speak — and turned what he intended to say into sentences on a computer screen.

It will take years of additional research, but the recently reported study marks an important step toward one day restoring more natural communication for people who can’t talk because of injury or illness.

“Most of us take for granted how easily we communicate through speech,” said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work. “It’s exciting to think we’re at the very beginning of a new chapter, a new field” to ease the devastation of patients who lost that ability.

People who can’t speak or write because of paralysis have very limited ways of communicating. For example, the man in the experiment, who was not identified to protect his privacy, uses a pointer attached to a baseball cap that lets him move his head to touch words or letters on a screen. Other devices can pick up patients’ eye movements. But it’s a frustratingly slow and limited substitution for speech.

In recent years, experiments with mind-controlled prosthetics have allowed paralyzed people to shake hands or take a drink using a robotic arm — they imagine moving and those brain signals are relayed through a computer to the artificial limb.

Chang’s team built on that work to develop a “speech neuroprosthetic” — decoding brain waves that normally control the vocal tract, the tiny muscle movements of the lips, jaw, tongue and larynx that form each consonant and vowel.

Volunteering to test the device was a man in his late 30s who 15 years ago suffered a brainstem stroke that caused widespread paralysis and robbed him of speech. The researchers implanted electrodes on the surface of the man’s brain, over the area that controls speech.

A computer analyzed the patterns when he attempted to say common words such as “water” or “good,” eventually becoming able to differentiate between 50 words that could generate more than 1,000 sentences.
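The study describes a far more sophisticated decoder, but the basic idea (a classifier learning to tell apart the pattern of brain activity behind each attempted word) can be sketched in a few lines. The toy example below is purely illustrative: the electrode count, the synthetic recordings and the logistic-regression classifier are all assumptions for the sketch, not the UCSF team’s methods.

```python
# Illustrative sketch only -- NOT the UCSF team's decoder. It imagines the
# core idea in miniature: each attempted word produces a pattern of electrode
# activity, and a classifier learns to tell those patterns apart.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

VOCAB = ["water", "good", "no", "thirsty", "very"]  # stand-ins for the 50-word set
N_ELECTRODES = 128                                  # hypothetical feature count
TRIALS_PER_WORD = 40

# Fake data: each word gets its own average electrode pattern, plus noise.
prototypes = rng.normal(size=(len(VOCAB), N_ELECTRODES))
X = np.vstack([
    prototypes[i] + 0.5 * rng.normal(size=(TRIALS_PER_WORD, N_ELECTRODES))
    for i in range(len(VOCAB))
])
y = np.repeat(np.arange(len(VOCAB)), TRIALS_PER_WORD)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple classifier to map activity patterns to word labels.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))

# Decoding one new attempt: the predicted label is the word shown on screen.
attempt = prototypes[0] + 0.5 * rng.normal(size=N_ELECTRODES)
print("decoded word:", VOCAB[clf.predict(attempt.reshape(1, -1))[0]])
```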

Prompted with such questions as “How are you today?” or “Are you thirsty?”, the device eventually enabled the man to answer “I am very good” or “No I am not thirsty” — not voicing the words but translating them into text, the team reported in the New England Journal of Medicine.

It takes about three to four seconds for the word to appear on the screen after the man tries to say it, said lead author David Moses, an engineer in Chang’s lab. That’s not nearly as fast as speaking but quicker than tapping out a response.

In an accompanying editorial, Harvard neurologists Leigh Hochberg and Sydney Cash called the work a “pioneering demonstration.” They suggested improvements but said if the technology pans out it eventually could help people with injuries, strokes or illnesses like Lou Gehrig’s disease whose “brains prepare messages for delivery but those messages are trapped.”

Chang’s lab has spent years mapping the brain activity that leads to speech.

How did the researchers know the device interpreted the man’s words correctly? They started by having him try to say specific sentences such as “Please bring my glasses,” rather than answering open-ended questions, until the machine translated his words accurately most of the time.

Next steps include ways to improve the device’s speed, accuracy and vocabulary size — and maybe allow a computer-generated voice rather than text on a screen — while testing additional volunteers.

Researcher David Moses works with a clinical trial participant who suffered a brainstem stroke 15 years ago to record brain activity. (TODD DUBNICOFF/UCSF 2020)
