Brain has an auto-correct feature for sounds, scientists discover

- Staff Reporter reporters@khaleejtimes.com

Abu Dhabi — Our brains have an “auto-correct” feature that we deploy when re-interpreting ambiguous sounds, a team of scientists has discovered. The team’s findings, which appeared in the Journal of Neuroscience, pointed to new ways we use information and context to aid in speech comprehension.

“What a person thinks they hear does not always match the actual signals that reach the ear,” explained Laura Gwilliams, a doctoral candidate in NYU’s Department of Psychology, a researcher at the Neuroscience of Language Lab at NYU Abu Dhabi, and the paper’s lead author. This is because, the results suggest, the brain re-evaluates its interpretation of a speech sound at the moment it is heard, updating that interpretation as necessary. “Remarkably, our hearing can be affected by context occurring up to one second later, without the listener ever being aware of this altered perception.”

“For example, an ambiguous initial sound, such as ‘b’ or ‘p,’ is heard one way or another depending on whether it occurs in the word ‘parakeet’ or ‘barricade,’” added Alec Marantz, principal investigator of the project, professor in NYU’s departments of linguistics and psychology, and co-director of NYU Abu Dhabi’s Neuroscience of Language Lab, where the research was conducted. “This happens without conscious awareness of the ambiguity, even though the disambiguating information doesn’t come until the middle of the third syllable.”

For examples of these stimuli, please visit this link: http://lauragwilliams.github.io/postdiction_stimuli.

The study — the first to unveil how the brain uses information gathered after an initial sound is detected to aid speech comprehension — also included David Poeppel, a professor of psychology and neural science, and Tal Linzen, an assistant professor in Johns Hopkins University’s department of cognitive science.

It’s well known that the perception of a speech sound is determined by its surrounding context — in the form of words, sentences and other speech sounds. In many instances, this contextual information is heard later than the initial sensory input.

This plays out in everyday life — when we talk, the actual speech we produce is often ambiguous. For example, when a friend says she has a “dent” in her car, you may hear “tent.” Although this kind of ambiguity happens regularly, we, as listeners, are hardly aware of it.

“This is because the brain automatically resolves the ambiguity for us — it picks an interpretation and that’s what we perceive to hear,” explained Gwilliams. “The way the brain does this is by using the surrounding context to narrow down the possibilities of what the speaker may mean.”

In the Journal of Neuroscience study, the researchers sought to understand how the brain uses this subsequent information to modify our perception of what we initially heard.

To do this, they conducted a series of experiments in which the subjects listened to isolated syllables and similar-sounding words (e.g., barricade, parakeet). To gauge the subjects’ brain activity, the scientists deployed magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents produced by our brain.

“What is interesting is the fact that this context can occur after the sounds being interpreted and still be used to alter how the sound is perceived,” Gwilliams added.

For example, the same sound will be perceived as “k” at the onset of “kiss” and “g” at the onset of “gift,” even though the difference between the words (“ss” vs. “ft”) comes after the ambiguous sound.

“Specifically, we found that the auditory system actively maintains the acoustic signal in auditory cortex, while concurrently making guesses about the identity of the words being said,” said Gwilliams. “Such a processing strategy allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimize hearing mistakes.”

The study, conducted by a team of researchers including those from NYU Abu Dhabi, is the first to unveil how the brain uses information gathered after an initial sound is detected to aid speech comprehension.
