WEDNESDAY, May 15, 2019 (HealthDay News) -- Chances are if you're over 60 it's already happened to you: You're in a crowded room and finding it tough to understand what your partner is saying a couple of feet away.
It's a longstanding hearing-loss issue known as the "cocktail party" problem. Conventional hearing aids still aren't able to fix it -- to separate out the talk you do want to hear from the background chatter you don't.
But scientists may be developing a device that can do just that.
The device would rely on an emerging technology called "auditory attention decoding" (AAD). AAD cracks the cocktail party problem by simultaneously monitoring a person's brainwaves and the sound around them.
With that data in place, the new hearing device would triangulate which voice or sound the person is focused on -- and then give it an extra sonic boost.
"The cocktail party problem refers to a hearing condition where there is more than one speaker talking at the same time," explained Nima Mesgarani, who led a group that published their new findings May 15 in Science Advances.
"Because hearing-impaired listeners have reduced sensitivity to different frequencies, they are not able to pick out the right voice," explained Mesgarani. He's associate professor of electrical engineering with the Zuckerman Mind Brain Behavior Institute, part of Columbia University in New York City.
Conventional hearing aids -- which simply raise overall sound levels -- don't help much in a crowded room.
"Increasing the volume doesn't help hearing-impaired listeners, because it amplifies everyone, and not just the 'target speaker,'" Mesgarani said.
AAD works differently.
"[It] works by first automatically separating the sound sources in the acoustic environment," he said. "The separated sounds are then compared to the brain waves of a listener. And the source that is most similar is chosen and amplified relative to other speakers to assist the listener."
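The decode-and-boost loop Mesgarani describes can be sketched in a few lines. This is a hypothetical simplification, not the team's actual method: it assumes the sound sources have already been separated, and it stands in a crude amplitude envelope for the speech representation that a real system would reconstruct from EEG or ECoG recordings.

```python
import numpy as np

def decode_attention(sources, neural_signal):
    """Pick the separated source whose amplitude envelope best
    matches the listener's neural signal (a toy stand-in for
    auditory attention decoding)."""
    correlations = []
    for src in sources:
        envelope = np.abs(src)  # crude amplitude envelope
        r = np.corrcoef(envelope, neural_signal)[0, 1]
        correlations.append(r)
    return int(np.argmax(correlations))

def remix(sources, attended_idx, boost=4.0):
    """Amplify the attended source relative to the others."""
    gains = [boost if i == attended_idx else 1.0
             for i in range(len(sources))]
    return sum(g * s for g, s in zip(gains, sources))

# Toy demo: two "speakers" with different carriers and modulation
# rates; the listener's neural signal tracks speaker 1's envelope.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
speaker0 = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
speaker1 = np.sin(2 * np.pi * 330 * t) * (1 + np.sin(2 * np.pi * 5 * t))
neural = np.abs(speaker1) + 0.1 * rng.standard_normal(t.size)

idx = decode_attention([speaker0, speaker1], neural)
mix = remix([speaker0, speaker1], idx)
print(idx)  # index of the source judged most similar to the brain signal
```

In this toy setup the decoder correctly picks speaker 1, whose envelope the synthetic "neural" signal follows, and the remix step boosts that voice fourfold over the competing one.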
But this research is still in its early stages, so crowd-addled seniors shouldn't expect to order the technology anytime soon.
For the moment, the technology requires an invasive surgical procedure and isn't portable. Any practical application is at least five to 10 years off, Mesgarani said.
Still, the research illustrates yet again the amazing versatility of the human brain. As Mesgarani noted, neural networks in the brain's hearing center are remarkably adept at pinpointing which voice a person wants to pay attention to, even with lots of competing noise.
Digging deeper into that phenomenon, the Columbia team enlisted a group of people with epilepsy (who were already undergoing surgical care) to listen to several speakers talking at once. None of the patients had hearing difficulties.
By means of electrodes directly implanted into their brains, researchers were then able to monitor how brain waves responded to the various sounds. That data was fed into a computer, which quickly learned to automatically raise the volume of the "target" speaker's voice.
Preliminary results suggest that the technology does work as intended. But to date, testing has been confined to a controlled indoor setting, and it remains to be seen whether it would work as well among those with actual hearing impairment, the researchers said.
And, of course, it will take time to convert the technology into something that could be worn as an external hearing aid.
Tricia Ashby-Scabis is director of audiology practices with the American Speech-Language-Hearing Association, in Rockville, Md. She reviewed the new study and said the work "sounds highly promising."
"Artificial intelligence certainly sounds like a great option in terms of focused listening and setting precedence on which speaker the listener wants to hear," Ashby-Scabis said.
But questions remain.
"The difficulty is, communication is dynamic," said Ashby-Scabis. "It is ever-changing. People jump in and out of conversations, and that is a lot of processing for a device to do, and a lot of knowledge it needs to have. I am surprised if this is something we are close to having researchers solving [or] developing, but it is certainly a promising area to be studying."
This video, supplied by Mesgarani, illustrates how the technology mimics the brain, separating out preferred speech:
Copyright © 2019 HealthDay. All rights reserved.
SOURCES: Nima Mesgarani, Ph.D., associate professor of electrical engineering, Zuckerman Mind Brain Behavior Institute, Columbia University, New York City; Tricia Ashby-Scabis, Au.D., CCC-A, director, audiology practices, American Speech-Language-Hearing Association, Rockville, Md.; May 15, 2019, Science Advances