University of Washington’s Proactive Hearing AI Headphones Quiet Crowds Without a Word

Photo credit: University of Washington
Researchers at the University of Washington have taken on the age-old challenge of hearing the person directly in front of you in a noisy room with a new pair of AI headphones. This is essentially the "cocktail party problem," and it presents a significant challenge for anyone with hearing loss. Noise-cancelling headphones are frequently ineffective because they either block out everything around you or let everything through in "transparency mode," neither of which helps you follow a specific conversation.
However, lead doctoral student Guilin Hu and professor Shyam Gollakota thought they could do better. Their team has developed a method for modified over-ear headphones that activates automatically when you start talking. It picks up on the natural rhythm of conversation and can work out who has joined the exchange within two to four seconds, whether one other person is involved or four.
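The exact detection logic isn't spelled out here, but the behaviour described above can be sketched roughly in code: the system stays idle until it hears the wearer's own voice, then spends a few seconds watching who takes turns speaking with them. Everything below (function names, thresholds, frame sizes) is an illustrative placeholder, not the team's actual implementation.

```python
import numpy as np

FRAME_SEC = 0.5          # analysis hop in seconds (illustrative)
ENROLL_WINDOW_SEC = 4.0  # the two-to-four-second decision window described above

def wearer_is_speaking(frame: np.ndarray) -> bool:
    """Placeholder self-voice detector: a real system would use binaural or
    bone-conduction cues; here we simply threshold frame energy."""
    return float(np.mean(frame ** 2)) > 0.01

def other_speakers(frame: np.ndarray) -> set:
    """Placeholder diarization step returning IDs of people talking in this frame.
    A real model would embed and cluster voices; this stub returns nothing."""
    return set()

def enroll_conversation_partners(frames) -> set:
    """Stay idle until the wearer talks, then watch a few seconds of turn-taking
    and keep the speakers who reply while the wearer is silent."""
    partners = set()
    active, elapsed = False, 0.0
    for frame in frames:
        if not active:
            active = wearer_is_speaking(frame)  # conversation starts with the wearer
            continue
        elapsed += FRAME_SEC
        if not wearer_is_speaking(frame):
            partners |= other_speakers(frame)   # voices taking turns with the wearer
        if elapsed >= ENROLL_WINDOW_SEC:
            break                               # decision made within roughly 2-4 s
    return partners
```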
The software then isolates those voices, cleans them up, and mutes all the others. It all happens on the headphones themselves, so there is no perceptible delay or lag in the audio, and in tests with 11 participants it proved quite effective. Listeners rated the clarity and noise suppression as more than twice as good as the unprocessed sound, and even when people talked over each other or someone jumped in late, the headphones still kept up.
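As a rough, hypothetical illustration of that on-device step: once the conversation partners are known, each short audio frame can be passed through a separation model conditioned on their voices, and the remainder attenuated before playback. The `extract_targets` stub below stands in for the team's real neural network; their open-source release defines the actual model.

```python
import numpy as np

FRAME = 256  # samples per hop; small frames keep playback latency low

def extract_targets(frame: np.ndarray, profiles: list) -> np.ndarray:
    """Stand-in for the neural target-speech extraction step: given the enrolled
    partners' voice profiles, return only their speech from this frame."""
    return np.zeros_like(frame)  # stub; not the team's actual model

def stream(mic_frames, profiles, suppression_db: float = -40.0):
    """Frame-by-frame loop: keep the enrolled voices, attenuate everything else,
    and yield audio ready for immediate playback."""
    residual_gain = 10 ** (suppression_db / 20.0)
    for frame in mic_frames:
        target = extract_targets(frame, profiles)
        background = frame - target
        yield target + residual_gain * background
```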
Previous attempts at this from the same lab required you to look at the person you were listening to and manually lock the software onto their voice, or to set a distance bubble around yourself. This version is far more clever: it can work out who you are listening to based solely on the pattern of the conversation, with no need for a brain implant or hand signals.
As it happens, the current prototypes use off-the-shelf noise-cancelling headphones fitted with binaural microphones and extra processing electronics. The team has already shown that this kind of technology can be packed onto a chip small enough to fit in a hearing aid.

Fortunately, the code is all open-source, so anyone can try building on and refining the system. So far the team has tested it with English, Mandarin, and Japanese conversations, and it is likely to be adaptable to other languages as well. Gollakota says he was surprised by how smoothly everything worked, even in the most chaotic real-world exchanges. Of course, the next stage will be to integrate the technology into earphones, smart glasses, or even hearing aids, turning even the most raucous gatherings into clear, easy-to-follow conversations.
[Source]