- University of Washington researchers unveil “semantic hearing” technology for noise-canceling headphones.
- Deep-learning algorithms empower users to pick and filter sounds in real time.
- The system, presented at UIST ’23, allows users to choose from 20 sound classes through voice commands or a smartphone app.
Researchers at the University of Washington have introduced a new approach to AI noise-canceling headphones that lets wearers curate their auditory experience in real time. Termed “semantic hearing,” the system uses deep-learning algorithms to give users the ability to select specific sounds from a diverse range of options, from sirens and baby cries to speech and bird chirps.
The birth of semantic hearing – A game-changer in audio technology
In pursuit of more refined noise-canceling technology, the University of Washington team has unveiled the concept of “semantic hearing.” Unlike conventional noise-canceling headphones, which indiscriminately block out environmental sounds, this system gives users control over their auditory environment. The headphones stream captured audio to a connected smartphone, whose algorithms first cancel out all ambient noise.
Users can then, via voice commands or a dedicated smartphone app, pick the sounds they want to hear from a pool of 20 distinct classes. From the wail of sirens to the chirping of birds, wearers can tailor their auditory landscape to their preferences. Only the selected sounds are played back through the headphones, offering a personalized and immersive audio experience.
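The selection step described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual pipeline: it assumes a separation model has already split a captured audio frame into one track per sound class, and the class names shown are a hypothetical subset of the 20 classes.

```python
import numpy as np

def mix_selected(separated_tracks: dict, selected: set) -> np.ndarray:
    """Sum only the tracks for classes the wearer opted to hear.

    separated_tracks maps class name -> audio samples for one frame
    (assumed already produced by a source-separation model).
    """
    frame_len = len(next(iter(separated_tracks.values())))
    out = np.zeros(frame_len)
    for cls, track in separated_tracks.items():
        if cls in selected:  # unselected classes are simply dropped
            out += track
    return out

# Example: a 4-sample frame per class; the wearer keeps sirens and bird chirps.
tracks = {
    "siren": np.array([0.1, 0.2, 0.1, 0.0]),
    "speech": np.array([0.3, 0.3, 0.3, 0.3]),
    "bird_chirp": np.array([0.0, 0.1, 0.0, 0.1]),
}
playback = mix_selected(tracks, {"siren", "bird_chirp"})
```

The hard part of the real system is the separation model itself; once per-class tracks exist, re-mixing the chosen subset is straightforward, as above.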
Senior author Shyam Gollakota, a professor in the Paul G. Allen School of Computer Science & Engineering, underscores the difficulty of achieving this kind of real-time intelligence: discerning specific sounds from the environment is something contemporary noise-canceling headphones cannot do. The challenge lies in keeping the sounds in sync with wearers’ visual perception, which demands neural algorithms that process audio in under a hundredth of a second.
Given this time constraint, the semantic hearing system processes sounds on a connected smartphone rather than on more powerful cloud servers. It must also preserve the delays and spatial cues associated with sounds arriving from different directions, so that wearers can still meaningfully perceive their environment.
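The sub-hundredth-of-a-second requirement is a standard real-time audio constraint: each frame must be processed before the next one finishes arriving. A rough budget check can illustrate the arithmetic (frame size and sample rate here are assumptions for illustration, not the paper's actual parameters).

```python
BUDGET_S = 0.01  # the article's "under a hundredth of a second" bound

def fits_realtime_budget(frame_samples: int, sample_rate: int,
                         proc_time_s: float) -> bool:
    """True if model processing finishes both within the stated budget
    and before the next frame of audio has been fully captured."""
    frame_duration_s = frame_samples / sample_rate
    return proc_time_s <= min(frame_duration_s, BUDGET_S)

# A hypothetical 128-sample frame at 16 kHz lasts 8 ms, so a 6 ms
# inference pass meets the budget, while a 12 ms pass does not.
ok = fits_realtime_budget(128, 16000, 0.006)
too_slow = fits_realtime_budget(128, 16000, 0.012)
```

This is why on-device inference matters: a round trip to a cloud server would routinely exceed the budget on network latency alone.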
Tested in various settings such as offices, streets, and parks, the system has demonstrated its prowess in extracting target sounds like sirens, bird chirps, and alarms, while effectively eliminating all other unwanted noises. Participant feedback reveals an overall improvement in sound quality compared to the original recordings, signifying a promising leap forward in noise-canceling technology.
Refining the semantic hearing experience
While the semantic hearing system marks a significant leap in audio technology, the researchers acknowledge challenges in distinguishing between sounds that share similar properties, such as vocal music and human speech. Instances where the system struggled highlight the need for further training on diverse real-world data to enhance its capabilities.
Co-authors, including UW doctoral students Bandhav Veluri and Malek Itani, as well as Justin Chan from Carnegie Mellon University and Takuya Yoshioka from AssemblyAI, collaborated on the project, bringing diverse expertise to the table.
As the research heads towards a commercial release, the team envisions a future where users can seamlessly integrate semantic hearing into their daily lives. The ability to fine-tune one’s auditory experience holds promise not only for leisure but also for individuals with specific auditory needs, such as those with hearing impairments or sensory sensitivities.
In a world where personalization is key, the advent of semantic hearing in noise-canceling headphones raises intriguing possibilities. How might this technology evolve beyond consumer applications? Could it find applications in addressing specific auditory challenges or enhance the auditory experiences of those with unique needs? As we step into the future of audio technology, the ability to sculpt our sonic environment beckons a new era in personalized listening.