Deep Learning Reinvents the Hearing Aid

In its March 2017 issue, IEEE Spectrum, the official monthly magazine of the Institute of Electrical and Electronics Engineers (IEEE), highlights, in its cover story, CSE Professor Leon Wang’s contribution to solving the cocktail party problem (posted at http://spectrum.ieee.org/consumer-electronics/audiovideo/deep-learning-reinvents-the-hearing-aid). With more than 420,000 members, IEEE is the largest technical professional organization in the world.

The cocktail party problem, or the problem of separating target speech from background interference, is the greatest challenge facing hearing aid wearers. Hearing loss is one of the most prevalent chronic conditions, affecting 37.5 million Americans and more than 10% of the world’s population. Although the cocktail party problem has been tackled for decades in signal processing and related fields, no system or algorithm has managed to help hearing-impaired listeners better understand speech in noisy environments.

Wang’s breakthrough was based on a completely new formulation of the speech separation problem. Drawing on his insights into the perceptual mechanisms underlying human analysis of the acoustic scene, Wang and his students formulated speech separation as a classification problem. This reformulation has a profound consequence: the cocktail party problem can be treated as a form of supervised learning. Furthermore, Wang’s group was the first to introduce deep learning to the field of speech separation and enhancement. With the powerful capacity of deep neural networks to learn from large amounts of training data, his team succeeded in substantially improving speech recognition in noisy backgrounds for listeners with hearing loss, as well as for listeners with normal hearing.
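To make the reformulation concrete, here is a minimal sketch of the general idea, not Prof. Wang’s actual system: a small network is trained to classify each time-frequency unit of a noisy spectrogram as speech-dominant or noise-dominant (an ideal binary mask), and the predicted mask is then applied to the mixture. All data, network sizes, and parameters below are synthetic stand-ins invented for illustration, and the sketch assumes NumPy and PyTorch are available.

```python
# Toy illustration only -- not the actual system described in the article.
# Speech separation cast as supervised classification: for every time-frequency
# (T-F) unit of a noisy spectrogram, predict whether speech dominates noise
# there (an "ideal binary mask"), then keep speech-dominant units and suppress
# the rest.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Synthetic stand-in spectrograms: "speech" concentrates energy in low bins,
# "noise" is spectrally flat broadband energy.
n_frames, n_bins = 5000, 64
freq_tilt = np.linspace(2.0, 0.2, n_bins)                 # low-frequency emphasis
speech = rng.gamma(2.0, 1.0, (n_frames, 1)) * freq_tilt   # per-frame speech level
noise = rng.gamma(2.0, 1.0, (n_frames, n_bins))
noisy = speech + noise                                    # the mixture the model sees

# Supervised target: 1 where speech energy exceeds noise energy in that T-F unit.
mask_target = (speech > noise).astype(np.float32)

X = torch.tensor(np.log1p(noisy), dtype=torch.float32)    # log-compressed features
y = torch.tensor(mask_target)

# Small feed-forward classifier; one logit per frequency bin of the current frame.
model = nn.Sequential(
    nn.Linear(n_bins, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_bins),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):                                  # full-batch training
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Apply the estimated binary mask to the noisy spectrogram.
with torch.no_grad():
    mask = (torch.sigmoid(model(X)) > 0.5).float()
    accuracy = (mask == y).float().mean().item()
enhanced = torch.tensor(noisy, dtype=torch.float32) * mask
print(f"training loss {loss.item():.3f}, mask accuracy {accuracy:.2f}")
```

In practice, systems of this kind use richer acoustic features, much deeper networks, and large corpora of real speech mixed with many noise types, but the underlying recipe is the same: learn a classifier that labels each time-frequency unit, then mask the mixture accordingly.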

Prof. Wang is a University Distinguished Scholar and Co-Editor-in-Chief of Neural Networks. He is also an IEEE Fellow.