Modern hearing aid technology is nearly as impressive as that of wireless devices; digital hearing aids today offer highly specialized sound processing, water-resistant or even waterproof nanotech coating, near invisibility, wireless connection to other devices, and even surgical implants that reflect sound to the eardrum. The possibilities seem endless, so what’s next?
Dr. Brent Edwards, an expert in audiology at the Starkey Hearing Research Center, has identified four fields of emerging innovation related to hearing aids: wireless technology, digital chip technology, hearing science, and cognitive science. In the future, a hearing aid's connectivity with other devices will improve significantly, as will methods of fitting a hearing aid to an individual wearer.
Wireless technology is everywhere, and its ubiquity has driven down both the cost of research and hearing aid prices. Not only is wireless everywhere, much of it transmits on the same standard: Bluetooth. Because so many consumer products now transmit wirelessly – phones, TVs, music players, computers – a simple receiver within the hearing aid allows a wearer to stay connected, in more ways than one. In the future, accessories will transmit audio from a mobile phone directly to a hearing aid and use a hands-free microphone to transmit the wearer's voice to the phone. A companion will be able to wear a small transmitter that sends his or her voice directly to the hearing aid, cutting down on background noise. It isn't hard to imagine a day when even public broadcast systems transmit over Bluetooth.
As the market for smaller, faster, more powerful gadgets grows, hearing aid technology can expect to benefit, specifically from improved user interface options. One area of research is creating wireless communication between two hearing aids so they operate like a pair of ears to recreate binaural perception. Wireless technology may even be able to use text-to-speech programs to send emails as audio to a hearing aid.
Our knowledge of how the ears and brain process sound has been waiting for technology to catch up. As DSPs improve, hearing aids will be able to filter sound in much the same way that ears do, using cochlear models that simulate the filtering and suppression of background noise and temporal-spectral models that help people make sense of complex environments. These algorithms would use physiological data from a specific impaired auditory nerve to tailor a device to an individual's distinct form of hearing loss, which can be identified through a hearing test administered by a qualified audiologist. The concept behind new DSP research is that hearing aids should restore psychoacoustic and physiological responses to as close to normal as possible by determining the difference between healthy and impaired models and applying that difference to the signal.
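The core idea of "applying the difference between healthy and impaired models" can be sketched in a few lines. This is a simplified, hypothetical illustration – the frequency bands and threshold values below are invented for the example, not clinical fitting data:

```python
# Sketch: derive per-band amplification as the difference between an
# impaired listener's hearing thresholds and normal thresholds (in dB).
# Bands and numbers are illustrative, not a real fitting prescription.

NORMAL_THRESHOLDS = {"low": 10, "mid": 10, "high": 15}    # dB HL, hypothetical
IMPAIRED_THRESHOLDS = {"low": 20, "mid": 45, "high": 70}  # dB HL, hypothetical

def prescribed_gains(normal, impaired):
    """Gain per band = impaired threshold minus normal threshold."""
    return {band: impaired[band] - normal[band] for band in normal}

gains = prescribed_gains(NORMAL_THRESHOLDS, IMPAIRED_THRESHOLDS)
# The "high" band gets the most boost (55 dB here), matching the
# common pattern of high-frequency hearing loss.
```

Real prescriptive formulas are far more sophisticated – they do not simply mirror the audiogram – but the principle of measuring the gap between healthy and impaired responses and compensating per band is the same.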
The Digital Horizon
Digital signal processors, or DSPs, have been used in hearing aids since the late 1980s, but only recently have they shown a significant improvement over analog processors. DSPs run algorithms responsible for multiband compression, feedback cancellation, noise reduction, and more. Currently, they are limited by the amount of power and memory available in a chip small enough to fit in a contemporary hearing aid; the technology exists for far more complex coding, but not yet in a workable size. For example, chips in modern hearing aids run at a few hundred MHz, compared to chips in other electronics that run at several thousand MHz.
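To give a flavor of what one of those algorithms does, a multiband compressor splits sound into frequency bands and reduces loud levels in each band independently. The sketch below works on band levels in decibels rather than on real audio, and its thresholds and ratios are invented for illustration:

```python
def compress_band(level_db, threshold_db, ratio):
    """Above the threshold, reduce level growth by the compression ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Per-band settings (hypothetical): (threshold in dB, compression ratio).
settings = {"low": (60, 2.0), "mid": (55, 3.0), "high": (50, 4.0)}
input_levels = {"low": 80, "mid": 85, "high": 90}  # loud input, in dB

output = {band: compress_band(level, *settings[band])
          for band, level in input_levels.items()}
# The "high" band is compressed hardest: 50 + (90 - 50) / 4 = 60 dB
```

A production DSP does this continuously on streaming audio with attack and release times, which is part of why the chip's speed and memory matter so much.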
The industry is developing ever smaller and more powerful DSPs for other applications, however. Mobile phones already run more sophisticated noise reduction algorithms than can fit on a hearing aid DSP. Adaptive, or intelligent, algorithms currently in use are another possibility for the future of hearing aids, allowing them to “learn” environments. This will obviously provide a better experience for the wearer, but a device that improves its function with use will also be easier and faster to fit.
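One simple way such "learning" can work is for the device to nudge its stored settings toward the user's manual adjustments in each listening environment. The toy sketch below assumes a hypothetical environment label and a made-up learning rate; real adaptive hearing aids use more elaborate classification and training schemes:

```python
class AdaptiveGain:
    """Toy sketch of a 'learning' hearing aid: it moves a stored gain
    toward the user's manual volume adjustments, per environment.
    Environment labels and the learning rate are illustrative."""

    def __init__(self, learning_rate=0.25):
        self.rate = learning_rate
        self.gain_db = {}  # environment label -> learned gain in dB

    def adjust(self, environment, user_gain_db):
        """Record a manual adjustment and move the learned gain toward it."""
        old = self.gain_db.get(environment, 0.0)
        self.gain_db[environment] = old + self.rate * (user_gain_db - old)

    def gain_for(self, environment):
        return self.gain_db.get(environment, 0.0)

aid = AdaptiveGain()
for _ in range(10):          # the user repeatedly turns it up in restaurants
    aid.adjust("restaurant", 6.0)
# The learned restaurant gain converges toward 6 dB, while
# other environments are unaffected.
```

The fitting benefit mentioned above follows directly: instead of the audiologist guessing every setting up front, the device gathers the user's preferences in daily life.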
A recent focus on the role of cognitive science may lead to new approaches to improving auditory function. Instead of simply trying to amplify sound, new models take into account how the brain processes auditory input when hearing is impaired. Understanding speech requires multiple, complex processes, and the effort increases with heightened background noise or hearing impairment. Thus, listening can be exhausting for a person with hearing loss. The human brain, much like a DSP, has limited processing capacity, and when it is taxed in one area (auditory processing), other areas (language comprehension) will suffer. New developments suggest that improving signal-to-noise ratios can reduce the processing burden, thereby easing the effort required to make sense of what is heard.
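Signal-to-noise ratio is simply the ratio of signal power to noise power, usually expressed in decibels. The numbers below are hypothetical, but they show why even a modest noise reduction matters:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# Hypothetical powers: halving the noise power buys about 3 dB of SNR,
# the kind of improvement that can noticeably ease listening effort.
before = snr_db(signal_power=4.0, noise_power=2.0)  # ~3.0 dB
after = snr_db(signal_power=4.0, noise_power=1.0)   # ~6.0 dB
```

For speech in noise, gains of a few decibels can mean the difference between straining to follow a conversation and following it comfortably.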
Technology develops both in incremental steps and in sudden leaps, and these are only a few possibilities of where research will lead in the coming years. The main obstacle to practical application and further study is the limitation of the hardware, but hearing aid tech will likely ride the wake of other electronic devices as far as processor innovation goes. Meanwhile, breakthroughs in auditory and cognitive processing will provide new algorithms to refine functionality and improve the quality of life for the hearing impaired – and that is a wonderful thing!