‘Seeing’ hearing aid is on the way
A next-generation hearing aid which can ‘see’ is to be developed by a team of Stirling University researchers and clinicians.
Designed to help users in noisy environments, the device will use a mini camera that can lip read, process information in real time, and seamlessly switch between audio and visual cues.
There are more than 10 million people in the UK – one in six of the population – with some form of hearing loss. By 2031, this is estimated to rise to 14.5 million.
Professor Amir Hussain is leading the ambitious joint research project, which has received nearly £500,000 from the UK Government’s Engineering and Physical Sciences Research Council (EPSRC) and industry.
Funding will enable two three-year postdoctoral research fellows to work under Professor Hussain’s lead supervision.
Professor Hussain said: “This exciting world-first project has the potential to significantly improve the lives of millions.
“The next-generation audio-visual model we want to develop will intelligently track the target speaker’s face for visual cues, such as lip movements. These will further enhance the audio sounds picked up and amplified by conventional hearing aids. The 360-degree approach to our software design is expected to open up more everyday environments to device users, enabling them to communicate confidently in noisier settings, with potentially reduced listening effort.
“In addition to people with hearing loss, the unique lip reading capabilities of this device could also prove potentially valuable to those communicating in very noisy places where ear defenders are worn, such as in factories, and in emergency response scenarios.”
Professor Hussain’s team has been working on a prototype, and the research investment will be put towards tackling the key challenge of blending and enhancing appropriately selected audio and visual cues. Speed is crucial: hearing aids must process sound with a time delay of less than 10 milliseconds.
Stirling psychologist Professor Roger Watt will work with Professor Hussain and help develop new computing models of human vision for real-time tracking of facial features.
Once developed, the software prototype will be available to other researchers worldwide, opening up the opportunity for further work. Future hardware prototyping research will explore aspects of the mobile mini camera attachment, such as whether to fit it into a pair of ordinary glasses, a wearable brooch, a necklace or even an earring.
Professor Hussain is also collaborating with Dr Jon Barker, at Sheffield University, on ways of separating speech to complement the audio-visual techniques pioneered at Stirling.