About the project
This project will address the research question: how can audio-visual fusion be used to improve the performance of wearable devices for people with hearing loss?
Wearable assistive hearing devices have improved our daily lives over the past two decades. Personal sound amplifiers (such as the Apple AirPods Pro) can now increase speech intelligibility for people with hearing loss in noisy situations. Combining sound amplification with visual sensing promises breakthroughs in the next generation of wearable assistive devices.
Audio-visual information integration is a rapidly growing field with the potential to revolutionise a wide range of applications, from hearing aid design to augmented reality. We aim to research and develop audio-visual models that improve the sound enhancement capabilities of wearables by adding the ability to localise sound sources and to understand the sound environment better. We also aim to combine audio and visual input so that perception is enhanced across multiple modalities.
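As an illustration of the localisation component, a classical audio-only baseline is time-difference-of-arrival estimation with GCC-PHAT. The minimal NumPy sketch below (the function names and the 14 cm microphone spacing are our own illustrative choices, not part of the project) estimates the delay between two microphone channels and converts it to a direction of arrival under a free-field, far-field assumption:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (in seconds) between two microphone
    channels with GCC-PHAT (generalised cross-correlation, phase transform)."""
    n = sig.shape[0] + ref.shape[0]
    # Cross-power spectrum, whitened by its magnitude (the PHAT weighting)
    r = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(r / (np.abs(r) + 1e-12), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-centre the correlation so index 0 corresponds to -max_shift samples
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    # The peak of the correlation gives the inter-channel delay in samples
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def doa_degrees(tau, mic_distance=0.14, speed_of_sound=343.0):
    """Convert a delay to a direction of arrival, assuming a free-field
    two-microphone array (far-field source, no head shadow)."""
    return np.degrees(np.arcsin(np.clip(tau * speed_of_sound / mic_distance,
                                        -1.0, 1.0)))
```

With a 14 cm spacing, delays of roughly ±0.41 ms span the full ±90° range; fusing visual cues is one way to resolve the front-back and elevation ambiguities that such a two-microphone estimate leaves open.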
The project can focus on:
- investigation of fundamental psychophysical measurements to characterise the capabilities and limits of audio-visual integration
- development of new algorithms for audio-visual fusion, scene understanding, and user interaction in complex, realistic sound environments (see the fusion sketch after this list)
- implementation of real-time technology on newly designed wearable devices and testing in real-world environments
- exploration of the socio-technical issues associated with audio-visual integration, such as privacy, security, and user acceptance
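To make the fusion direction concrete, one common pattern in the audio-visual speech enhancement literature is a late-fusion mask estimator, where frame-aligned visual features condition a gain mask over the noisy spectrogram. The PyTorch sketch below is illustrative only: the class name, layer sizes, and feature dimensions are our own assumptions, not a design commitment for the project.

```python
import torch
import torch.nn as nn

class LateFusionEnhancer(nn.Module):
    """Minimal late-fusion sketch: frame-aligned visual features condition
    a mask over the noisy audio spectrogram. Layer sizes are placeholders."""

    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        self.audio_rnn = nn.GRU(n_freq, hidden, batch_first=True)
        self.visual_proj = nn.Linear(visual_dim, hidden)
        # The fused features predict a per-frequency-bin gain in [0, 1]
        self.mask_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_freq), nn.Sigmoid(),
        )

    def forward(self, noisy_spec, visual_feat):
        # noisy_spec:  (batch, time, n_freq) magnitude spectrogram
        # visual_feat: (batch, time, visual_dim) e.g. lip-region embeddings
        audio, _ = self.audio_rnn(noisy_spec)
        visual = self.visual_proj(visual_feat)
        mask = self.mask_head(torch.cat([audio, visual], dim=-1))
        return mask * noisy_spec  # enhanced spectrogram estimate
```

Late fusion keeps the modality encoders independent, which is convenient on wearables where the visual stream may be intermittent or power-gated.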