ISVR Research Seminar
- Time: 16:00 - 17:00
- Date: 19 July 2022
- Venue: Microsoft Teams
For more information regarding this seminar, please email Vanui Mardanyan at isvr@southampton.ac.uk.
Event details
Part of the ISVR seminar series
A Model Framework for Simulating Spatial Hearing of Bilateral Cochlear Implants Users by Dr Hongmei Hu
Bilateral cochlear implants (CIs) greatly improve the spatial hearing of CI users compared to those with only one implant. However, substantial gaps still exist between bilateral CI users and normal-hearing listeners in several respects, such as their lateralization and localization performance. Computer models are expected to partially reveal the possible reasons behind such gaps by dissecting the specific contribution of each stage along the whole processing chain. In order to provide a model framework for predicting bilateral CI users' lateralization or localization performance in a variety of experimental scenarios, a recently published electrically stimulated single excitation-inhibition (EI) model neuron (Hu et al. 2022, JARO) was extended here to a population of EI model neurons. The proposed model framework comprises (a) parameter initialization, (b) binaural signal generation, (c) switchable CI processing, (d) auditory nerve (AN) and EI neuron processing, and (e) decision model stages. For demonstration purposes, several bilateral CI experiments were simulated, such as the pulse-rate limitation of ITD sensitivity; left/right discrimination or lateralization; and free-field localization with and without two independent automatic gain controls (AGCs). In general, the model framework captured the average performance in all selected experiments, even with the same EI model units and a very simplified decision model.
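The sketch below is intended only to illustrate the stage ordering named in points (a) to (e) of the abstract. Every function body, parameter value, and the cross-correlation-style stand-in for the EI stage is a hypothetical placeholder, not the published Hu et al. (2022) model.

```python
# Hypothetical pipeline sketch following the abstract's stages (a)-(e); not the actual model.
import numpy as np

def init_params():
    # (a) parameter initialization: pulse rate, stimulus ITD, sampling rate (assumed values)
    return {"pulse_rate_pps": 300, "itd_us": 400, "fs": 44100}

def generate_binaural_signal(p):
    # (b) binaural signal generation: a pulse train carrying an interaural time difference
    n = p["fs"] // 10                                  # 100 ms of signal
    period = int(p["fs"] / p["pulse_rate_pps"])
    shift = int(round(p["itd_us"] * 1e-6 * p["fs"]))   # ITD in samples
    left, right = np.zeros(n), np.zeros(n)
    left[::period] = 1.0
    right[shift::period] = 1.0                         # right ear lags by the ITD
    return left, right

def ci_processing(left, right, enabled=True):
    # (c) switchable CI processing: placeholder for the electrode stimulation pattern
    if not enabled:
        return left, right
    return np.clip(left, 0.0, 1.0), np.clip(right, 0.0, 1.0)

def an_ei_stage(left, right):
    # (d) AN- and EI-neuron processing, reduced here to a cross-correlation-style
    #     response over internal delays, standing in for the population-EI output
    lags = np.arange(-50, 51)
    response = np.array([np.dot(np.roll(left, lag), right) for lag in lags])
    return lags, response

def decision_stage(lags, response, fs):
    # (e) decision model: report the internal delay with the maximal response
    best_lag = lags[int(np.argmax(response))]
    return best_lag / fs * 1e6                         # in microseconds

if __name__ == "__main__":
    p = init_params()
    left, right = generate_binaural_signal(p)
    left, right = ci_processing(left, right)
    lags, response = an_ei_stage(left, right)
    print(f"estimated ITD cue: {decision_stage(lags, response, p['fs']):.0f} us")
```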
Virtual acoustic environments in hearing research by Dr Stephan D. Ewert
Today’s audio and video technology allows us to create immersive virtual environments. While the underlying technology is driven by computer games and film production, it also has applications in hearing research, where the function of, e.g., hearing aid algorithms, or the behavior of humans in complex and realistic situations can be investigated. Thus, classical psychophysical experiments can be transformed into, and compared with, life-like lab-based experiences, offering a high degree of ecological validity. For this, convincing virtual acoustic environments have to be created, requiring methods that are computationally efficient as well as perceptually and technically evaluated. The fast and perceptually plausible room acoustics simulator [RAZR, see Wendt et al., JAES, 62, 11 (2014)] meets this demand through drastic simplifications with respect to physical accuracy while still achieving perceptual plausibility. Recent advancements consider effects of edge diffraction and scattering. An overview of the underlying methods and applications of virtual acoustic environments in connection with psychoacoustic measurements and computational auditory models is provided.
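As a purely illustrative aside, and not RAZR's actual implementation, the sketch below shows the kind of geometrically simplified computation such room simulators build on: first-order image sources in an axis-aligned shoebox room, yielding delays and gains for the direct sound and six early reflections. The room dimensions, source and receiver positions, and the absorption value are assumed.

```python
# Minimal first-order image-source sketch for a shoebox room (illustrative only).
import numpy as np

C = 343.0  # speed of sound in m/s

def first_order_image_sources(room, src):
    # Mirror the source across each of the six walls of an axis-aligned shoebox room.
    lx, ly, lz = room
    x, y, z = src
    return [
        (-x, y, z), (2 * lx - x, y, z),
        (x, -y, z), (x, 2 * ly - y, z),
        (x, y, -z), (x, y, 2 * lz - z),
    ]

def early_reflections(room, src, rcv, absorption=0.3):
    # Return (delay in s, linear gain) for the direct sound and six first-order reflections.
    rcv = np.asarray(rcv, dtype=float)
    paths = [np.asarray(src, dtype=float)] + [np.asarray(s, dtype=float)
                                              for s in first_order_image_sources(room, src)]
    taps = []
    for i, p in enumerate(paths):
        d = np.linalg.norm(p - rcv)
        gain = 1.0 / max(d, 1e-3)              # 1/r spreading loss
        if i > 0:
            gain *= np.sqrt(1.0 - absorption)  # attenuation from one wall bounce
        taps.append((d / C, gain))
    return taps

if __name__ == "__main__":
    for delay, gain in early_reflections(room=(5.0, 4.0, 3.0),
                                         src=(1.0, 1.0, 1.5),
                                         rcv=(4.0, 3.0, 1.5)):
        print(f"delay {delay * 1e3:6.2f} ms, gain {gain:.3f}")
```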
Speaker Information
Speaker 1: Dr Hongmei Hu of Carl von Ossietzky Universität Oldenburg, Germany
Speaker 2: Dr Stephan D. Ewert of Medizinische Physik at the Universität Oldenburg, Germany
Dr Hongmei Hu
After receiving my PhD in Information and Signal Processing from Southeast University (China), I moved to the United Kingdom and worked as a Marie Curie senior researcher at the University of Southampton in 2010. I was then promoted from lecturer to tenured associate professor at Jiangsu University (China) in 2012, where I had received both my Bachelor's and Master's degrees. Out of personal interest, I moved to Carl von Ossietzky Universität Oldenburg (Germany) in 2013 to continue my research in the field of binaural hearing, with a particular focus on developing new techniques for bilateral cochlear implants (CIs). My current main research threads are: 1) fitting strategies for CI users based on subjective (psychoacoustic) and objective (electroencephalography, EEG) measurements; 2) advanced signal processing and electrical modelling for CIs; 3) speech in noise for tonal and non-tonal languages using our newly developed Mandarin and the existing Matrix sentence tests.
Dr Stephan D. Ewert
Stephan D. Ewert studied physics and received his Ph.D. degree in 2002 from the Universität Oldenburg, Oldenburg, Germany. During his Ph.D. project he spent three months as a visiting scientist at the Research Laboratory of Electronics at the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA. From 2003 to 2005 he was an Assistant Professor at the Centre for Applied Hearing Research at the Technical University of Denmark (DTU), Lyngby, Denmark. He re-joined Medizinische Physik at the Universität Oldenburg in 2005 and has headed the Psychoacoustic and Auditory Modeling Group there since 2008. Dr. Ewert's interest in hearing and audio engineering began with developing loudspeakers during his early undergraduate years. His field of expertise is psychoacoustics and acoustics, with a strong emphasis on perceptual models of hearing and virtual acoustics. Dr. Ewert has published various papers on spectro-temporal processing, binaural hearing, and speech intelligibility. More recently, he has also focused on the perceptual consequences of hearing loss, hearing-aid algorithms, instrumental audio quality prediction, and room acoustics simulation.