The University of Southampton
Engineering

Research project: Processing and perception of low frequency speech cues for hearing impairment - Dormant

Currently Active:
No
Project type:
Archived

The aim of the project is to determine how the coding of low frequency speech cues, particularly fundamental frequency (F0), first formant (F1) and nasality information, can be optimised for hearing-impaired listeners, especially those with a cochlear implant alone or a cochlear implant combined with a hearing aid for amplification of low frequencies.

Interest in the perception and processing of low frequency speech cues has been increased by the advent of "electro-acoustic hearing" (EAS), whereby an acoustic hearing aid, which amplifies low frequencies within the residual hearing range, is combined with a cochlear implant for stimulation of higher frequency regions of the hearing system. There is also some recent evidence from the speech perception literature that low frequencies may be more important for the overall success of speech perception than earlier models of speech intelligibility assumed. The broad aim of the project is to evaluate the relative importance of different low frequency speech cues, and to determine the best way to convey or process such cues for those with hearing impairment (particularly EAS users) to optimise speech perception abilities. We are currently undertaking work with simulations of EAS devices to compare the relative benefits of F0 and F1 in determining the possible benefit of adding low frequency acoustic information to a cochlear implant-processed signal.
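EAS simulations of this kind are often built, in normal-hearing listeners, by combining a low-pass filtered (acoustic) portion of the speech with a noise-vocoded (implant-simulating) portion of the higher frequencies. The sketch below is a minimal illustration of that general approach using simple FFT-based filtering; the cutoff frequency, band edges, and envelope smoothing are illustrative assumptions, not the project's actual processing parameters.

```python
import numpy as np

def lowpass_fft(x, fs, cutoff):
    """Crude FFT-based low-pass filter (illustrative only)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[freqs > cutoff] = 0.0
    return np.fft.irfft(X, len(x))

def noise_vocode(x, fs, edges):
    """Replace each analysis band with envelope-modulated noise,
    a common simulation of cochlear-implant processing."""
    rng = np.random.default_rng(0)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X = np.fft.rfft(x)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = X.copy()
        band[(freqs < lo) | (freqs >= hi)] = 0.0
        band_sig = np.fft.irfft(band, len(x))
        env = lowpass_fft(np.abs(band_sig), fs, 50.0)  # smoothed band envelope
        noise = rng.standard_normal(len(x))
        N = np.fft.rfft(noise)
        N[(freqs < lo) | (freqs >= hi)] = 0.0          # band-limited noise carrier
        out += env * np.fft.irfft(N, len(x))
    return out

def simulate_eas(x, fs, acoustic_cutoff=500.0,
                 ci_edges=(500.0, 1000.0, 2000.0, 4000.0, 8000.0)):
    """Acoustic low-frequency portion plus vocoded high-frequency portion."""
    return lowpass_fft(x, fs, acoustic_cutoff) + noise_vocode(x, fs, ci_edges)
```

In a listening experiment, the acoustic cutoff and the number of vocoder channels would be varied to probe which low frequency cues carry the benefit.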

Current CI processing systems convey voice pitch (fundamental frequency, or F0) information only very weakly. Because F0 perception is important for a number of perceptual tasks, including speech perception in a background of competing speakers, benefit can be obtained by providing F0 information through an acoustic hearing aid for CI users with residual low-frequency hearing in either the implanted or non-implanted ear. Moreover, a number of authors have attempted to code F0 information through the CI itself, albeit with limited success. The aim of the present project is to determine how F0 coding can be optimised, either through presentation via an acoustic hearing aid or directly via the CI itself. This broad question leads to a number of more specific ones: how important is mean speaker F0 compared with F0 variations over time; how important are F0 variations over time in signalling acoustic landmarks in speech; how closely time-aligned does the F0 signal need to be with the CI-processed signal; and what is the optimal signal processing method for deriving F0 information (a number of methods, such as autocorrelation, exist), given that a "simplified" F0 signal may be needed for presentation through either the CI or the hearing aid. Current work is focusing on simulations in normal-hearing listeners to evaluate the importance of preserving F0 time characteristics. Work is also underway in the closely related area of other low-frequency speech cues, in particular cues to nasality perception, since these cues may also affect speech perception overall but may be only weakly coded by CI systems.
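Autocorrelation, mentioned above as one candidate method for deriving F0, works by finding the lag at which a voiced frame best correlates with itself; the reciprocal of that lag (in seconds) is the F0 estimate. The sketch below is a minimal, assumption-laden illustration of the basic idea (the search range of 60-400 Hz and the single-peak picking are simplifications; practical pitch trackers add voicing detection and smoothing across frames).

```python
import numpy as np

def estimate_f0_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 of a voiced frame by locating the autocorrelation
    peak within the plausible pitch-period range [fs/fmax, fs/fmin]."""
    frame = frame - np.mean(frame)
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / fmax)
    lag_max = int(fs / fmin)
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / best_lag

# Synthetic check: a 150 Hz pulse-like periodic signal.
fs = 16000
t = np.arange(0, 0.04, 1.0 / fs)          # 40 ms frame, six full periods
signal = np.sign(np.sin(2 * np.pi * 150.0 * t))
```

A "simplified" F0 signal of the kind discussed above could then be resynthesised from such frame-by-frame estimates, e.g. as a sinusoid or pulse train at the estimated F0.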
