
The ear segregates sound by frequency, but (how) does the brain recombine it? (Seminar)

Time:
16:00
Date:
17 March 2018
Venue:
Building 13/3017

Event details

ISVR Engineering Research Seminar Series 2017-2018

Abstract

The cochlea is tonotopically arranged, segregating sound information by frequency. A number of auditory phenomena rely on the idea that information is processed in (relatively) independent frequency channels (e.g. the power spectrum model), though other phenomena demonstrate violations of this idea (e.g. comodulation masking release, spatial processing). While it is well known that A) we can integrate sound across frequency to form a single percept and B) sounds across a broad range of frequencies can interact, we know very little about how this might occur in the brain.

This talk focuses on across-frequency processing of sound in (human) behaviour and the (animal) brain, demonstrating that the auditory cortex may play a critical role in combining frequency information. I will present data showing that: A) our inability to compare spatial cues (interaural level differences, ILDs) across broad frequency ranges can be explained by transformations in binaural processing between the midbrain and auditory cortex; this has implications for ILD processing in people with cochlear implants. B) Prolonged exposure to a given noise source (for example, the buzzing of a fridge) allows us to better ignore it and improves our ability to detect other sounds. Data will be presented demonstrating that we can rapidly adapt to the across-frequency statistics of noise and use this information to improve the detection of other sounds. Optogenetic experiments, in which brain regions are “switched off” during sound processing, suggest that the auditory cortex may be critical to this rapid acquisition of across-frequency sound statistics.

Speaker information

Joe Sollini, UCL. Dr Joseph Sollini is an auditory psychophysicist and physiologist working at the UCL Ear Institute. Joseph graduated from the University of Sheffield with a BSc in Psychology and an MSc in Computational and Systems Neuroscience (received in 2006 and 2007, respectively). He then completed his PhD at the Institute of Hearing Research with Dr Chris Sumner, receiving a doctorate in Biomedical Science for work investigating frequency tuning and spatial processing in animals (ferrets and guinea pigs). Subsequently, Joseph held a post-doc position at Imperial College London with Dr Paul Chadderton (2012-2016) and is currently a post-doc at the UCL Ear Institute with Dr Jenny Bizley (2016-present), working on the processing of spectro-temporally complex sounds in the auditory cortex.
