
Bayesian Learning for Robot Audition Seminar

Time: 16:00 - 17:00
Date: 3 October 2017
Venue: Building 13, Room 3017

Event details

ISVR Engineering Research Seminar Series 2017-2018

Recent advances in robotics and autonomous systems are rapidly leading to the evolution of machines that assist humans across the industrial, healthcare, and social sectors. For intuitive interaction between humans and machines, spoken language is a fundamental prerequisite. However, in realistic environments, speech signals are typically distorted by reverberation, noise, and interference from competing sound sources. Acoustic signal processing is therefore necessary to provide machines with the ability to learn, adapt, and react to stimuli in the acoustic environment. The processed, anechoic speech signals are naturally time-varying due to fluctuations of air flow in the vocal tract. Furthermore, motion of a human talker's head and body leads to spatio-temporal variations in the source position and orientation, and hence to time-varying source-sensor geometries. Therefore, in order to listen in realistic, dynamic multi-talker environments, robots need to be equipped with signal processing algorithms that recognize and constructively exploit the spatial, spectral, and temporal variations in the recorded signals. Bayesian inference provides a principled framework for incorporating temporal models that capture prior knowledge of physical quantities, such as the acoustic channel. This talk therefore explores the theory and application of Bayesian learning for robot audition, addressing novel advances in acoustic Simultaneous Localization and Mapping (aSLAM), and sound source localization and tracking.
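As an illustration only, and not part of the seminar material, the following minimal Python sketch shows one textbook form of the Bayesian temporal filtering idea mentioned above: a linear-Gaussian Kalman filter tracking a single talker's azimuth from noisy direction-of-arrival (DOA) estimates. The motion model, noise covariances, and simulated measurements are all illustrative assumptions.

# Minimal sketch (illustrative, not from the talk): Bayesian tracking of a moving
# talker's azimuth from noisy DOA estimates using a Kalman filter.
# All model parameters below are assumptions chosen for demonstration.
import numpy as np

np.random.seed(0)

dt = 0.1                      # frame interval in seconds (assumed)
F = np.array([[1.0, dt],      # constant-velocity model for [azimuth, azimuth rate]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])    # only the azimuth is observed
Q = np.diag([1e-4, 1e-3])     # process noise covariance (assumed)
R = np.array([[0.05]])        # measurement noise covariance (assumed)

# Simulate a talker whose azimuth drifts slowly, plus noisy DOA measurements.
true_az = 0.5 + 0.02 * np.arange(100) * dt
z = true_az + np.sqrt(R[0, 0]) * np.random.randn(100)

x = np.array([[0.0], [0.0]])  # state estimate [azimuth; azimuth rate]
P = np.eye(2)                 # state covariance

estimates = []
for zk in z:
    # Predict: propagate the prior through the assumed motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the new DOA measurement (Bayes' rule for linear-Gaussian models).
    y = np.array([[zk]]) - H @ x          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0, 0])

print("final azimuth estimate (rad):", estimates[-1])
print("true azimuth (rad):          ", true_az[-1])

The seminar addresses considerably richer settings than this sketch, including nonlinear, multi-source, and moving-microphone scenarios handled within acoustic SLAM.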

Speaker information

Christine Evers, Imperial College London. Christine Evers is an EPSRC Fellow at Imperial College London. She received her PhD from the University of Edinburgh, UK, in 2010, after completing her MSc degree in Signal Processing and Communications at the University of Edinburgh in 2006 and her BSc degree in Electrical Engineering and Computer Science at Jacobs University Bremen, Germany, in 2005. After a position as a research fellow at the University of Edinburgh between 2009 and 2010, she worked until 2014 as a senior systems engineer on radar tracking systems at Selex ES, Edinburgh, UK. She returned to academia in 2014 as a research associate in the Department of Electrical and Electronic Engineering at Imperial College London, focusing on acoustic scene mapping for robot audition. In 2017, she was awarded a fellowship by the UK Engineering and Physical Sciences Research Council (EPSRC) to advance her research on acoustic signal processing and scene mapping for socially assistive robots. Her research focuses on Bayesian inference for speech and audio applications in dynamic environments, including acoustic simultaneous localization and mapping, sound source localization and tracking, blind speech dereverberation, and sensor fusion. She is an IEEE Senior Member and a member of the IEEE Signal Processing Society Technical Committee on Audio and Acoustic Signal Processing.
