Robot Audition Seminar
- Time:
- 13:00 - 13:40
- Date:
- 13 January 2021
- Venue:
Via Microsoft Teams
Event details
HABC Seminar
Abstract
Audio signals encapsulate a wealth of semantic information, including cues about nearby sound events and the surrounding environment. As such, sound is used in nature to communicate, to detect salient events, to navigate, and to self-localise. Audition – the ability to make sense of sounds – is therefore a fundamental prerequisite for robots. However, in practice, robots are deployed in complex acoustic scenes that are subject to uncertainties arising from the ego-motion of the robot, as well as to ambiguities due to multiple active sound sources. This talk will focus on the challenges and current achievements in the area of robot audition. Following an outline of the challenges affecting dynamic acoustic scenes, the talk will highlight recent advances at the intersection of acoustic signal processing and machine learning, equipping robots with the ability to make sense of life in sound.
Speaker information
Christine Evers, Lecturer in Computer Science. Her research focuses on Bayesian learning for machine listening, at the intersection of robotics, machine learning, statistical signal processing, and acoustics.