The University of Southampton
Music, part of Humanities
(023) 8059 3188

Dr Richard Polfreman MA (Cantab), MSc, PhD, FHEA

Associate Professor in Music


My research explores the interface between musicians and technology, looking for novel ways in which to enhance the experience for performers, composers and musicologists.

I'm an Associate Professor in Music at Southampton, and music technology is at the heart of my teaching, research and professional activities.

A £20,000 synthesizer from the 1980s is now a £40 app on my phone! Technology keeps redefining what is possible in music, and finding new ways to deploy it is incredibly exciting.

Music technology has been a passionate interest of mine for over 30 years. I teach a number of technology-based modules on the BA Music and supervise undergraduate and postgraduate projects in this area, as well as managing the recording studios. Previously I have been MMus Coordinator and Director of Programmes for Music.

I usually teach: Introduction to Music Technology, a second-year module that explores historical, technical and practical aspects of music technology; Music and Sound Production 1, a second-year module that applies practical and technical ideas to studio recording and production; Music and Sound Production 2, a final-year project-based module allowing students to pursue further studio and production techniques.

After studying Natural Sciences at the University of Cambridge and Digital Music Technology at Keele, my doctorate at the University of Hertfordshire focused on the needs of composers in the design of computer software for sound synthesis and manipulation. After a number of years as a research fellow, I joined the Music Department at Southampton in 2004.

I have collaborated with IRCAM (Paris) and INA-GRM (Paris) on a number of projects involving the design of innovative music software, and have taken part in various research-council-funded projects exploring applications of computing in music, such as Re:Wired (AHRC), musicSpace (AHRC-EPSRC-JISC), morefrommusic (AHRC), Hands On Sound (AHRC) and Capturing the Contemporary Conductor (British Academy/Leverhulme Trust). I have worked on a wide range of industry projects involving live performance and installations, many of which were in collaboration with Sound Intermedia (London), including world premieres of works by Benedict Mason, Jonathan Harvey, Harrison Birtwistle and Simon Bainbridge, as well as a number of re-workings of compositions by Luigi Nono. My current research focuses on computational analysis of movement for musical control.


MA, Natural Sciences, University of Cambridge, 1990

MSc, Digital Music Technology, Keele University, 1992

PhD, University of Hertfordshire, 1997

Research interests

My main research interests focus on music related Human-Computer Interaction and the ways in which musicians interact with computers in order to achieve creative goals. I have been involved in a number of projects in this and related areas, including:

Capturing the Contemporary Conductor

This British Academy/Leverhulme Trust-funded interdisciplinary project sets out to use high-quality motion capture, video and audio recording to provide an online resource both for those studying conducting gestures and for those developing automatic conductor-following systems.

Hands-On Sound (ongoing)

This AHRC-funded Collaborative Doctoral Award is in partnership with the London Sinfonietta, Sound Intermedia and Cheryl Metcalf in Health Sciences. The award is funding a PhD student to explore real-time musical applications of a variety of hand- and arm-scale motion-tracking systems.

Synthesizer Parameter Mapping for Sound Design (ongoing)

This project involves PhD student Darrell Gibson, who is investigating interpolation systems for controlling sound synthesis in sound design applications.

Multi-Modal Instrument (ongoing)

In this project I am building a test-bed for exploring non-standard controllers in the context of real-time physical modelling synthesis. Physical modelling uses mathematical acoustic models to simulate the physical interactions between vibrating objects (such as strings, tubes, membranes) and drivers (such as plectrums, bows, reeds) in order to create sounds close to those of physical instruments.

SEMANTICS (2008–12)

In this project, which was co-supervised at the University of Hertfordshire with Aladdin Ariyaeeinia, PhD student Stratis Sofianos worked on the automatic extraction of singing voice from stereo recordings using Independent Component Analysis and other techniques. The aim was that this stage could then be used to pre-process audio for further analysis such as lyrics recognition.
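As a rough illustration of the ICA stage only (not the project's actual pipeline), a minimal symmetric FastICA in NumPy can unmix a two-channel signal into statistically independent components. All parameters here (iteration count, tanh nonlinearity) are illustrative defaults.

```python
import numpy as np

def fastica_2ch(X, n_iter=200):
    """Symmetric FastICA for a 2-channel mixture X of shape (2, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate the channels and scale them to unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(0).standard_normal((2, 2))
    for _ in range(n_iter):
        WX = W @ Xw
        g, g_prime = np.tanh(WX), 1 - np.tanh(WX) ** 2
        # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w
        W = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Xw
```

Recovery is only up to permutation, sign and scale, which is why a real system needs a further stage to decide which separated component is the voice.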

More from Music (2013)

This AHRC-funded work followed on from the Compositions for Cochlear Implantees project, and involved the further development of the IMAP (Interactive Music Awareness Programme) – a rehabilitation software package to help cochlear implant users regain appreciation of music. The project was a collaboration between ISVR, the University of Southampton’s Music Department and the Auditory Implant Service, as well as members of the UK National Cochlear Implant Users Association, led by Rachel Van Besouw (ISVR) and Benjamin Oliver (University of Southampton).

musicSpace (2007–10)

This AHRC-EPSRC-JISC-funded project involved using Web 2.0 and Semantic Web technologies to develop tools to integrate and enhance musicological metadata. This was a collaboration with m.c. schraefel in Electronics and Computer Science, and involved partner organisations including the British Library, Cecilia, Copac, Grove Music Online, Naxos and RISM (UK & IRL).

FrameWorks 3D (2004–09)

This project proposed a new 3D user interface for sequencing audio data based on interconnected regions with dynamically updated transformations, providing a fluid environment for exploring compositional ideas. I hope to develop this idea further in the future.

Modalys-ER/MfOM (1997–2003)

In collaboration with IRCAM (Paris), this project developed a graphical environment for the design of physical model instruments using IRCAM’s Modalys synthesis engine. At the time the synthesis was far from real-time, and the system was designed to support the generation of relatively short musical gestures with the instruments.

Sound Spotter (1999–2002)

In collaboration with INA-GRM (Paris), this project involved PhD student Christian Spevak working on the detection of perceptually similar sounds in audio documents (sound spotting). The idea was to select a target event and search for similar occurrences in the document using an auditory model, a self-organizing neural network and pattern matching. The aim was to aid transcription of non-notated music and audio retrieval from archives.
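A drastically simplified stand-in for this idea (no auditory model, self-organizing map or pattern matching; frame and hop sizes are arbitrary) is a sliding spectral-similarity search: compute the target event's average spectral envelope, then rank each frame of the recording by cosine similarity to it.

```python
import numpy as np

def spot_sound(signal, target, frame=1024, hop=512):
    """Naive sound spotting: slide the target's average spectral
    envelope over the recording and score frames by cosine similarity."""
    def spectra(x):
        n = 1 + (len(x) - frame) // hop
        frames = np.stack([x[i * hop : i * hop + frame] for i in range(n)])
        return np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    S = spectra(signal)                  # one spectrum per analysis frame
    T = spectra(target).mean(axis=0)     # target's average spectral envelope
    sims = (S @ T) / (np.linalg.norm(S, axis=1) * np.linalg.norm(T) + 1e-12)
    return sims                          # one similarity score per frame
```

The frame index with the highest score locates the best match; peaks above a threshold would mark candidate occurrences of the target event.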

Interpolator (1999–2001)

In collaboration with INA-GRM (Paris), this project involved research student Martin Spain developing a graphical interpolation system for controlling GRM-Tools plugins using a light-based model. The aim of the system was to allow rapid exploration of a sound space via the simultaneous and intuitive interpolation of a number of DSP parameters.
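The underlying idea, steering many DSP parameters at once from a single 2-D cursor position, can be sketched with inverse-distance weighting. This is an illustrative substitute for the project's light-based model, and the preset layout below is invented for the example.

```python
import numpy as np

def interpolate_presets(cursor, positions, presets, power=2.0):
    """Inverse-distance weighting over preset points in a 2-D plane:
    nearer presets contribute more, and the output snaps exactly to
    a preset when the cursor lands on one.

    cursor: (2,) position; positions: (k, 2); presets: (k, p) parameters."""
    d = np.linalg.norm(positions - cursor, axis=1)
    if np.any(d < 1e-9):                 # exactly on a preset point
        return presets[np.argmin(d)].copy()
    w = 1.0 / d ** power
    w /= w.sum()
    return w @ presets                   # convex blend of all preset vectors
```

Because the result is a convex combination, every interpolated parameter stays within the range spanned by the presets, so exploration never produces out-of-bounds synthesis settings.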


Affiliate research groups

Composition and Music Technology, Music Performance Research

Research project(s)

musicSpace - Dormant

This AHRC-EPSRC-JISC-funded project was an interdisciplinary collaboration between Music and Electronics and Computer Science that employed Semantic Web technologies to develop tools to integrate and enhance musicological metadata.




Creative Media and Artefacts

  • Gordon, M. (Composer), Chapman, J. (Performer), & Polfreman, R. (Other). (Accepted/In press). Eclipsis. Performance


2013. Beano Town, Southbank Centre. Max/MSP programming and sensors setup for interactive installations (Sound Intermedia).

2011. Virtual Choir. Max/MSP programming for audio-video capture system for a virtual choir project (Sound Intermedia/MJM).

2010. Varèse 360°. London Sinfonietta: Queen Elizabeth Hall. Sound diffusion assistance (Sound Intermedia).

2009. Sound Rebound, Aldeburgh Festival. David Sheppard, Larry Goves and the House of Bedlam. Max/MSP programming and technical advice.

2008. Music Space Reflection, Simon Bainbridge. Max/MSP programming (Sound Intermedia).

2006–7. Vast Ocean, Dai Fujikura. Re-development of live electronics with Max/MSP (Sound Intermedia).

2006. Britten-Pears School. Course tutor for ‘New Music, New Media’ (Sound Intermedia).

2006. Wing on Wing. BBC Symphony Orchestra: Barbican, London. Computer and sound assistance (Sound Intermedia).

2005. Luigi Nono (a retrospective). London Sinfonietta, BBC singers: Queen Elizabeth Hall, RFH. Max programming and live sound diffusion (Sound Intermedia).

2005. Neruda Madrigales, Harrison Birtwistle (world premiere). London Sinfonietta, BBC singers: Snape Maltings, Aldeburgh Festival. Max/MSP programming (Sound Intermedia).

2004–5. Developed interactive data-driven multimedia CD-ROMs detailing university departments and their course details.

2004. IT specification and software development for interactive systems in ‘Berio Lounge’ foyer installation at RFH (March–April 2004) (Sound Intermedia).

2004. Britten Sinfonia/Hilliard Ensemble tour. Cambridge Corn Exchange, Chelsea Royal Hospital and Salisbury Cathedral. Sound technician and diffusionist.

2004. Book of Hours, Julian Anderson (premiered January 2005 by the Birmingham Contemporary Music Group). Musical Assistant.

2004. Britten-Pears School. Course tutor for ‘New Music, New Media’ (Sound Intermedia).

2003. Developed a custom environment using Max/MSP with MIDI triggered playlist using pedal control and user-definable play-lists with count-ins, amongst other features (Sound Intermedia).

2002. Omaggio a György Kurtág, Luigi Nono. London Sinfonietta: Queen Elizabeth Hall, RFH. Max consultancy and programming of sound diffusion tools (Sound Intermedia).

2001. Bird Concerto with Pianosong, Jonathan Harvey (world premiere). Joanna MacGregor, Sinfonia 21 and Sound Intermedia: Cheltenham Festival. Max/MSP consultancy/programming, including a joystick controlled sound diffusion system.

1999. Max consultancy and support for Syzergy project involving kite-mounted environmental sensors that transmit control information via mobile phone internet connections to music and sculpture based installations.

1998–9. CD mastering and production for London Sinfonietta educational projects involving HMP Bullingdon.

1995–8. Sound diffusion assistance at concerts for the RCM and the London Sinfonietta (Sound Intermedia).

1995. Clarinet Concerto, Benedict Mason (world premiere). Sound Intermedia and the London Sinfonietta: Royal Albert Hall (Proms). Provision of computer equipment and technical support for MIDI controlled multiple click-track system.

1995. Quando stanno morendo, Diario polacco no. 2, Luigi Nono (first UK performance). London Sinfonietta: Queen Elizabeth Hall, RFH. Design and programming of automated sound diffusion tools (in collaboration with Sound Intermedia).

1994. Anglian Water's summer shows exhibit. Design of a control set-up synchronising sequenced music to automated fountains, using a computer-based MIDI system.

Dr Richard Polfreman
Music Department
Building 2
University of Southampton
SO17 1BJ

Room Number: 2/2025
