The Centre for Vision and Cognition seeks to conduct experimental research that elucidates the psychological processes underlying human vision and cognitive function, including memory and learning. Its members use experimental and computational modelling methods with a diverse range of research tools, including fMRI, haptic feedback control systems and both laboratory-based and perambulatory eye-tracking.
The Laboratory is comprehensively networked: funded links have been created with other 5* Schools across the University (including the School of Electronics and Computer Science, the Institute for Sound and Vibration Research, and Biological Sciences). Close national and international collaborative links exist with key research centres including MPI Tübingen, Tianjin (China), UCSD, UMass, Utrecht, and Oklahoma. Research sponsors include BBSRC, the Department of Homeland Security, the Department of Transport, EPSRC, ESRC, Leverhulme, Microsoft, QinetiQ, the Epilepsy Research Foundation, the British Academy and the Royal Society.
The Visual Cognition Laboratory's research themes range from high-level cognition (for example, perceptual expertise, reading, and the strategic regulation of accuracy in memory) to low-level processes in vision and learning (for example, inter-sensory coordination, attentional dynamics, associative learning, and object perception).
View our video library.
The lab is equipped with two Dual Purkinje Image eye trackers, set up to allow simultaneous recording of both eyes. The trackers sample every millisecond (1000 Hz), with spatial accuracy of 1 min of arc. They are interfaced with Cambridge Research Systems shutter goggles, which can be used to present a different image to each eye.
We are currently undertaking research in this lab with adult, child, and dyslexic samples, with both reading and non-reading tasks.
The EyeLink2000 lab has both the monocular, tower-mounted camera and the desk-mounted binocular camera available, interfaced with the software provided by SR-Research (Experiment Builder and DataViewer). The system samples at up to 2000 Hz, with spatial accuracy of 0.25°.
Currently, research is being carried out with adult samples on reading and non-reading tasks. The reading research examines both English and Chinese readers.
We have two perambulatory eye trackers designed to allow participants to move around in the real world while their eye movements are recorded. Each tracker has two cameras, capturing both the participant's eye movements and the scene in their field of view. The cameras are mounted on the frame of a pair of glasses, with the lightweight recording equipment and power supplies carried in a rucksack on the participant's back.
We are planning research with these eye trackers to examine social interactions and scene perception.
The PHANTOM force-feedback device enables users to touch and manipulate virtual objects. In combination with stereo shutter goggles, this equipment is used to investigate how we combine visual and touch information.
We have three mirror stereoscopes available. Stereoscopes allow the presentation of different images to each eye, and are used for research into binocular rivalry, stereopsis, and contrast perception.
A BBC news report about some of CVC's work is also available.
| Staff Member | Primary Position |
| --- | --- |
| Wendy Adams | Associate Professor |
| Craig Allison | Postgraduate research student |
| Valerie Benson | Senior Lecturer |
| Sana Bouamama | Research Fellow |
| Michael George Cutter | Postgraduate research student |
| Federica Degno | Postgraduate research student |
| Jonny Dickins | Postgraduate research student |
| Nicholas Donnelly | Professor of Cognitive Psychology |
| Denis Drieghe | Associate Professor |
| Gemma Fitzsimmons | Postgraduate research student |
| Steven Glautier | Associate Professor |
| Hayward Godwin | Senior Teaching Fellow |
| Erich Graf | Associate Professor |
| Piril Hepsomali | Postgraduate research student |
| Philip Higham | Reader in Cognitive Psychology |
| Anne P Hillstrom | Enterprise coordinator for the Centre for Vision and Cognition |
| Julie Kirkby | Visiting Research Fellow |
| Louise-Ann Leyland | Visiting postdoctoral researcher |
| Simon P Liversedge | Director of Research |
| Carl Matthew Mann | Postgraduate research student |
| Athina Manoli | Postgraduate research student |
| Katie Meadmore | Research Fellow |
| Natalie Mestry | Research Fellow |
| Alex Muhl-Richardson | Postgraduate research student |
| Alexander Muryy | Research Fellow |
| Mirela Nikolova | Postgraduate research student |
| Stuart M. Pugh | Technical Officer (Research) |
| Edward Redhead | Associate Professor |
| Erik D Reichle | Head of Academic Unit |
| Shui-I Shih | Principal Teaching Fellow |
| Sarah Stevenage | Professor in Psychology |
| Oliver Tew | Postgraduate research student |
| Project | Status |
| --- | --- |
| Eye movements in special populations | Active |
| Elemental and configural processes | Active |
| Visual search and the prevalence effect | Active |
| Visual search for multiple targets | Active |
| Visual adaptation and re-calibration | Active |
| Shape and depth perception | Active |
| Context and learning | Active |
| Resolving visual ambiguity | Active |
| Detecting threats in real-world environments | Active |
| Emotion discrimination in anxiety and prosopagnosia | Active |
| Developing computational models of accuracy regulation | Active |
| Eye movements during reading | Active |
| Emotional face processing without awareness | Active |
| Investigation into spatial learning | Active |
| Colour coding in digital displays | Active |