
Professor Andrew Brown BSc, PhD, FIEE, FMIEEE, FBCS, CEng, Eur Ing

Chair in Electronics

Professor Andrew Brown is part of the Institute for Life Sciences at the University of Southampton.

Andrew Brown currently holds an established chair in Electronics at the University of Southampton. He received a BSc in Physical Electronics from Southampton in 1976 and a PhD in Microelectronics in 1981. He held brief posts as research fellow and computer manager in the Electronics Department at Southampton before being appointed a lecturer at the end of 1980. He was promoted to Senior Lecturer in 1989, to Reader in Electronics in 1992, and to one of the established chairs in 1999.

During his time as an academic, he has undertaken numerous secondments and sabbaticals in industry. In 1983 he was appointed a Visiting Scientist in the Machine Technology group at IBM Hursley, UK, working on electronic place-and-route systems for uncommitted logic arrays. In 1985, along with three other academics, he founded Horus System Ltd, an EDA startup (backed by Cirrus Computers) to exploit simulation technology developed at the University. In 1988, he worked at Siemens Neuperlach (Munich, Germany) on a micro-router for their in-house VENUS EDA suite. In 1995 he was awarded a Senior Academic in Industry secondment to work at a small communications company, MAC, developing a tool to support decisions on the placement of mobile phone base stations. In 2001, he co-founded LME Design Automation, a venture-capital-backed spinout to exploit an EDA synthesis suite that had been the prime focus of his University research at that time. One consequence of this startup was that he was awarded a Royal Society Industrial Fellowship to continue his work there until 2003. In 2004, he was appointed a Visiting Professor at the University of Trondheim, Norway, and spent time there integrating the simulation and synthesis work of the previous two startup companies. In 2008 he was appointed a Visiting Professor at the Computing Laboratory, University of Cambridge, UK.

He was head of the Design Automation Research Group at Southampton from 1993 to 2007, when he became involved in the Manchester SpiNNaker system. He obtained EPSRC support to work full time on the project, relinquishing all teaching, supervision and management responsibilities.

Research interests

SpiNNaker - a précis:

The human brain remains one of the great frontiers of science – how does this organ, upon which we all depend so critically, actually do its job? A great deal is known about the underlying technology – the neuron – and we can observe in vivo brain activity on a number of scales through techniques such as magnetic resonance imaging, neural staining and invasive probing, but this knowledge - a tiny fraction of the information that is actually there - barely starts to tell us how the brain works, from a perspective that we can understand and manipulate. Something is happening at the intermediate levels of processing that we have yet to begin to understand, and the essence of the brain's information processing function probably lies in these intermediate levels. One way to get at these middle layers is to build models of very large systems of spiking neurons, with structures inspired by the increasingly detailed findings of neuroscience, in order to investigate the emergent behaviours, adaptability and fault-tolerance of those systems.
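
To make "models of very large systems of spiking neurons" concrete, the sketch below shows the kind of unit such a simulation steps repeatedly: a leaky integrate-and-fire (LIF) neuron advanced in 1 ms increments. It is a minimal illustration in Python; the function name and all parameter values are assumptions for exposition, not those used in SpiNNaker.

    def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                     v_reset=-65.0, tau_m=20.0, dt=1.0):
        """Return the timesteps (ms) at which the neuron spikes."""
        v = v_rest
        spikes = []
        for t, i_in in enumerate(input_current):
            # Euler step: leak towards the resting potential, plus input.
            v += (dt / tau_m) * ((v_rest - v) + i_in)
            if v >= v_thresh:        # threshold crossed: emit a spike
                spikes.append(t)
                v = v_reset          # and reset the membrane potential
        return spikes

    print(simulate_lif([20.0] * 100))   # constant drive -> regular spiking

A billion such units, stepped in lock-step and exchanging spike events, is the scale of workload at issue.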

What has changed, and why could we not do this ten years ago? Multi-core processors are now established as the way forward on the desktop, and highly parallel systems have long been the norm in high-performance computing. In a surprisingly short space of time, industry has abandoned the exploitation of Moore's Law through ever more complex uniprocessors, and is embracing a 'new' Moore's Law: the number of processor cores on a chip will double roughly every 18 months. Projected over the next 25 years, this leads inevitably to the landmark of a million-core processor system. Why wait?
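
The projection is simple arithmetic, as the back-of-envelope check below shows (the eight-core starting point is an assumption for illustration):

    # Cores per chip double every 18 months; start from a hypothetical
    # 8-core part and count doublings up to a million cores.
    cores, months = 8, 0
    while cores < 1_000_000:
        cores *= 2
        months += 18
    print(f"{cores:,} cores after {months / 12:.1f} years")
    # -> 1,048,576 cores after 25.5 years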

We are building a system containing a million ARM9 cores - not dissimilar to the processor found in many mobile phones. Whilst this is not, in any sense, a powerful core, it possesses aspects that make it ideal for an assembly of the type we are undertaking. With a million cores, we estimate we can sensibly simulate - in real time - the behaviour of a billion neurons. Whilst this is less than 1% of a human brain, in the taxonomy of brain sizes it is certainly not a primitive system, and it should be capable of displaying interesting behaviour.
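
The estimate reduces to a simple real-time budget: a billion neurons on a million cores is a thousand neurons per core, and real time with a 1 ms simulation timestep (an assumed, biologically conventional figure) means each core must update all of its neurons once per millisecond:

    cores = 1_000_000
    neurons = 1_000_000_000
    timestep_s = 0.001                      # assumed 1 ms timestep

    neurons_per_core = neurons // cores     # 1,000 neurons per core
    updates_per_core_per_s = neurons_per_core / timestep_s
    print(neurons_per_core, int(updates_per_core_per_s))
    # -> 1000 neurons per core; 1,000,000 neuron updates per core per second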

A number of design axioms of the architecture are radically different to those of conventional computer systems - some would say they are downright heretical. The architecture turns out to be elegantly suited to a surprising number of application arenas, but the flagship application is neural simulation; neurobiology inspired the design.

This biological inspiration draws us to two parallel, synergistic directions of enquiry; significant progress in either direction will represent a major scientific breakthrough:

· How can massively parallel computing resources accelerate our understanding of brain function?
· How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?

Technical challenges

SpiNNaker is not just another large computing system. It incorporates - at a fundamental level - a number of unorthodox design paradigms. It is designed primarily to simulate large aggregates of neurons (a billion). To do this, a million cores are interconnected by a novel communication infrastructure - details are in the publications and on the SpiNNaker website. This involves distributing the topology of the network to be simulated throughout the topology of the processor network itself. The sheer size of both the simulating and simulated networks means that a central overseer - in almost any capacity - is not feasible, so virtually every aspect of the simulation ensemble has to be self-assembling. The estimated mean time between failures intrinsic to extremely large systems compounds the technical challenges: the simulating system has to modulate its behaviour - on the fly - in the light of component and communication failures whilst a simulation is in progress.
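
As a toy illustration of distributing the simulated network's topology through the processor network's topology - and of routing that tolerates node failure without any central overseer - the Python sketch below hashes neurons onto a small 2D torus of cores and routes spikes hop by hop, detouring locally around a core marked as failed. The grid size, mapping and detour rule are illustrative assumptions only; they are not SpiNNaker's actual algorithms.

    GRID = 8                                  # assumed 8x8 torus of cores
    failed = {(3, 3)}                         # a core we pretend has died

    def core_of(neuron_id):
        """Statically map a neuron onto a core (a deliberately naive hash)."""
        return (neuron_id % GRID, (neuron_id // GRID) % GRID)

    def step_towards(src, dst):
        """One torus hop towards dst: resolve the x dimension, then y."""
        x, y = src
        if x != dst[0]:
            x = (x + (1 if (dst[0] - x) % GRID <= GRID // 2 else -1)) % GRID
        elif y != dst[1]:
            y = (y + (1 if (dst[1] - y) % GRID <= GRID // 2 else -1)) % GRID
        return (x, y)

    def route(src, dst):
        """Walk towards dst, sidestepping failed cores with a local detour.
        Toy code: assumes the destination core itself is alive."""
        path = [src]
        while path[-1] != dst:
            nxt = step_towards(path[-1], dst)
            if nxt in failed:                 # purely local decision:
                nxt = (nxt[0], (nxt[1] + 1) % GRID)   # no global state
            path.append(nxt)
        return path

    print(route(core_of(7), core_of(1000)))   # spike path between two cores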

Contact

Professor Andrew Brown
University of Southampton, Southampton SO17 1BJ

Room Number: 59/3207
Telephone: (023) 8059 3374
Email: adb@soton.ac.uk


