Postgraduate research project

Event-based computing for real-time computer vision systems

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This PhD project develops ultra-low-power computer vision hardware that does not require a dynamic vision sensor (DVS), by creating event-based chips using nanofabrication. The work spans chip fabrication, signal encoding, and real-world system demonstration, aiming to replace costly DVS cameras and enable fast, efficient AI image processing with conventional cameras. Techniques include lithography, circuit simulation, and FPGA implementation.

Computer vision systems (CVSs) help machines see and understand the world around them, e.g. self-driving cars, robots, and security cameras that need to work instantly and efficiently. Today's CVSs capture full pictures or videos at fixed times, like taking snapshots or video frames, yet most of the information in each picture changes very little from one frame to the next. As a result, these systems struggle to keep up in fast-changing situations and are inefficient, slow, and power-hungry. Event-based computing is a new way of doing this. Instead of capturing everything all the time, event-based systems only notice and respond to things that change, like movement or flashes of light, right when they happen. This means AI hardware can process much faster and use far less energy to recognise patterns such as moving objects. Nevertheless, the CVSs on the market today still rely on expensive dynamic vision sensor (DVS) cameras.
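The contrast between frame-based and event-based sensing can be sketched in a few lines of Python. This is a toy illustration, not the project's actual design: it compares two grayscale frames and emits an event only where a pixel's brightness changed by more than a threshold, so a mostly static scene produces almost no data.

```python
def frame_to_events(prev, curr, threshold=15):
    """Emit (x, y, polarity) events only where the brightness change
    between two grayscale frames exceeds the threshold -- the core
    idea behind event-based vision. Illustrative sketch only."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_prev, row_curr)):
            diff = c - p
            if abs(diff) >= threshold:
                events.append((x, y, 1 if diff > 0 else -1))
    return events

# A static background with one bright spot moving right: only the
# change produces events; every unchanged pixel generates no data.
prev = [[0, 0, 0],
        [0, 200, 0],
        [0, 0, 0]]
curr = [[0, 0, 0],
        [0, 0, 200],
        [0, 0, 0]]
print(frame_to_events(prev, curr))  # [(1, 1, -1), (2, 1, 1)]
```

A DVS performs this comparison in analogue circuitry at each pixel; the project's aim is to achieve a similar sparse, change-driven representation without the expensive sensor.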

This research project will develop novel event-based computing chips and their AI-hardware implementation, enabling ultra-low-power, low-cost CVSs that do not require a DVS.

In the first year, you will build event-based computing chips based on emerging nanodevices, using our state-of-the-art cleanroom facility. You will learn nanofabrication (thin-film engineering and lithography techniques) to fabricate wafer-scale massive arrays of memdiodes, and surface/interface analysis (advanced microscopy and spectroscopy tools) to evaluate the quality of the chips.

In the second year, you will develop a protocol to encode and decode input signals generated by moving images. You will learn electrical characterisation techniques to evaluate the response of the chips and simulate how these chips process moving images.

Finally, in the third year, you will test the full system in real-world conditions, making sure it works reliably and more efficiently than today's CVSs. You will design and build the system on FPGAs to process image data (captured by a conventional camera) in real time via an AI algorithm.
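The kind of lightweight, per-event processing such an FPGA pipeline might run can be prototyped in software first. The sketch below is a stand-in, not the project's actual AI algorithm: it keeps a short sliding window of recent events and tracks the centroid of activity, updating its estimate as each event arrives rather than waiting for a full frame.

```python
from collections import deque

def track_centroid(event_stream, window=100):
    """Toy event-driven processing stage: keep only the most recent
    events and yield the running centroid of activity after each one.
    Illustrative of per-event streaming computation, not the project's
    actual algorithm."""
    recent = deque(maxlen=window)
    for (x, y, _pol) in event_stream:
        recent.append((x, y))
        cx = sum(p[0] for p in recent) / len(recent)
        cy = sum(p[1] for p in recent) / len(recent)
        yield (cx, cy)

# A spot drifting right: the centroid estimate follows it event by event.
stream = [(0, 0, 1), (2, 0, 1), (4, 0, 1), (6, 0, 1)]
print(list(track_centroid(stream, window=2))[-1])  # (5.0, 0.0)
```

Because each update touches only a handful of events, this style of computation maps naturally onto low-power hardware, which is why event-based pipelines can outperform frame-based ones in both latency and energy.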