Event-based computing for real-time computer vision systems
About the Project
Supervisory Team: Dr Firman Simanjuntak and Prof Mark Zwolinski
This PhD project develops ultra-low-power, DVS-free computer vision hardware by creating event-based chips using nanofabrication. The work spans chip fabrication, signal encoding, and real-world system demonstration, aiming to replace costly DVS cameras and enable fast, efficient AI image processing with conventional cameras. Techniques include lithography, circuit simulation, and FPGA implementation.
Computer vision systems (CVS) help machines see and understand the world around them, for example in self-driving cars, robots, and security cameras that must respond instantly and efficiently. Today's CVSs capture full pictures or videos at fixed times, like taking snapshots or video frames, and so struggle to keep up in fast-changing situations; meanwhile, most of the information in each picture changes very little from frame to frame. As a result, these systems are inefficient, slow, and power-hungry. Event-based computing takes a different approach: instead of capturing everything all the time, event-based systems notice and respond only to things that change, such as movement or flashes of light, right when they happen. This means AI hardware can process much faster and use far less energy to recognise patterns such as moving objects. Nevertheless, the CVSs on the market today still rely on expensive dynamic vision sensor (DVS) cameras.
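To illustrate the event-based principle in software (this is a conceptual analogy, not the project's hardware approach), a DVS-style sensor can be emulated from conventional frames by emitting an event only at pixels whose brightness changes beyond a threshold; the `threshold` value and the `(x, y, polarity)` event format here are illustrative assumptions:

```python
import numpy as np

def frames_to_events(prev, curr, threshold=15):
    """Emit (x, y, polarity) events only where brightness changes by
    more than `threshold`, mimicking a DVS-style sensor in software.
    `prev` and `curr` are greyscale frames as integer arrays."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarities = np.sign(diff[ys, xs])  # +1 = brighter, -1 = darker
    return list(zip(xs.tolist(), ys.tolist(), polarities.tolist()))

# A static scene produces no events; only the changed pixel does.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # a single pixel brightens
print(frames_to_events(prev, curr))  # [(2, 1, 1)]
```

Because unchanged pixels produce no output at all, the downstream AI hardware only processes the handful of events per frame rather than every pixel, which is where the speed and energy savings come from.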
This research project will develop novel event-based computing chips and their AI-hardware implementation, enabling ultra-low-power, low-cost, DVS-free CVS.
In the first year, you will build event-based computing chips based on emerging nanodevices, utilising our state-of-the-art cleanroom facility. You will learn nanofabrication (thin-film engineering and lithography techniques) to fabricate a massive, wafer-scale array of memdiodes, and surface/interface analysis (advanced microscopy and spectroscopy tools) to evaluate the quality of the chips.
In the second year, you will develop a protocol to encode and decode input signals generated by moving images. You will learn electrical characterisation techniques to evaluate the response of the chips and simulate how these chips compute moving images.
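The encode/decode protocol itself is an open question for the project; as a hedged illustration of the kind of scheme involved, a common baseline for event cameras is an address-event representation (AER), which packs each event's pixel address, polarity, and timestamp into a single word. The field widths below are assumptions for the sketch, not the project's actual protocol:

```python
# Illustrative AER-style packing: 10-bit x, 10-bit y, 1-bit polarity,
# with the remaining high bits carrying a timestamp. Field widths are
# assumptions for this sketch, not the project's actual protocol.
X_BITS, Y_BITS, P_BITS = 10, 10, 1

def encode_event(x, y, polarity, timestamp):
    """Pack one event into a single integer word."""
    word = timestamp
    word = (word << P_BITS) | (polarity & 1)
    word = (word << Y_BITS) | (y & ((1 << Y_BITS) - 1))
    word = (word << X_BITS) | (x & ((1 << X_BITS) - 1))
    return word

def decode_event(word):
    """Recover (x, y, polarity, timestamp) from a packed word."""
    x = word & ((1 << X_BITS) - 1); word >>= X_BITS
    y = word & ((1 << Y_BITS) - 1); word >>= Y_BITS
    polarity = word & 1; word >>= P_BITS
    return x, y, polarity, word

w = encode_event(640, 360, 1, 12345)
print(decode_event(w))  # (640, 360, 1, 12345)
```

Fixed-width words like this map naturally onto the hardware buses and FPGA logic used later in the project, since every event occupies the same number of bits regardless of when it occurs.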
Finally, in the third year, you will test the full system in real-world conditions, making sure it works reliably and more efficiently than today's CVSs. You will design and build the system on FPGAs to process image data (captured by a conventional camera) in real time via an AI algorithm.
Entry Requirements
You must have a UK 2:1 honours degree, or its international equivalent.
Fees & Funding
We offer a range of funding opportunities for both UK and international students. Horizon Europe fee waivers automatically cover the difference between overseas and UK fees for qualifying students.
Competition-based Presidential Bursaries from the University cover the difference between overseas and UK fees for top-ranked applicants.
Competition-based studentships offered by our schools typically cover UK-level tuition fees and a stipend for living costs for top-ranked applicants.
Funding will be awarded on a rolling basis, so apply early for the best opportunity to be considered.
For more information, please visit our postgraduate research funding pages.
How to Apply
You need to:
- choose programme type (Research), 2026/27, Faculty of Engineering and Physical Sciences
- select Full time or Part time
- search for programme PhD Electronic & Electrical Engineering (7092)
- add name of the supervisor in section 2 of the application
Applications should include:
- research proposal
- your CV (resumé)
- 2 academic references
- degree transcripts and certificates to date
- English language qualification (if applicable)