Multimodal Learning for Human-Centered Healthcare: Motion Understanding and Medical Imaging
About the Project
Project summary and focus: This project aims to develop robust multimodal machine learning methods for healthcare data analysis, leveraging and integrating diverse data sources to improve patient monitoring, diagnosis, and disease management, with specific applications in monitoring mobility difficulties and computer-assisted diagnosis. It targets two key areas: human action and mobility assessment, and medical image analysis.
- Human Pose Estimation and Action Understanding for Healthcare Monitoring:
Leverage RGB images, video data, 3D skeletal representations, and wearable sensors to enable robust human action analysis. This research focuses on developing AI models that interpret human movement across human-centered applications, with particular emphasis on indoor monitoring of individuals with Parkinson’s disease and mobility impairments (e.g., after stroke). Applications include analysing rehabilitation videos and gait-lab data to support accurate pose estimation and action quality assessment for patient monitoring and recovery.
- Multimodal Learning for Interpretable and Data-Efficient Medical Image Analysis:
Develop AI-driven methods for robust analysis across diverse medical imaging modalities. The project will explore how multimodal signals (e.g., expert attention, sparse annotations, and clinical text reports) can be effectively integrated with diagnostic imaging to build data-efficient, interpretable, and clinically reliable models for disease diagnosis and decision support.
Candidate Profile:
Required: 1) MSc in a relevant field (e.g., computer science, applied mathematics, image processing, or biomedical engineering); 2) a strong background in deep learning / machine learning; 3) programming skills (Python, PyTorch / TensorFlow)
Preferred: 1) a biomedical imaging background; 2) a publication or open-source track record; 3) GPU / HPC experience
This project offers opportunities to work with real clinical data and collaborate with interdisciplinary partners in medicine and healthcare research. Please contact qianhui.men@bristol.ac.uk with your CV if you are interested in these research topics.