Active Sensing through Learning: Generalising Perception Across Tasks, Sensors, and Robots
About the Project
Modern robotic systems are increasingly deployed in uncertain environments. To perform tasks in such scenarios, robots need more than motion planning and control: they must also make informed decisions about how and where to sense their environment to support effective operation. This project aims to develop robotic agents that learn how to perceive based on their tasks, sensor feedback, and embodiment, moving beyond fixed sensing strategies toward systems that reason about their uncertainty and act to reduce it (Bajcsy, 1988).
The project extends previous work on model-based strategies for optimal active sensing, in which robot trajectories are generated to maximize the information gathered by onboard sensors, typically by optimizing information-theoretic quantities such as observability/constructibility Gramians (Napolitano et al., 2021; Napolitano et al., 2022) or Fisher information metrics (Hausman et al., 2017). These techniques have proven effective in state estimation, model learning (Napolitano et al., 2024), and motion planning (Salaris et al., 2019). However, conventional active sensing approaches typically rely on known system dynamics, fixed sensor configurations, and manually designed objective functions. These constraints limit generalization and adaptability, particularly when robots must interpret high-level goals (e.g., “inspect the turbine,” “look for signs of danger”) or operate in unfamiliar environments.
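To make the Gramian-based criterion concrete, here is a minimal numerical sketch of an empirical observability Gramian: the sensitivity of a measurement history to the initial state is estimated by finite differences, and the Gramian's smallest eigenvalue indicates the worst-observed state direction. The unicycle model and range-only landmark measurement are illustrative assumptions, not part of the project description.

```python
import numpy as np

def simulate(x0, controls, dt=0.1):
    # Unicycle: state [x, y, theta]; measurement = range to a landmark at the origin.
    x = np.array(x0, dtype=float)
    ys = []
    for v, w in controls:
        x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        ys.append(np.hypot(x[0], x[1]))  # range-only measurement
    return np.array(ys)

def empirical_obs_gramian(x0, controls, eps=1e-5):
    # Finite-difference sensitivity of the measurement history to the initial state.
    n = len(x0)
    cols = []
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        cols.append((simulate(np.add(x0, d), controls)
                     - simulate(np.subtract(x0, d), controls)) / (2 * eps))
    J = np.stack(cols, axis=1)  # (T, n) sensitivity matrix
    return J.T @ J              # (n, n) empirical observability Gramian

controls = [(1.0, 0.5)] * 50    # a turning maneuver, which excites the range measurement
G = empirical_obs_gramian([2.0, 0.0, 0.0], controls)
print(np.linalg.eigvalsh(G))    # smallest eigenvalue = least observable direction
```

An active-sensing planner would choose `controls` to maximize a scalar function of `G` (e.g., its smallest eigenvalue or determinant), which is the kind of objective the cited Gramian-based methods optimize.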
This project proposes a step forward: the development of a learning-based active perception framework in which a robot learns, from experience, how to select sensing actions that are maximally informative and task-relevant. The key research question is: How can a robot learn to generate sensing strategies that generalize across tasks, sensor modalities, and robot morphologies?
The agent will learn to map task descriptions and sensor data to informative actions that support task completion. To this end, the project will explore several learning-based approaches: reinforcement learning for policy optimization under uncertainty (Chi et al., 2023), large language models (LLMs) for interpreting task intent (Driess et al., 2023), diffusion models for generating diverse action trajectories (Janner et al., 2022; Pan et al., 2024), and physics-informed neural networks (PINNs) for embedding domain knowledge into learning. One or more of these directions may be pursued, depending on the application and research focus. Core experiments will begin in simulation, with transfer to real robotic platforms across different sensing and actuation configurations.
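The core idea of selecting "maximally informative" sensing actions can be illustrated with a toy Bayesian example: a target hides in one of several cells, each sensing action inspects one cell with a noisy detector, and the agent greedily picks the action that minimizes the expected posterior entropy (equivalently, maximizes mutual information between the observation and the target). The detection probabilities and the discrete setting are illustrative assumptions; the project's learned policies would replace this hand-designed rule.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, a, p_hit=0.9, p_false=0.1):
    # Sensing action a = "inspect cell a"; binary observation z in {0, 1}.
    like1 = np.where(np.arange(len(prior)) == a, p_hit, p_false)  # P(z=1 | target=i)
    h = 0.0
    for like in (like1, 1 - like1):        # z = 1, then z = 0
        joint = like * prior
        pz = joint.sum()
        if pz > 0:
            h += pz * entropy(joint / pz)  # P(z) * H(posterior | z)
    return h

def greedy_sensing_action(prior):
    # Pick the cell whose observation most reduces expected uncertainty.
    return min(range(len(prior)), key=lambda a: expected_posterior_entropy(prior, a))

prior = np.array([0.05, 0.05, 0.6, 0.2, 0.1])
print(greedy_sensing_action(prior))  # → 2: inspecting the most uncertain cell
```

A learned active-perception policy generalizes this one-step greedy rule: instead of enumerating actions against a known observation model, it maps task context and sensor history directly to informative actions.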
Application domains include autonomous inspection, search and rescue, and scientific exploration, where robots must decide not only where to go or what to do, but what information they need and how best to obtain it. For example, a quadrupedal robot navigating a partially collapsed environment must decide where to place its feet not only for safe locomotion, but also to actively collect tactile data that helps reconstruct the terrain geometry and detect signs of structural instability or human presence. Similarly, a planetary rover exploring Martian landscapes must determine which soil patches or rock formations are most likely to yield scientifically valuable information.
This PhD project will contribute to combining deep learning, control theory, and AI to create robots that actively learn what to observe, why, and how. The successful candidate will have the opportunity to explore cutting-edge techniques, shape their methodological path, and contribute to a new generation of robotic systems that adapt their sensing strategies to the world, the task, and themselves.
Funding Notes
There is no funding for this project.
References
- Bajcsy, Ruzena. "Active perception." Proceedings of the IEEE 76.8 (1988): 966-1005.
- Napolitano, Olga, et al. "Gramian-based optimal active sensing control under intermittent measurements." 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
- Napolitano, Olga, et al. "Information-aware Lyapunov-based MPC in a feedback-feedforward control strategy for autonomous robots." IEEE Robotics and Automation Letters 7.2 (2022): 4765-4772.
- Hausman, Karol, et al. "Observability-aware trajectory optimization for self-calibration with application to UAVs." IEEE Robotics and Automation Letters 2.3 (2017): 1770-1777.
- Napolitano, Olga, et al. "Active sensing for data quality improvement in model learning." IEEE Control Systems Letters 8 (2024): 1433-1438.
- Salaris, Paolo, et al. "Online optimal perception-aware trajectory generation." IEEE Transactions on Robotics 35.6 (2019): 1307-1322.
- Chi, Cheng, et al. "Diffusion policy: Visuomotor policy learning via action diffusion." The International Journal of Robotics Research (2023): 02783649241273668.
- Driess, Danny, et al. "PaLM-E: An embodied multimodal language model." (2023).
- Janner, Michael, et al. "Planning with diffusion for flexible behavior synthesis." arXiv preprint arXiv:2205.09991 (2022).
- Pan, Chaoyi, et al. "Model-based diffusion for trajectory optimization." Advances in Neural Information Processing Systems 37 (2024): 57914-57943.