Machine-Based Behaviour and Intent Prediction for Complex Aviation Environments
About the Project
Are you ready to be at the forefront of AI innovation and redefine how machines understand and predict human behaviour in complex, high-stakes environments? Join us on a cutting-edge project investigating how AI systems can infer behaviour and intent from multi-source data, supporting decision-making in dynamic aviation scenarios.
This PhD studentship is funded for 48 months through the EPSRC Industrial Doctoral Landscape Awards (IDLA). You will be based in the Department of Computer and Information Sciences at the University of Strathclyde, working in close collaboration with BAE Systems.
Operators in aviation environments rely on their ability to rapidly interpret subtle cues and infer the behaviour and intent of surrounding actors. This capability is critical for maintaining situational awareness and making effective decisions under pressure. However, modern operational environments are becoming increasingly complex, involving large numbers of interacting entities, high data volumes, and rapidly evolving conditions. As a result, human cognitive capacity is often overwhelmed, and critical signals can be missed.
Advances in AI and multi-sensor systems present an opportunity to augment human capabilities by enabling machines to continuously monitor, interpret, and reason about complex environments. In particular, there is growing interest in developing systems that can predict behaviour and intent in an explainable and trustworthy manner, supporting human operators in decision-making processes where outcomes may have significant operational, legal, and ethical implications.
This project aims to move beyond conceptual models and develop practical, engineering-ready approaches to machine-driven behaviour and intent prediction, grounded in real-world Defence use cases.
Why This Project?
AI systems are increasingly capable of analysing large-scale data, but understanding intent and predicting behaviour in dynamic, uncertain environments remains a major open challenge.
Imagine a system that can:
- Integrate information from multiple sensors and data sources in real time
- Infer the likely intentions of individuals or groups within complex scenarios
- Forecast future behaviours and potential outcomes
- Communicate its reasoning clearly to human operators
This is where you come in. You will design and develop innovative approaches to machine-based behaviour and intent prediction, contributing to next-generation AI systems that enhance situational awareness while remaining explainable, responsible, and aligned with real-world operational requirements.
Project Roadmap:
- Develop methods for generating and maintaining world and self-state knowledge models
- Design techniques for dynamic assessment of consequences and scenario evolution
- Develop models for forecasting future behaviour and intent in complex environments
- Create approaches for communicating predictions and reasoning to human operators
- Investigate legal, ethical, and explainability considerations in AI-driven decision support
- Design and implement a synthetic sandbox environment to demonstrate and evaluate proposed solutions
- Engage with stakeholders to test, refine, and validate concepts in realistic use cases
What We Are Looking For
We are seeking ambitious and curious researchers who want to push the boundaries of AI and contribute to high-impact, real-world systems.
Essential Skills/Qualifications
- A 2:1 Honours degree or Master’s degree in Computer Science, AI, Data Science, or a related field
- A strong background in machine learning, AI, or data analysis
- Proficiency in programming and model development (e.g. Python)
- Excellent written and oral communication skills
- An understanding of experimental design and evaluation
Desirable Skills/Qualifications
- Experience with multi-modal data, sensor systems, or complex systems modelling
- Interest in decision-making systems, situational awareness, or autonomous systems
- Familiarity with explainable AI or human-AI interaction
- Experience working with industry or applied research contexts
- Interest in ethical and responsible AI
How to Apply:
Applications will be processed on a first-come, first-served basis, and the hiring process will conclude as soon as a suitable candidate is identified. Interested candidates should email Dr Yashar Moshfeghi (yashar.moshfeghi@strath.ac.uk), including their contact information, motivation, and background, and attach an up-to-date CV.
Funding Notes
Both home and international students are eligible to apply. The studentship covers the full stipend and tuition fees at the home rate (not the international rate). It includes:
- A fee waiver equivalent to the Home rate; and
- A tax-free stipend of approx. £22,442 p.a. for a maximum of four years, which does not need to be paid back. This amount typically increases each year with inflation.
International students are permitted to self-fund the difference between the home and international fee rates.
We also welcome self-funded or externally funded applications.