Towards incorporating causality in explainable artificial intelligence
About the Project
Machine learning has seen a meteoric rise in popularity over the past decade. However, with this success comes growing concern about the opacity of the decisions made by AI. When decisions are safety-critical or high-stakes, it is no longer enough for a model to be accurate: it must also be transparent, trustworthy, and interpretable.
This need has given rise to the field of explainable artificial intelligence (XAI), which seeks to make AI systems understandable to humans. While progress has been substantial, existing XAI methods face persistent challenges. Popular approaches are often:
- "Post-hoc": explanations are generated after the model is trained, rather than being intrinsic to the model itself (i.e. the model is not inherently interpretable by design).
- Unstable: sensitive to (for example) noise in the data or hyperparameter choices.
- Computationally costly: requiring significant additional overhead; post-hoc methods often involve training several additional models to try to understand the original model.
- Inconsistent: different methods may provide contradictory explanations, which raises the question “which one is true?”
Most current XAI techniques focus on associations: they highlight correlations between features and outcomes. This can be misleading, because correlation does not imply causation.
A promising new direction is causal explainable artificial intelligence ("causal XAI"). Causal XAI borrows ideas from causal inference, aiming to distinguish spurious correlations from genuine causal drivers of an outcome. The hope is that the associated explanations are not only interpretable but also actionable and reliable. Causal XAI offers the ability to ask "what if?" questions (counterfactual reasoning) and to simulate interventions, moving explanations from description toward understanding.
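The distinction between observing a correlation and simulating an intervention can be illustrated with a toy structural causal model (this sketch is purely illustrative and is not part of the project; the variables and mechanisms are invented for the example). A hidden confounder drives both a feature and the outcome, so the two are strongly correlated, yet intervening on the feature, the do-operation of causal inference, leaves the outcome unchanged:

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one observation from a toy structural causal model.

    z -> x and z -> y (z is a hidden confounder); x has NO causal
    effect on y. Passing do_x overrides x's mechanism, which models
    an intervention do(X = do_x).
    """
    z = random.gauss(0, 1)                                # hidden confounder
    x = z + random.gauss(0, 0.1) if do_x is None else do_x
    y = 2 * z + random.gauss(0, 0.1)                      # y depends on z only
    return x, y

mean = lambda v: sum(v) / len(v)

# Observationally, x and y are strongly correlated (both driven by z)...
obs = [sample() for _ in range(10_000)]
xs, ys = zip(*obs)
mx, my = mean(xs), mean(ys)
cov = mean([(a - mx) * (b - my) for a, b in obs])
print(f"observational covariance(x, y) ≈ {cov:.2f}")      # clearly non-zero

# ...but intervening on x leaves y unchanged: x is not a causal driver.
y_do0 = mean([sample(do_x=0.0)[1] for _ in range(10_000)])
y_do5 = mean([sample(do_x=5.0)[1] for _ in range(10_000)])
print(f"E[y | do(x=0)] ≈ {y_do0:.2f}, E[y | do(x=5)] ≈ {y_do5:.2f}")
```

An association-based explainer would flag x as important for predicting y; a causal analysis, by comparing interventional distributions, reveals that changing x does nothing to y. This gap is precisely what causal XAI aims to expose.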
As a PhD student on this project you will:
- Systematically analyse existing XAI methods to rigorously assess their strengths, weaknesses, and practical limitations.
- Develop new approaches that incorporate causal reasoning to deliver more robust, meaningful explanations.
- Apply these methods to help answer unresolved scientific questions in computational problem domains.
This project sits at the intersection of machine learning, explainability, and causal inference, which is a rapidly evolving frontier of AI research. It is ideally suited for a motivated student with strong technical skills and a desire to shape the future of trustworthy AI.
Academic qualifications
Have, or expect to achieve by the start of the studentship, a first-class honours degree or a distinction at master's level, ideally in Computer Science, Artificial Intelligence, Data Science, Mathematics, or Statistics (or equivalent), with a good fundamental knowledge of artificial intelligence, machine learning, and statistics.
English language requirement
IELTS score must be at least 6.5 (with not less than 6.0 in each of the four components). Other, equivalent qualifications will be accepted. Full details of the University’s policy are available online.
Essential attributes:
- A first-class honours degree or a distinction at master's level in a subject relevant to the PhD project (or equivalent achievements); only such candidates will be considered.
- Strong quantitative and analytical skills: ability to understand and implement complex models, work with large datasets, and perform simulations.
- Computational skills: experience with programming in Python, R, MATLAB, or similar.
- Problem-solving and critical thinking: ability to design methodological frameworks, identify limitations of existing models, and propose innovative solutions.
- Communication and collaboration skills: ability to explain complex technical concepts to interdisciplinary teams and contribute to academic publications.
- Motivation and independence: strong drive to undertake rigorous research, learn new methods, and work autonomously within a structured PhD programme.
Desirable attributes:
- Practical experience in research or industry will be considered an advantage.
APPLICATION CHECKLIST
- Completed application form
- CV
- 2 academic references, using the Postgraduate Educational Reference Form (download)
- Research project outline of 2 pages (list of references excluded). The outline may provide details about:
  - Background and motivation of the project. The motivation, explaining the importance of the project, should also be supported by relevant literature. You can also discuss the applications you expect for the project results.
  - Research questions or objectives.
  - Methodology: types of data to be used, approach to data collection, and data analysis methods.
  - List of references.
The outline must be created solely by the applicant. Supervisors can only offer general discussions about the project idea without providing any additional support.
- A statement of no more than 1 page describing your motivation and fit with the project.
- Evidence of proficiency in English (if appropriate)
To be considered, the application must use the advertised title as the project title.
Application link: https://evision.napier.ac.uk/si/sits.urd/run/siw_sso.go?XrJDjUX4vJQi9yTlLkvJDGWfbaRNzcgmh5xa6E9BBAybtiKEjW
For informal enquiries about this PhD project, please contact S.Thomson4@napier.ac.uk







