Explainable Learning Analytics: Towards Fair and Trustworthy AI in Higher Education
About the Project
This project is open to students worldwide, but has no funding attached. Therefore, the successful applicant will be expected to fund tuition fees at the relevant level (home or international) and any applicable additional research costs. Please consider this before applying.
Learning analytics has evolved from descriptive dashboards to predictive and prescriptive models that promise personalised support and institutional insights. Yet, these systems face three persistent challenges:
- Adaptivity: Models often fail to adjust dynamically to shifting learner behaviours, contexts, and institutional practices.
- Opacity: Reliance on complex machine learning techniques, particularly deep learning, produces black-box outputs that undermine trust and hinder pedagogical uptake.
- Equity: Biases embedded in data and algorithms risk reinforcing structural inequities, disproportionately affecting underrepresented student groups.
As higher education institutions increasingly adopt AI-driven analytics, there is an urgent need for systems that are not only accurate but also transparent, adaptive, and fairness-aware. This project addresses this need by integrating advances in explainable AI (XAI) with the practical demands of learning analytics.
Specific objectives are:
- Develop adaptive modelling techniques that can update in near real-time as learner behaviours and contexts evolve.
- Integrate explainability mechanisms (for instance, feature attribution and counterfactual explanations) into predictive pipelines to enhance transparency for educators and students.
- Embed fairness-aware algorithms to detect and mitigate bias across diverse student cohorts (see the illustrative sketch after this list).
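As a rough illustration of the second and third objectives, the sketch below pairs a simple feature-attribution method (permutation importance) with a demographic parity check on synthetic learner data. It assumes a scikit-learn and NumPy stack; the feature names, cohort attribute, and data are hypothetical and are not part of the project brief.

```python
# Illustrative sketch only: hypothetical synthetic data, assumed scikit-learn/NumPy stack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical learner features: weekly logins, forum posts, quiz average.
X = rng.normal(size=(n, 3))
group = rng.integers(0, 2, size=n)  # illustrative cohort attribute (two groups)
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Explainability: permutation importance as a simple feature-attribution proxy.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["weekly_logins", "forum_posts", "quiz_avg"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")

# Fairness check: demographic parity difference, i.e. the gap in
# positive-prediction rates between the two cohorts.
pred = model.predict(X_te)
gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
print(f"demographic parity difference: {gap:.3f}")
```

In the project itself, richer attribution methods (such as SHAP values or counterfactual explanations) and formal bias-mitigation techniques would take the place of these simple proxies.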
The project will contribute to both scholarship and practice. Academically, it will enrich debates on the intersection of learning analytics, explainable AI, and fairness. Practically, it will provide higher education institutions with actionable frameworks and tools to implement trustworthy analytics, supporting more equitable and transparent student outcomes.
Informal enquiries can be made by contacting Dr M Palomino (marco.palomino@abdn.ac.uk).
Decisions will be based on academic merit. The successful applicant should have, or expect to obtain, a UK Honours Degree at 2.1 (or equivalent) in a relevant field. Confidence and independence in programming in Java or Python, and previous academic or industry experience in at least one of the following topics, would be desirable: Machine Learning, Big Data, Knowledge Representation and Reasoning.
We encourage applications from all backgrounds and communities, and are committed to having a diverse, inclusive team.
Application Procedure:
Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php.
You should apply for the Degree of Doctor of Philosophy in Computing Science to ensure your application is passed to the correct team for processing.
Please clearly note the name of the lead supervisor and the project title on the application form. If you do not include these details, your application may not be considered for the project.
Your application must include: A personal statement, an up-to-date copy of your academic CV, and clear copies of your educational certificates and transcripts.
Please note: you do not need to provide a research proposal with this application.
If you require any additional assistance in submitting your application or have any queries about the application process, please don't hesitate to contact us at researchadmissions@abdn.ac.uk.
Funding Notes
This is a self-funding project open to students worldwide. Our typical start dates for this programme are February or October.
Fees for this programme can be found at Finance and Funding | Study Here | The University of Aberdeen.