Exploring Bias in Retrieval-Augmented Generation (RAG) Models
About the Project
Are you ready to be at the forefront of AI innovation and redefine how we build ethical, inclusive language models? Join us on a cutting-edge project investigating hidden biases in Retrieval-Augmented Generation (RAG) models as part of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) Project. This PhD studentship is funded through the Research Excellence Awards (REA) within the John Anderson Research Studentship Scheme (JARSS). You will be based in the Department of Computer and Information Sciences at the University of Strathclyde.
About the PHAWM Project
The PHAWM Project addresses the challenge of developing trustworthy and safe AI systems by focusing on systematic harm auditing. As AI systems become embedded in critical areas such as healthcare, media content, and cultural heritage, they bring the potential for harmful consequences, including biased decision-making and misinformation.
To tackle this, PHAWM introduces the concept of participatory AI auditing, which involves diverse stakeholders – domain experts, regulators, decision subjects, and end-users – assessing AI systems either collectively or individually. The project will deliver workbenches and auditing methodologies that guide stakeholders in evaluating the quality and potential harms of AI, helping to create safer, more trustworthy AI solutions. Your PhD research will directly contribute to PHAWM’s mission by developing bias detection and mitigation methodologies for RAG models.
Why This PhD Is Your Opportunity to Innovate
RAG models are transforming the AI landscape by combining powerful Large Language Models (LLMs) with external information retrieval mechanisms to deliver more accurate and context-aware responses. These models are reshaping industries such as technology, healthcare, and education. However, RAG models also inherit biases not only from their pre-trained LLMs but also from the external information they retrieve, creating challenges in ethics, fairness, and trust.
Imagine search results reinforcing harmful stereotypes or generating biased outputs due to skewed external documents. This is where your research will make a difference, helping to uncover hidden biases and create solutions that improve fairness, accuracy, and inclusivity across AI systems.
What You Will Do
You will design innovative methodologies for detecting and mitigating biases embedded within RAG models, contributing to participatory AI auditing frameworks. The project will explore techniques such as indirect prompting and role-specific simulations to reveal hidden tendencies and improve auditing accuracy.
Key Objectives:
- Develop scalable and reproducible tools to measure biases introduced by both LLMs and retrieved external content.
- Design indirect prompting techniques to uncover implicit biases without triggering evasive responses.
- Simulate real-world contexts (e.g., fact-checker, policy analyst) to identify and assess systemic bias in practical applications.
- Optimise the retrieval process to improve fairness by analysing the role of external information sources in bias formation.
- Align findings with ethical AI standards and regulatory frameworks, including the EU AI Act, and contribute to the development of participatory harm auditing methodologies.
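As a toy illustration of the first objective, measuring bias introduced by retrieved content, the sketch below computes a simple representational-disparity score over a hypothetical set of retrieved documents. The corpus, the term groups, and the metric itself are all illustrative assumptions for this advert, not the project's auditing methodology:

```python
from collections import Counter

# Toy corpus standing in for documents returned by a retriever.
# In a real RAG audit these would come from the retrieval component.
RETRIEVED_DOCS = [
    "The nurse said she would check the chart.",
    "The engineer said he fixed the build.",
    "The engineer said he reviewed the design.",
    "The doctor said he would call back.",
]

# Hypothetical term groups for a simple representational-bias probe.
GROUPS = {
    "female": {"she", "her", "hers"},
    "male": {"he", "him", "his"},
}

def group_counts(docs):
    """Count occurrences of each group's terms across the retrieved set."""
    counts = Counter({group: 0 for group in GROUPS})
    for doc in docs:
        tokens = doc.lower().replace(".", "").split()
        for group, terms in GROUPS.items():
            counts[group] += sum(1 for t in tokens if t in terms)
    return counts

def disparity_ratio(counts):
    """Ratio of most- to least-mentioned group (1.0 = perfectly balanced)."""
    values = list(counts.values())
    return max(values) / min(values) if min(values) > 0 else float("inf")

counts = group_counts(RETRIEVED_DOCS)
print(counts)                   # term counts per group for the toy corpus
print(disparity_ratio(counts))  # 3.0: "male" terms appear 3x as often
```

A real audit would replace the keyword counts with richer signals (stance, sentiment, entity coverage) and compare generated outputs with and without the retrieved context, but even this toy metric shows how skew in the retrieval layer can be quantified separately from the LLM itself.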
What We Are Looking For
We are seeking ambitious and curious researchers who want to push the boundaries of AI and create meaningful change.
Essential Skills:
- A 2:1 Honours degree or Master’s degree in relevant fields such as Computer Science, Data Science, AI, or Information Retrieval.
- A strong background in AI, machine learning, and retrieval systems.
- Proficiency in programming, data analysis, and model development.
- Excellent written and oral communication in English.
- An understanding of the research lifecycle, including designing experiments, evaluating methods, and interpreting results.
Desirable Skills:
- Knowledge of RAG models, retrieval mechanisms, and LLM architectures.
- Experience with bias mitigation, ethical AI, or fairness in AI systems.
- Familiarity with LLM fine-tuning, retrieval optimisation, and prompt engineering.
- The ability to work collaboratively with external research partners and interdisciplinary teams.
- A proactive mindset and creativity to challenge assumptions and drive innovation.
How to Apply:
Interested candidates should email Dr. Yashar Moshfeghi (yashar.moshfeghi@strath.ac.uk) and include the following attachments:
- Cover letter detailing contact information, motivation, background, and proposed research direction (max 3 pages).
- Up-to-date CV.
- Transcripts and certificates of all degrees.
- Two references, one academic.
Applications will be processed on a 'first come, first served' basis, and the hiring process will conclude as soon as a suitable candidate is identified.