
Trustworthy multi-agent LLM systems for security and privacy in software engineering


Edinburgh, United Kingdom



About the Project

Large Language Models (LLMs) are rapidly transforming software development and software engineering. Instead of relying on a single model, recent approaches use multiple AI agents to carry out software development activities. This shift has increased speed and brought new capabilities to the field, but it has also introduced new risks. LLMs are already known to generate insecure or faulty code, and when several agents collaborate, vulnerabilities may not only go undetected but may also be amplified. Beyond security, the trustworthiness of these systems, covering reliability, accountability, and transparency, becomes a critical concern when AI agents are responsible for building software that may run in production.

In this PhD project, you will study the state of the art in multi-agent AI coding systems and investigate both the root causes of their vulnerabilities and the factors that affect their trustworthiness. The goal is to understand how these systems can fail, why vulnerabilities emerge, and what mechanisms can improve their safety, robustness, and transparency. Ultimately, the project aims to develop principles and infrastructure that make multi-agent development safer, more reliable, and more trustworthy.

The research will be guided by the following questions:

  1. Scenarios of vulnerability: In what situations does multi-agent software development become more prone to introducing security flaws, reliability issues, or untrustworthy outcomes?
  2. Mitigation strategies: What techniques or approaches can reduce or eliminate these vulnerabilities and increase trustworthiness in multi-agent systems?
  3. Infrastructure support: What frameworks, architectures, or monitoring tools can decrease the likelihood that multi-agent systems produce vulnerable or untrustworthy applications?
  4. Human–AI oversight: What role should human developers play in supervising, validating, or correcting agent behaviour to ensure trustworthy outcomes?

The outcomes of this research will contribute to both theory and practice. From a theoretical perspective, the project will clarify the mechanisms by which vulnerabilities and trust issues arise in AI-driven development. From a practical perspective, it will provide concrete tools, methods, and design principles to make multi-agent LLM systems more secure, reliable, and trustworthy for real-world software engineering.

Academic qualifications

Applicants must have, or expect to achieve by the start of the studentship, a first-class honours degree or a distinction at master's level, ideally in Computer Science, Software Engineering, or Cyber Security, with a good fundamental knowledge of computer programming.

English language requirement

An IELTS score of at least 6.5 is required (with no less than 6.0 in each of the four components). Other, equivalent qualifications will be accepted. Full details of the University's policy are available online.

Essential attributes:

  • Solid foundation in software engineering or cyber security (especially software security)
  • Good understanding of AI/ML
  • Openness to learning about responsible AI and trust frameworks


APPLICATION CHECKLIST

  • Completed application form
  • CV
  • 2 academic references, using the Postgraduate Educational Reference Form
  • Research project outline of 2 pages (list of references excluded). The outline may provide details about:
    1. Background and motivation of the project. The motivation, explaining the importance of the project, should also be supported by relevant literature. You may also discuss the applications you expect for the project results.
    2. Research questions or objectives.
    3. Methodology: types of data to be used, approach to data collection, and data analysis methods.
    4. List of references.

The outline must be created solely by the applicant. Supervisors may only offer general discussion of the project idea and cannot provide any further support.

  • A statement of no more than 1 page describing your motivation and fit with the project.
  • Evidence of proficiency in English (if appropriate)

To be considered, the application must use the advertised title as the project title.

For informal enquiries about this PhD project, please contact Prof Ashkan Sami - A.Sami@napier.ac.uk

References

  1. Jahić, J., & Sami, A. (2024, June). State of practice: LLMs in software engineering and software architecture. In 2024 IEEE 21st International Conference on Software Architecture Companion (ICSA-C) (pp. 311-318). IEEE.
  2. Verdi, M., Sami, A., Akhondali, J., Khomh, F., Uddin, G., & Motlagh, A. K. (2020). An empirical study of C++ vulnerabilities in crowd-sourced code examples. IEEE Transactions on Software Engineering, 48(5), 1497-1514.
  3. He, J., Treude, C., & Lo, D. (2025). LLM-Based Multi-Agent Systems for Software Engineering: Literature Review, Vision, and the Road Ahead. ACM Transactions on Software Engineering and Methodology, 34(5), 1-30.


Edinburgh Napier University

9 Sighthill Ct, Edinburgh EH11 4BN, UK
Closes: Jul 7, 2026