
Responsibility-aware Decision-making and AI Alignment in Multiagent Systems


Aberdeen, United Kingdom



About the Project

This project is open to students worldwide but has no funding attached. The successful applicant will therefore be expected to fund tuition fees at the relevant level (home or international) and any applicable additional research costs. Please consider this before applying.

Artificial Intelligence (AI) increasingly operates in environments where multiple autonomous systems interact with each other and with humans. Ensuring that these systems behave safely, responsibly, and in accordance with legal and ethical standards is one of the most pressing challenges in AI security and governance. This PhD project addresses these challenges by developing formal reasoning and verification frameworks for responsibility-aware decision-making in multi-agent systems (MASs) — systems in which several autonomous entities act, plan, and learn simultaneously.

The research focuses on scenarios involving sensitive information exchange, safety-critical coordination, and distributed accountability. For example, consider a road environment involving autonomous vehicles, human-driven cars, and pedestrians. If a collision occurs, the question of responsibility is complex: Which agents contributed to the outcome? Who possessed knowledge that could have prevented it? Did any agent act negligently or with intent? Answering such questions requires a formal, computational framework for reasoning about responsibility, knowledge, and intent — bridging moral, legal, and technical perspectives.

The project aims to build such a framework by combining formal verification, symbolic AI, and game-theoretic reasoning. Using tools from temporal logic, automata theory, and planning, the candidate will design algorithms that can model, verify, and explain agent behaviours in shared, uncertain, and dynamic environments. These models will enable the analysis of not only what autonomous systems can or will do, but also what they should do, given specific ethical or legal constraints.

A central methodological component will be the study of decision problems for bounded or “natural” strategies—simplified yet realistic forms of strategic reasoning that reflect human-like bounded rationality. These strategies help reduce the complexity of reasoning in large, multi-agent environments while maintaining interpretability and tractability. The research will first analyse the computational complexity of these decision problems, establishing theoretical guarantees for reasoning under strategic and informational constraints. The second stage will involve the design and implementation of a verification and reasoning tool capable of solving such problems automatically, with applications in both simulation and real-world MASs.

Game theory provides the mathematical foundation for this work, as it models systems of interacting agents with potentially conflicting goals. In the context of AI, game-theoretic verification enables the analysis of system behaviour through logical frameworks such as Alternating-time Temporal Logic (ATL) and Strategy Logic, which capture cooperation, competition, and strategic dependence. By extending these approaches to reason about responsibility, accountability, and compliance, this research contributes both to theoretical AI safety and to practical verification techniques relevant for robotics, autonomous vehicles, and AI governance.
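To make this concrete, ATL-style specifications for the road scenario above might look as follows (an illustrative sketch only; the agent names and atomic propositions are hypothetical, not part of the project description):

```latex
% Illustrative ATL formulas; AV (autonomous vehicle) and Ped (pedestrian)
% are hypothetical agent names, collision and safeCrossing hypothetical
% atomic propositions.

% "The autonomous vehicle has a strategy guaranteeing that a collision
%  never occurs, whatever the other agents do":
\langle\langle \mathit{AV} \rangle\rangle\, \mathbf{G}\, \neg\mathit{collision}

% "The vehicle and the pedestrian together have a joint strategy to
%  eventually reach a safe crossing state":
\langle\langle \mathit{AV}, \mathit{Ped} \rangle\rangle\, \mathbf{F}\, \mathit{safeCrossing}
```

Here the strategic modality ⟨⟨A⟩⟩φ reads "coalition A has a strategy to ensure φ", which is the kind of property a responsibility-aware extension would refine with knowledge and intent operators.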

The successful candidate will develop skills in formal methods, temporal logic, planning, game theory, and AI verification, with opportunities to apply these techniques to real-world autonomous systems. The project will suit a motivated student with a background in computer science, mathematics, or related disciplines, and an interest in the foundations of AI safety, explainability, and governance.

Ultimately, this research will advance the foundations of responsible AI — developing formal tools that not only ensure autonomous systems act correctly, but also help society understand why they do so.

Informal enquiries can be made by contacting Dr C Mu (chunyan.mu@abdn.ac.uk).

Decisions will be based on academic merit. The successful applicant should have, or expect to obtain, a UK Honours Degree at 2.1 (or equivalent) in Computer Science.

We encourage applications from all backgrounds and communities, and are committed to having a diverse, inclusive team.

Application Procedure:

Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php.

You should apply for the Degree of Doctor of Philosophy in Computing Science to ensure your application is passed to the correct team for processing.

Please clearly note the name of the lead supervisor and the project title on the application form. If you do not include these details, your application may not be considered for the project.

Your application must include: A personal statement, an up-to-date copy of your academic CV, and clear copies of your educational certificates and transcripts.

Please note: you do not need to provide a research proposal with this application.

If you require any additional assistance in submitting your application, or have any queries about the application process, please don't hesitate to contact us at researchadmissions@abdn.ac.uk.

Funding Notes

This is a self-funding project open to students worldwide. Our typical start dates for this programme are February or October.

Fees for this programme can be found on the Finance and Funding pages of the University of Aberdeen website.
