PhD Studentship in Computer Science: Empirical Security Assessment of AI Decision Engines in Cyber-Physical Systems
About the Project
Artificial Intelligence (AI)-driven decision engines are increasingly embedded in cyber-physical systems (CPS) such as autonomous vehicles, smart grids, industrial control systems, robotics, healthcare devices, and intelligent transport infrastructure. While significant research has focused on improving the performance, efficiency, and autonomy of such systems, their empirical security assessment remains underdeveloped, representing a growing and critical gap.
Recent studies have highlighted vulnerabilities in learning-enabled CPS, demonstrating that small, carefully crafted perturbations can cause unsafe or malicious behaviours. However, much of the literature focuses either on theoretical attack construction or on single-system evaluations, rather than on systematic, reproducible, and comparative empirical assessments across classes of AI decision engines. This PhD project aims to address that gap by developing a rigorous empirical framework for evaluating the security robustness, failure modes, and operational risks of AI decision engines in cyber-physical environments.
Impact and Relevance
The proposed research directly addresses challenges faced by industries deploying AI‑enabled CPS in safety‑critical environments such as transportation, energy, and manufacturing. By providing empirical evidence rather than purely theoretical assurances, the project will support regulators, system designers, and security engineers in making informed deployment decisions. The outcomes are expected to influence both academic research and industrial best practices for trustworthy AI in cyber‑physical systems.