Academic Jobs Logo
Post My Job Jobs

Information Security in AI Agents

Applications Close:

Post My Job

Worcester, United Kingdom

Academic Connect
5 Star Employer Ranking

Information Security in AI Agents

About the Project

This PhD study aims to improve information security in Artificial Intelligence (AI) agents and agentic systems. AI agents complete different tasks when we provide input for them. An agent can interact with users (like a personal assistant), autonomously execute tasks (trading bots), and operate within constraints (a recommender system) (Sapkota, Roumeliotis, & Karkee, 2026).

An agent system refers to one or more AI agents which are organised to achieve broader goals. Such a system can include multiple cooperating or competing agents. A multi-agents orchestration allows agents collaborate with other AI agents and completes the tasks faster. Familiarity with agents and appropriate use of them is an important factor that help to work parallel or sequential in domains such as supply chain management, health, HR, and so on (Hosseini & Seilani, 2025).

An agent system brings productivity, and high level of automation to different processes which we have in organizations due to abilities of thinking, reasoning, and acting. In this study we have focused on information security and privacy protection in AI agents due to importance of the subject and lack of rigorous research in this domain.

Autonomous and semi-autonomous AI agents increasingly read sensitive context, call external tools, and act across networks. This expanded capability surface introduces novel information security risks that differ from traditional application threats: instructions are encoded in natural language; context windows import untrusted data; and tool use can weaponize benign models into powerful attack orchestrators (Deng et al., 2025). Hallucinations, Adversarial manipulation, and False-positive are examples of challenges in this domain (Casheekar, Lahiri, Rath, Prabhakar, & Srinivasan, 2024).

This project:

  1. systematize the threat landscape for AI agents,
  2. build reproducible benchmarks and metrics for information‑security outcomes,
  3. design and empirically evaluate a stack of defensive patterns from prompt‑level hardening to operating‑system sandboxing and provenance, and
  4. develop practical assurance artifacts (policies, test suites, monitoring blueprints) that organizations can adopt.

Supervisory Team

  • Dr Nader Sohrabi Safa
  • Dr Christopher Bowers

Research Group: Digital Innovation and Intelligent Systems Research Group

Application Process

To begin the application process please go to: https://www.worc.ac.uk/research/research-degrees/applying-for-a-phd/.

10

Unlock this job opportunity


View more options below

View full job details

See the complete job description, requirements, and application process

20 Jobs Found
View More