University of Oxford
Postdoctoral Research Assistant in AI + Security (Early Career Fellowship)

Postdoctoral Research Assistant

Applications Close

02-Mar-2026 12:00

Location

Central Oxford, University of Oxford

Type

Full-time, fixed-term (24 months)

Salary

Grade 7, £39,424–£47,779 per annum

Required Qualifications

PhD/DPhil in theoretical computer science, information theory, cryptography, security, game theory, or a related field
Strong mathematical maturity and a record of producing rigorous theory
Programming fluency (Python, ML tooling)
Excellent communication and publication skills

Research Areas

Multi-Agent Security
AI Security
Information Theory
Cryptography
Game Theory

Postdoctoral Research Assistant in AI + Security (Early Career Fellowship)

We are seeking a full-time Postdoctoral Research Assistant to join the Oxford Witt Lab for Trust in AI (OWL) at the Department of Engineering Science (Central Oxford). The post is funded by Schmidt Sciences (AI2050 Early Career Fellowship) and is fixed-term for 24 months, with a possible extension beyond this date subject to satisfactory performance and mutual agreement.

The successful candidate will develop foundational security theory for agentic and multi-agent AI systems, as part of our research programme in multi-agent security. You will build formal models of security-relevant behaviour in interactive systems and derive rigorous results that clarify key limits, trade-offs, and conditions under which stronger guarantees are possible. Alongside theory development, you will run targeted computational experiments (e.g., small-scale simulations or proof-of-concept implementations) to validate assumptions and connect formal insights to realistic agentic settings. You will be responsible for developing and analysing formal models; deriving provable guarantees, impossibility results, and trade-offs; designing attack–defence formulations and evaluation protocols; publishing research in leading conferences and journals; contributing documented software to the group library; and supporting the supervision of graduate and undergraduate research projects. The postholder will have substantial ownership over one or more core research thrusts within the lab’s multi-agent security programme.

You should hold a PhD/DPhil (completed or near completion) in theoretical computer science, information theory, cryptography, security, game theory, or a closely related area (ML is acceptable with strong theory depth). You will have strong mathematical maturity with evidence of producing rigorous, security-relevant theory; sufficient programming fluency (e.g., Python, with familiarity with common ML tooling) to run computational experiments; and excellent communication skills, including writing for publication and presenting results.

Informal enquiries may be addressed to Dr Christian Schroeder de Witt (christian.schroeder@eng.ox.ac.uk).

For more information about working at the Department, see www.eng.ox.ac.uk/about/work-with-us/

Only online applications received before midday on 2 March 2026 can be considered. You will be required to upload a covering letter/supporting statement, including a brief statement of research interests (describing how past experience and future plans fit with the advertised position), CV and the details of two referees as part of your online application.

The Department holds an Athena Swan Bronze award, highlighting its commitment to promoting women in Science, Engineering and Technology.



Frequently Asked Questions

🎓What qualifications are required for this Postdoctoral Research Assistant in AI Security role?

Candidates must hold a completed or near-completion PhD/DPhil in theoretical computer science, information theory, cryptography, security, game theory, or a related field (ML with strong theory depth is acceptable). They should also demonstrate strong mathematical maturity, experience producing rigorous security-relevant theory, programming fluency in Python, and excellent communication skills for publication.

🔬What are the main responsibilities in this AI + Security postdoc position?

The postholder will develop formal models of security in agentic and multi-agent AI systems; derive provable guarantees, impossibility results, and trade-offs; design attack–defence formulations and evaluation protocols; run computational experiments; publish in leading venues; contribute documented software to the group library; and support the supervision of graduate and undergraduate research projects. They will have substantial ownership over one or more core research thrusts in the Oxford Witt Lab.

📅What is the application deadline and process for this Oxford AI Security postdoc?

Applications close at midday on 2 March 2026, and only online applications received before then will be considered. You will need to upload a covering letter/supporting statement including a brief statement of research interests (describing how past experience and future plans fit the position), a CV, and the details of two referees. Informal enquiries may be addressed to Dr Christian Schroeder de Witt.

💰What is the salary and employment terms for this University of Oxford postdoc?

The salary is Grade 7, £39,424–£47,779 per annum. The post is full-time and fixed-term for 24 months, funded by the Schmidt Sciences AI2050 Early Career Fellowship, with a possible extension subject to satisfactory performance and mutual agreement. It is based in central Oxford.

🛡️What research areas does this AI Security postdoc at Oxford focus on?

The research focuses on multi-agent security, AI security, information theory, cryptography, and game theory, building foundational security theory for interactive AI systems alongside targeted computational experiments. The post is based in the Oxford Witt Lab for Trust in AI (OWL) at the Department of Engineering Science.

👥Is there teaching or supervision involved in this postdoc role?

The primary focus is research, but the role includes supporting the supervision of graduate and undergraduate research projects. There is no heavy teaching load; the emphasis is on theory development, publications, and software contributions.