Securing LLM Use in Critical National Infrastructure-Adjacent Domains
Large language models (LLMs) are increasingly used in sectors adjacent to critical national infrastructure (CNI), such as law, journalism, mental health, and government. Misconceptions about privacy and data handling have already produced real-world harm, yet we have little understanding of how LLM interactions in these domains become exposed, misused, or exploited. This project will focus on three threat categories: accidental disclosure (logs, regenerated outputs, shared links), adversarial extraction (prompt reconstruction, conversation probing), and risky insider practices (copy-pasting sensitive information, poor mental models of privacy).
The student will map these risks and build a framework for understanding and mitigating them. The work will involve interviews, surveys, and co-design workshops with practitioners across these domains, supported by analysis of technical, legal, and UX safeguards. Expected outputs include a disclosure-risk framework, design guidelines, prototype interface concepts, and policy recommendations for organisations and regulators.
Required Skills
Applicants should have experience or strong interest in at least one of the following:
- human-computer interaction, privacy, or security
- AI governance, law, or policy
- qualitative research methods (interviews, surveys, workshops)
- foundational understanding of LLMs and their risks
Technical skills are welcome but not essential.
Preferred Start Date
October 2026.