
Dr. Trang Vu is a Lecturer in the Department of Data Science and Artificial Intelligence within the Faculty of Information Technology at Monash University. She earned her Doctor of Philosophy in Artificial Intelligence and Machine Learning from Monash University's Department of Data Science and Artificial Intelligence on 26 October 2022. Her thesis, "Learning to Adapt Neural Models with Limited Human Supervision in Natural Language Processing," earned her the Vice-Chancellor's Commendation for Thesis Excellence. Vu's research sits at the intersection of natural language processing and machine learning. Her ongoing work develops efficient and trustworthy NLP methods to make these technologies safe and accessible, encompassing alignment and hallucination mitigation for large language models, culturally aware machine translation, and machine learning techniques including active learning, transfer learning, and semi-supervised learning.
Vu's publications appear in top-tier conferences. Key works include "PromptDSI: Prompt-Based Rehearsal-Free Continual Learning for Document Retrieval" (ECML PKDD 2025), "Active Continual Learning: On Balancing Knowledge Retention and Learnability" (AI 2024), "Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation" (COLING 2025), "Fantastic Targets for Concept Erasure in Diffusion Models and Where to Find Them" (ICLR 2025), "Discourse Graph Guided Document Translation with Large Language Models" (EACL 2026), "CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment" (EACL 2026), "Discrete Minds in a Continuous World: Do Language Models Know Time Passes?" (EMNLP 2025 Findings), "The Best of Both Worlds: Bridging Quality and Diversity in Data Selection with Bipartite Graph" (ICML 2025), "Mixture-of-Skills: Learning to Optimize Data Usage for Fine-Tuning Large Language Models" (EMNLP 2024), "Koala: An Index for Quantifying Overlaps with Pre-training Corpora" (EMNLP 2023), and "Systematic Assessment of Factual Knowledge in Large Language Models" (EMNLP 2023 Findings).

As a member of the Vision and Language group, she supervises student projects on LLM applications in medical reports, active learning, and translation. Vu co-presented a tutorial on "Continual Learning for Large Language Models" at the Australasian Joint Conference on Artificial Intelligence (AJCAI 2024), which was also accepted at EMNLP 2025, and gave an invited talk on "Multi-Domain Multilingual NMT" at Lee Lab, Ontario Tech University, in August 2024.