SMU Researchers Develop Strategies to Detect and Counter Misinformation Damage

Pioneering AI Tools and Propagation Models at Singapore Management University

  • deepfakes
  • higher-education-news
  • ai-research
  • singapore-higher-education
  • smu




The Rising Tide of Misinformation and Singapore's Response

In today's hyper-connected world, misinformation spreads like wildfire, eroding trust in institutions, polarizing communities, and even influencing elections and public health decisions. Singapore, a digital hub in Asia, faces unique challenges with its multilingual population and high social media penetration. The Protection from Online Falsehoods and Manipulation Act (POFMA), enacted in 2019, is the government's legal bulwark against deliberate online falsehoods. Yet legal measures alone fall short against sophisticated campaigns powered by artificial intelligence (AI). This is where higher education institutions like Singapore Management University (SMU) step in, pioneering technological strategies to detect and mitigate misinformation damage.

SMU's School of Computing and Information Systems (SCIS) has emerged as a leader, with researchers developing cutting-edge algorithms that analyze propagation patterns, user behaviors, and multimodal content to identify falsehoods early. These efforts not only bolster national resilience but also position Singapore's universities at the forefront of global AI ethics and safety research.

SMU's Leadership in Misinformation Research

Singapore Management University, known for its emphasis on interdisciplinary and applied research, has invested heavily in understanding and countering digital threats. The university's focus aligns with Singapore's Smart Nation initiative, which prioritizes cybersecurity and information integrity. SMU researchers have published extensively in top venues, creating benchmarks that are cited thousands of times worldwide.

At the heart of this work is the recognition that misinformation isn't just 'fake news'—it's a spectrum including rumors, disinformation, and AI-generated content (AIGC). SMU's contributions span from social media-era user-generated content (UGC) challenges to the latest generative AI harms, providing tools that governments, platforms, and fact-checkers can leverage.

SMU School of Computing researchers analyzing misinformation data

Spotlight on Wei Gao: A Pioneer in Misinformation Detection

Associate Professor Wei Gao, from SMU's SCIS, exemplifies the university's prowess. With over 8,300 citations and an H-index of 37 as of late 2025, Gao's career traces the evolution of misinformation threats. His journey began with rumor detection during the 2011 Arab Spring and post-truth events like Brexit, evolving to address LLM hallucinations and deepfakes today.

Gao recently commented on a viral AI deepfake video falsely claiming Israeli Prime Minister Benjamin Netanyahu's death: "Misinformation campaigns usually emerge when there is a vacuum of trusted information and intense public demand for updates. In a crisis, people are looking for immediate explanations, and the first dramatic story often travels before the first verified one." This insight underscores his research's practical relevance.

His profile highlights interests in AI safety, specifically misinformation and disinformation, alongside health analytics and fairness in data science. Gao supervises PhD students and collaborates across disciplines, blending computer science with psychology for deeper insights into user vulnerabilities.

Core Strategies: Modeling Propagation and Early Detection

Gao's foundational work introduced recurrent neural networks (RNNs) for microblog rumor detection, incorporating timing and text features. He advanced this with tree-structured models—recursive neural networks and tree transformers—that capture propagation hierarchies, earning an outstanding paper nomination.

Key strategies include:

  • Generative Adversarial Networks (GANs): Modeling discussions as 'information campaigns' to filter troll noise and detect coordinated falsehoods.
  • Neural Hawkes Processes: For early rumor detection, balancing accuracy, timeliness, and stability.
  • Multi-task Learning: Jointly detecting rumors and user stances, using moral foundations to predict polarization.

These methods outperform baselines, with datasets becoming community standards.
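To give a flavor of the Hawkes-process idea behind early detection, the sketch below computes the conditional intensity of a share cascade: a burst of closely spaced re-shares excites the process, which early-warning models treat as a signal of viral potential. The parameter values and share timelines here are illustrative assumptions, not values from Gao's published models.

```python
import math

def hawkes_intensity(event_times, t, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).

    mu is the baseline share rate, and each past share t_i adds an
    exponentially decaying excitation, so rapid bursts push lambda(t) up.
    """
    excitation = sum(math.exp(-beta * (t - ti)) for ti in event_times if ti < t)
    return mu + alpha * excitation

# A rumor re-shared in a tight burst has far higher intensity than one
# shared the same number of times over a long trickle.
burst = [0.0, 0.1, 0.2, 0.3]     # four shares within 0.3 hours
trickle = [0.0, 2.0, 4.0, 6.0]   # four shares spread over 6 hours

print(hawkes_intensity(burst, 0.5) > hawkes_intensity(trickle, 6.5))  # True
```

An early-detection system would flag a cascade once its estimated intensity crosses a learned threshold, trading off accuracy against timeliness as the article describes.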

From Text to Multimodal: Tackling AI-Generated Misinformation

As LLMs like ChatGPT proliferate, Gao's recent focus is AIGC factuality. Techniques include fine-tuning with logical constraints, hierarchical prompting for evidence-based verification, and reinforcement learning for black-box LLMs. A breakthrough: using multimodal LLMs and synthetic data to surpass GPT-4V in fact-checking images and text.

Weakly supervised detection via multiple instance learning targets sentence-level claims, generalizing to rumors and propaganda. Graph neural networks predict viral spread and vulnerable users, aiding targeted interventions.
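The multiple-instance framing above can be sketched in a few lines: an article (the "bag") is labeled by its most suspicious sentence (an "instance"), so max-pooling over per-sentence scores yields an article-level decision while also localizing the offending claim — which is why training needs only article-level labels. The scores below are hypothetical classifier outputs, not from SMU's actual models.

```python
def bag_score(sentence_scores):
    """Aggregate per-sentence falsehood scores into an article-level score.

    Returns the max score (the bag-level prediction) and the index of the
    highest-scoring sentence, which localizes the suspect claim.
    """
    best = max(range(len(sentence_scores)), key=lambda i: sentence_scores[i])
    return sentence_scores[best], best

# Hypothetical per-sentence falsehood probabilities for a four-sentence article.
scores = [0.05, 0.12, 0.91, 0.08]
label, idx = bag_score(scores)
print(label, idx)  # 0.91 2 -> article flagged, sentence 3 is the likely false claim
```

In a trained model the per-sentence scores would come from a neural encoder, but the aggregation step is exactly this kind of instance-to-bag pooling.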

Deepfake Countermeasures: SMU's Global Partnerships

Deepfakes amplify misinformation risks. SMU Associate Professor He Shengfeng leads DeepShield, a pioneering AI tool developed with South Korea's National Forensic Service and DeepBrain AI, funded by AI Singapore. This scalable, multimodal detector identifies video, voice, and image manipulations across cultures, crucial for Singapore's diverse society.


The project is being commercialized as Software-as-a-Service, addressing the fraud, scams, and election interference prevalent in Asia.

Asia-Pacific Context: Lessons from Regional Challenges

SMU research emphasizes Asia-Pacific nuances, such as multilingual content and rapid spread on platforms like WeChat and Line. The paper "Detecting Fake News in Social Media: An Asia-Pacific Perspective" surveys detection techniques developed in the region, highlighting Taiwan's election disinformation and platform countermeasures.

In Singapore, vaccine rumors during COVID-19 were corrected through community counter-rumors, validating the community-driven strategies that SMU's models support.

Broader Impacts on Singapore Higher Education

SMU integrates research into curricula, like the Misinformation Management course under SMU-X, blending psychology, sociology, and IR. This equips students for roles in tech policy and cybersecurity. Collaborations with government enhance POFMA's efficacy through predictive analytics.

Benefits include:

  • Enhanced societal resilience via policy insights.
  • Training next-gen AI ethicists.
  • Economic value: Protecting SGD billions from scams.

Stakeholders—from MOF to platforms—benefit from SMU's open datasets and tools.

Challenges and Future Outlook

Challenges persist: LLM biases, evolving AIGC, and ethical detection. Gao aims for reasoning-injected models and vulnerability modeling. With Singapore's 2026 AI push, SMU's work promises proactive defenses, ensuring a trustworthy digital ecosystem.

For those in higher ed, SMU's approaches offer inspiration for research or careers combating digital threats.



Dr. Elena Ramirez

Contributing Writer

Advancing higher education excellence through expert policy reforms and equity initiatives.


Frequently Asked Questions

🔍What are SMU's key strategies for misinformation detection?

SMU employs propagation modeling, GANs for campaign detection, and LLM-based fact-checking to identify falsehoods early.

👨‍💻Who is Wei Gao and what are his contributions?

Associate Prof. Wei Gao leads SMU's misinformation research, with RNNs, tree transformers, and synthetic data methods cited over 5,000 times.

🛡️How does DeepShield combat deepfakes?

DeepShield, SMU's collaboration with South Korea's National Forensic Service and DeepBrain AI, uses multimodal AI for scalable video, voice, and image detection across cultures.

⚖️What role does POFMA play alongside SMU research?

POFMA provides legal tools; SMU's tech enables proactive, predictive countermeasures complementing enforcement.

🌏How has SMU research impacted Asia-Pacific?

Papers like "Detecting Fake News in Social Media: An Asia-Pacific Perspective" benchmark regional methods, aiding election and crisis responses.

📚What educational programs does SMU offer on misinformation?

Courses like Misinformation Management under SMU-X teach interdisciplinary approaches.

🤖Challenges in countering AI-generated misinformation?

LLM hallucinations and rapid AIGC require reasoning enhancements and vulnerability modeling.

⏱️Benefits of early detection models?

Neural Hawkes processes balance accuracy and speed, preventing viral spread.

🤝How does SMU collaborate internationally?

With South Korea on DeepShield and global benchmarks via open datasets.

🔮Future outlook for SMU's work?

Focus on trustworthy NLP, policy interventions, and resilient AI against evolving threats.

🏙️Relevance to Singapore's Smart Nation?

Enhances cybersecurity, public trust, and economic security from scams.