Recent NUS Symposium Spotlights Alarming Rise in Online Sexual Harms
On February 23, 2026, the National University of Singapore (NUS) hosted the Online Sexual Harms in Singapore (OSHSG) Symposium, bringing together experts, policymakers, and advocates to address the escalating threat of digital sexual harms. Organized by Assistant Professor Michelle Ho from NUS's Department of Communications and New Media, the event highlighted how rapid technology advancements, particularly generative AI tools, are fueling a surge in incidents like deepfake pornography, sextortion, and non-consensual image sharing.
Speakers, including Minister of State Rahayu Mahzam, emphasized that perpetrators now require minimal expertise—just a text prompt and a photo—to create harmful content at scale using platforms like Microsoft Designer or X's Grok. This democratization of harm production has outpaced regulatory responses, making collaboration essential for victim empowerment and platform accountability.
Key Findings from NUS-Led Research on Perceptions of Severity
The Institute of Policy Studies (IPS) at NUS released 'Online Harms in Singapore: From Evidence to Action' in November 2025, providing groundbreaking data from a nationally representative survey of 600 Singaporeans, focus group discussions with 79 participants, and in-depth interviews with 20 victims and supporters. Non-consensual sexual content emerged as the most severe online harm, with a relative severity score of 578, far ahead of promotion of dangerous behaviors (335) and targeted harassment (292).
This victim-centric perception—driven by factors like individual harm (73.5% rated very/extremely important) and victim vulnerability (74.3%)—reveals a societal consensus prioritizing personal safety. Females rated non-consensual content even more severely (score 369 vs. males' 209), reflecting gendered risks amplified by sexual double standards. The study also noted that 60% of respondents encountered online harms in the past year, with males reporting higher exposure frequency.
Youths (17-35) showed particular concern for deepfakes and misuse of inauthentic materials, underscoring how AI exacerbates traditional harms.
CASMIDA Project Uncovers Campus-Specific Digital Vulnerabilities
Complementing the IPS findings, NUS's Campus Sexual Misconduct in a Digital Age (CASMIDA) project, led by Asst Prof Ho, surveyed 314 university students and interviewed 28 victim-survivors, predominantly women. Shockingly, 40% of students reported experiencing online sexual harms, including cyberflashing and sextortion.
Participants often normalized these incidents, viewing them as inevitable on social media or dating apps, and demonstrated limited awareness of digital well-being. Technologies were not seen as wholly negative but as double-edged swords that both connect and expose users to risks. The project calls for early consent education integrated into curricula, starting from primary school levels.
AWARE Singapore reported a 90% increase in cases from 2021 to 2023, aligning with CASMIDA's campus data and signaling a broader trend in higher education environments.
Victim Experiences: Normalization, Barriers, and Emotional Toll
In-depth interviews from the IPS study painted a vivid picture of victim struggles. Many delayed reporting for fear of disbelief, reputational damage, or unclear outcomes under laws such as the Protection from Harassment Act. One victim described the hyper-vigilance of constantly monitoring for leaked images, while others coped by reporting the content themselves, a workaround under Penal Code Section 377BE.
Secondary harms like cyberstalking compounded initial violations, with anonymity shielding perpetrators across platforms. Symposium panelist Natalie Chia from SG Her Empowerment (SHE) noted that clashing institutional definitions frustrate victims, whose personal experiences often do not match official categories.
CASMIDA interviews revealed young survivors downplaying harms: "If everyone experiences it, why fuss?" This normalization perpetuates cycles, especially among university students navigating digital social lives.
Technology Advancements: AI as Double-Edged Sword
Generative AI has lowered entry barriers for harms, enabling anyone to produce deepfakes from innocuous photos. Minister Rahayu highlighted this shift: sophisticated tools are no longer needed. NUS research shows harms now span platforms, escalating rapidly due to easy sharing.
The IPS typology identified 20 localized harms, with gaps in addressing AI-driven ones like synthetic non-consensual content. Youths ranked these higher, anticipating tech's role in future risks. In higher education, students' heavy platform use amplifies exposure, demanding tech-savvy safeguards.
Government and Legislative Responses: OSRA Bill Milestones
Singapore's Online Safety (Relief and Accountability) (OSRA) Bill, passed in November 2025, empowers victims to seek rapid content removal, perpetrator details, and damages. The upcoming Online Safety Commission (mid-2026) will enforce takedowns, prioritizing victim-centric relief.
In the IPS survey, 79.3% of respondents deemed perpetrator accountability legislation 'very/extremely helpful,' and 77% favored swift platform removals. Cross-border efforts, like 400 arrests in early 2025 across Asia, show momentum, but NUS experts urge criminalizing deepfake creation without requiring proof of intent.
Platform Responsibilities and Industry Partnerships
IMDA's Edward Wee and Meta's Priyanka Bhalla stressed platforms' role in safety-by-design: friction prompts, youth settings, and local collaborations. Singapore-specific scenarios, like multi-platform offenses, require tailored responses. Transparency on takedown metrics and joint training with NGOs are key recommendations.
In universities, NUS advocates embedding platform education in courses, linking to broader digital literacy initiatives.
Educational Strategies: Early Consent and Digital Literacy
NUS researchers propose starting consent education in primary schools via parents and MOE's Cyber Wellness programme, progressing to university modules. Content would be tailored by age and gender: pornography awareness for children aged 6-10, deepfake detection for youths. Campaigns that challenge normalization and draw on victim stories can foster shared responsibility; 75.6% of respondents expect users themselves to take more action.
Higher education institutions like NUS can lead with interdisciplinary programs, preparing students for safe digital careers; a safe online presence also supports professional growth.
Future Outlook: Multi-Stakeholder Approach and Challenges
Challenges persist: reporting ambiguities, offender rehabilitation needs, and cross-border enforcement. NUS envisions a unified national platform for resources, annual safety reports, and ASEAN coordination. With 77% of respondents demanding more from government and tech firms, a whole-of-society shift is underway.
For Singapore universities, this means prioritizing digital safety in policies, supporting affected students, and researching evolving threats. Prof Ho urges: "Build an education system unafraid of complex issues."
Implications for Higher Education and Actionable Insights
In Singapore's universities, where 40% of surveyed students report experiencing online sexual harms, NUS models proactive research-to-policy translation. Institutions should audit digital risks, train staff on OSRA obligations, and foster reporting cultures. Students should document incidents and seek support from SHE or AWARE early.
