The Rise of Generative AI in Medical Imaging: Opportunities and Challenges
Generative Artificial Intelligence (GenAI), powered by advanced models like diffusion models and large language models (LLMs), is transforming medical imaging. These technologies can synthesize realistic images from text prompts or incomplete data, aiding in tasks such as anomaly detection, image reconstruction, and virtual patient simulations. In radiology, for instance, GenAI enhances MRI scans by filling in gaps from motion artifacts or generates synthetic CT images to reduce radiation exposure. However, this rapid evolution raises critical questions about reliability, bias, and safety in clinical use.
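The "filling in gaps" idea can be illustrated without a full diffusion model. The sketch below uses classical iterative neighbor averaging on a 1-D signal as a stand-in for learned inpainting; the data, mask, and function name are illustrative, not from any clinical system:

```python
def fill_gaps(signal, mask, iters=200):
    """Fill masked samples of a 1-D signal by iterative neighbor
    averaging -- a classical stand-in for learned inpainting."""
    s = list(signal)
    for _ in range(iters):
        for i, missing in enumerate(mask):
            if missing:
                left = s[i - 1] if i > 0 else s[i + 1]
                right = s[i + 1] if i < len(s) - 1 else s[i - 1]
                s[i] = 0.5 * (left + right)  # relax toward neighbors
    return s

# A smooth "scan line" with two samples lost to a motion artefact.
line = [0.0, 1.0, 0.0, 0.0, 4.0, 5.0]
mask = [False, False, True, True, False, False]
restored = fill_gaps(line, mask)
# converges to the linear interpolation of the gap: 2.0 and 3.0
```

A GenAI model replaces the averaging rule with a learned prior over plausible anatomy, which is exactly why hallucination becomes a risk: the model can fill gaps with convincing but fabricated structure.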
Singapore, a hub for biomedical innovation, is at the forefront through institutions like Duke-NUS Medical School. Recent advancements highlight GenAI's potential to democratize high-quality imaging in resource-limited settings, but underscore the need for robust oversight to prevent harms like hallucinated pathologies or amplified dataset biases.
Duke-NUS npj Digital Medicine Paper: A Landmark Perspective
Published on March 19, 2026, in npj Digital Medicine, the paper "Innovating global regulatory frameworks for generative AI in medical devices" by Jasmine Chiat Ling Ong and colleagues from Duke-NUS' Centre of Regulatory Excellence (CoRE) and international partners calls for urgent regulatory evolution. Led by Duke-NUS researchers, it dissects risks unique to GenAI—such as data poisoning vulnerabilities and non-deterministic outputs—and critiques static frameworks unfit for dynamic AI.
The authors, including experts from Stanford, University of Birmingham, and Duke-NUS AI + Medical Sciences Initiative, emphasize multidisciplinary collaboration. Duke-NUS' role exemplifies Singapore's higher education leadership in bridging AI research and policy.
Key Risks Highlighted in GenAI Medical Devices
GenAI in medical imaging risks perpetuating biases if training data underrepresents diverse populations, leading to inequitable diagnostics. For example, models trained on Western datasets may falter on Asian ethnicities common in Singapore. Other concerns include adversarial attacks altering images imperceptibly and ethical issues like synthetic data attribution.
- Hallucinations: Fabricating non-existent tumors in scans.
- Vulnerabilities: Prompt injection exploiting LLMs.
- Equity gaps: Poor performance in low-resource settings.
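The equity concern can be made concrete with a subgroup audit: compare a model's sensitivity (true-positive rate) across demographic groups. A minimal sketch, with all labels and data purely illustrative:

```python
from collections import defaultdict

def subgroup_sensitivity(y_true, y_pred, groups):
    """Per-group sensitivity for a binary classifier.

    y_true, y_pred: 0/1 labels; groups: group label per sample.
    Returns {group: sensitivity}; groups with no positives are skipped.
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            if p == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Illustrative toy data: the model misses more positives in group "B".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
rates = subgroup_sensitivity(y_true, y_pred, groups)
# A: 3/3 = 1.0, B: 1/3 -- a disparity a regulator would flag
```

Stratified reporting of this kind is what checklists such as CARE-AI and TRIPOD-LLM push toward: a single aggregate accuracy number can hide exactly the gap shown here.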
The paper advocates algorithmovigilance, akin to pharmacovigilance, for post-market monitoring.
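In practice, algorithmovigilance implies continuous post-market checks on deployed models. One minimal pattern is a rolling-window accuracy monitor that fires an alert when performance drifts below a floor; the class name, window size, and threshold below are illustrative assumptions, not from the paper:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window monitor: alert when accuracy drifts below a floor."""

    def __init__(self, window=100, floor=0.90):
        self.window = deque(maxlen=window)  # most recent adjudicated outcomes
        self.floor = floor

    def record(self, correct: bool) -> bool:
        """Log one adjudicated prediction; return True if an alert fires."""
        self.window.append(correct)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        acc = sum(self.window) / len(self.window)
        return acc < self.floor

mon = PerformanceMonitor(window=10, floor=0.8)
alerts = [mon.record(ok) for ok in [True] * 9 + [False] * 4]
# first alert fires once the 10-sample window holds 3 errors (7/10 < 0.8)
```

A real pharmacovigilance-style system would add demographic stratification, statistical drift tests, and a reporting channel to the regulator, but the core loop is this simple.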
Current Global Regulatory Landscape
The U.S. Food and Drug Administration (FDA) has cleared over 1,000 AI/ML-enabled devices, many for imaging, via the 510(k) pathway, but it lacks GenAI-specific rules. Its 2025 Action Plan emphasizes lifecycle management, yet the framework struggles to keep pace with adaptive models (see the FDA's AI/ML SaMD guidance).
In the EU, the AI Act (effective 2024) classifies medical AI as high-risk, mandating transparency and human oversight, intersecting with Medical Device Regulation (MDR).
Singapore's HSA: Pioneering AI Medical Device Oversight
Singapore's Health Sciences Authority (HSA) achieved WHO Maturity Level 4 in 2026, the highest globally, with AI-enabled risk classification tools and updated Software Medical Devices guidelines (GL-04 r4, Dec 2025). HSA's framework covers lifecycle approaches for AI-SaMD, including public institution notifications (see HSA's Software Medical Devices guidelines).
Duke-NUS CoRE contributes via initiatives like regulatory sandboxes, aligning with IMDA's Privacy Enhancing Tech sandboxes for safe GenAI testing.
Case Studies: GenAI in Action and Regulatory Hurdles
In diabetic retinopathy screening, GenAI with federated learning (a Duke-NUS example) preserves patient privacy across sites. Yet analyses of FDA-cleared imaging AIs reveal vulnerabilities in their 510(k) predicate networks. Singapore trials are exploring GenAI for chest X-rays, but equity safeguards are still needed.
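Federated learning keeps patient images on-site and shares only model parameters. One aggregation round of the standard FedAvg scheme can be sketched in a few lines; the hospital counts and weight vectors are illustrative:

```python
def fed_avg(site_weights, site_counts):
    """One FedAvg aggregation step: average each parameter,
    weighted by each site's local sample count."""
    total = sum(site_counts)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_counts)) / total
        for i in range(n_params)
    ]

# Three hospitals train locally; only weight vectors leave each site.
local = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
counts = [100, 300, 600]
global_w = fed_avg(local, counts)
# weighted mean of each parameter: approximately [0.5, 0.7]
```

The regulatory appeal is that raw imaging data never crosses institutional boundaries, easing PDPA compliance, though gradient-leakage attacks mean the shared updates themselves still need protection.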
| Region | Key Framework | GenAI Focus |
|---|---|---|
| USA (FDA) | AI/ML Action Plan | Lifecycle management; no GenAI-specific rules |
| EU | AI Act + MDR | High-risk, transparency |
| Singapore (HSA) | Software MD Guidelines | AI risk tool, sandboxes |
Duke-NUS Recommendations: Adaptive and Collaborative Paths
The paper proposes:
- Global regulatory science centers (e.g., expand Duke-NUS CoRE model).
- Ethical tools: CARE-AI checklists, TRIPOD-LLM reporting.
- Sandboxes for interoperability testing.
- IMDRF harmonization involving HSA and the FDA.
Read the full npj Digital Medicine paper for detailed strategies.
Singapore Higher Education's Pivotal Role
Duke-NUS, alongside NUS and NTU, drives AI-health convergence. CoRE trains regulators, fostering Singapore's ecosystem. Universities collaborate on federated learning, addressing data silos while complying with PDPA.
Stakeholder Perspectives: From Clinicians to Policymakers
Clinicians praise GenAI for workflow efficiency but demand explainability. Regulators like HSA prioritize equity; ethicists urge diverse datasets. Duke-NUS' Nan Liu notes, "Global collaboration ensures AI benefits all populations."
Future Outlook: Harmonized Frameworks by 2030?
With IMDRF and WHO efforts, harmonized GenAI regulations could emerge. Singapore is positioned as a leader via HSA's Maturity Level 4 status and Duke-NUS innovations, potentially exporting frameworks to ASEAN.
In medical imaging, expect adaptive licensing, real-world evidence mandates, and GenAI-specific predicates.
Actionable Insights for Singapore Academics and Researchers
- Leverage HSA sandboxes for prototypes.
- Adopt CARE-AI in grants/publications.
- Collaborate via Duke-NUS initiatives.
- Focus on equity in datasets for local relevance.
This positions Singapore universities as global AI-health pioneers.