In a striking case of technology turning against its creators, South Africa's government departments have faced embarrassment after senior officials were suspended for incorporating fabricated AI-generated research into major policy drafts. The incidents, unfolding in late April 2026, spotlight the perils of unchecked artificial intelligence (AI) use in official documentation, raising alarms about verification processes in public administration and their ripple effects on academic standards.
The Department of Communications and Digital Technologies (DCDT) withdrew its highly anticipated Draft National Artificial Intelligence Policy on April 27, just 16 days after public release, following revelations of fictitious citations. Similarly, the Department of Home Affairs (DHA) suspended two directors on April 30 after discovering AI 'hallucinations'—plausible but invented references—in the bibliography of its Revised White Paper on Citizenship, Immigration, and Refugee Protection, approved by Cabinet earlier that month.
These events underscore a growing tension between rapid AI adoption and the need for rigorous human oversight, particularly as South Africa positions itself as an AI leader on the continent. While the policy texts themselves remained substantively sound, the tainted references eroded public trust and prompted swift disciplinary action.
Timeline of the AI Policy Scandals
The saga began with the DCDT's 86-page Draft National AI Policy, released for public comment on April 11, 2026. Investigative journalism by News24 and civic group Article One uncovered at least six bogus references, including non-existent journals like the 'South African Journal of Artificial Intelligence Ethics' and phantom articles by fabricated authors. These matched patterns of AI hallucinations, where tools like ChatGPT invent credible-sounding sources.
Minister Solly Malatsi promptly withdrew the document, admitting an 'unacceptable failure of oversight.' Two DCDT officials were placed on precautionary suspension pending investigation.
Days later, DHA's white paper came under scrutiny. Minister Leon Schreiber revealed that suspect references, appended post-drafting, were AI-generated but not cited in the core text. The Chief Director and a key drafter were suspended immediately, with a full forensic audit ordered for all DHA policies since late 2022—the advent of widespread large language models (LLMs).

Unpacking AI Hallucinations: A Technical Primer
AI hallucinations occur when generative models produce outputs that confidently assert false information. In research contexts, this manifests as invented citations: real journals paired with fake papers, or entirely fabricated publications. Unlike simple errors, hallucinations stem from the model's training data gaps and probabilistic nature, making them eerily convincing.
Step by step, the process unfolds: a user prompts an LLM for references on a topic; the model, which generates text token by token from statistical patterns rather than retrieving entries from a verified database, produces plausible-looking citations to complete the response. Without cross-verification against primary sources, these slip into documents undetected.
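The cross-verification step described above can be sketched in a few lines. This is a minimal illustration, not the departments' actual workflow: the function names are invented, and the `lookup` callable is a stand-in for a real query against a bibliographic database such as Crossref or Scopus.

```python
import re

# Matches a DOI such as 10.1000/realpaper2024 anywhere in a reference string.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

def flag_suspect_references(references, lookup):
    """Return references that carry no DOI or fail the database lookup.

    `lookup` is any function that returns True when a DOI resolves in a
    trusted bibliographic database (illustrative stand-in here).
    """
    suspects = []
    for ref in references:
        doi = DOI_PATTERN.search(ref)
        if doi is None or not lookup(doi.group(0)):
            suspects.append(ref)  # cannot be traced to a primary source
    return suspects

# Usage with a stub lookup standing in for a real database query:
known_dois = {"10.1000/realpaper2024"}
refs = [
    "Dlamini, T. (2024). AI governance. 10.1000/realpaper2024",
    "Ngwenya, P. (2025). South African Journal of AI Ethics.",  # no DOI
]
print(flag_suspect_references(refs, lambda d: d in known_dois))
```

Anything the screen flags still needs a human reviewer; the point is that a bibliography entry that cannot be traced to any primary source should never survive into a Cabinet-approved document.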
In South Africa's cases, the fakes were isolated to bibliographies, but the breach highlighted systemic vulnerabilities. Experts note that while AI excels at drafting and ideation, it lacks true comprehension, demanding human fact-checking at every stage.
Government Actions and Broader Reforms
Both departments responded decisively. DCDT launched an inquiry, while DHA engaged two law firms for disciplinary hearings and historical reviews. New protocols include mandatory AI declarations in approvals and verification checklists.
President Cyril Ramaphosa's administration, amid its Government of National Unity, directed Digital Transformation Minister Solly Malatsi to roll out AI verification across DA-led departments. This scandal, occurring during Freedom Month, amplified calls for ethical AI governance aligned with OECD principles.
Statistics reveal the scale: Global AI hallucination incidents in academia rose 300% post-ChatGPT, per a 2025 Nature study. South Africa's proactive suspensions signal intent to lead, but experts urge statutory safeguards.
Parallels in South African Higher Education
South African universities, home to vibrant AI research hubs like Wits University's Centre for AI Research, confront similar challenges. Over 70% of public institutions have drafted AI policies addressing hallucinations, per a 2026 USAf survey. The University of Pretoria logged 53 AI-related misconduct cases between 2024 and 2025, while Unisa battles a surge in plagiarism.
Institutions like Stellenbosch and UCT mandate disclosure of AI use in assignments, with tools like Turnitin's AI detector flagging 15-20% of submissions. The scandals have intensified scrutiny, prompting NRF Research Insights Vol. 3 (2026) to advocate balanced AI integration preserving academic integrity.
Cultural context matters: In resource-constrained settings, AI aids overburdened academics, but without training, misuse proliferates. A UKZN case study exemplifies governance evolution, blending ethics with innovation.
University Responses and Proactive Measures
Post-scandal, SA universities accelerated AI literacy. Wits lecturer Nomalanga Mshinini, in The Conversation, called for 'epistemic integrity'—verifiable research methods. UJ's Frontiers in AI study urges mandatory human oversight.
Guidelines typically outline:
- Permitted uses: Brainstorming, editing (with citation).
- Prohibited: Submitting unedited AI outputs as original work.
- Verification: Cross-check all AI-sourced facts against primaries.
- Sanctions: Zero-tolerance for undetected hallucinations, mirroring plagiarism.
The demand is clear: 94% of master's students want explicit institutional policies, per a 2026 survey. The NRF endorses global AI principles, prioritizing human decision-making in funding.

Expert Voices from SA Academia
Prof. Jane Duncan (Wits) laments that local experts were snubbed: 'Real AI scholars were bypassed for fabricated sources.' Dr. Mshinini emphasizes transparency: 'Demand explanations of AI tools used.'
UCT's Policy Innovation Lab advocates sector-specific frameworks. Stellenbosch's AI ethics module, now compulsory, teaches hallucination detection via case studies like these scandals.
Stakeholders agree: Incidents erode trust, but offer teachable moments. Balanced views highlight AI's equity potential for underrepresented researchers, if governed responsibly.
Implications for Research Integrity Nationwide
The scandals threaten South Africa's research reputation. The NRF reports rising misconduct, with 25% of cases in its 2025 audits linked to AI. Impacts include delayed funding, retracted papers, and eroded international collaborations.
Higher education has already seen real-world consequences: UCT retracted a thesis over hallucinated citations, and NWU piloted AI audits. More broadly, policymakers are now wary of AI-drafted reports and are demanding academic vetting.
Table of common AI pitfalls in SA research:
| Risk | Example | Mitigation |
|---|---|---|
| Fake Citations | Phantom journals | Primary source checks |
| Plagiarized Content | Undisclosed rewrites | AI detectors + disclosure |
| Biased Outputs | Cultural insensitivity | Diverse training data |
Solutions from Academia and Beyond
Universities lead with actionable insights:
- Training Programs: Compulsory modules on ethical AI at UP and UJ.
- Tech Tools: Grammarly AI flags, custom LLMs fine-tuned on verified SA data.
- Policy Harmonization: USAf template for all 26 publics.
- Partnerships: NRF-IBM AI Centre for integrity tools.
Step-by-step verification process: Prompt AI → Cross-reference databases (Google Scholar, Scopus) → Cite originals → Disclose usage.
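The four-step workflow above can be sketched as code. This is a hedged illustration under stated assumptions: the function names and the in-memory `VERIFIED_TITLES` set are placeholders for a real query against Google Scholar or Scopus, and the disclosure wording is invented, not a prescribed template.

```python
# Stand-in for a trusted bibliographic database (illustrative only).
VERIFIED_TITLES = {"ai and public policy in south africa"}

def cross_reference(citations, verify=lambda t: t.lower() in VERIFIED_TITLES):
    """Step 2: keep only citations that resolve in a trusted database."""
    return [c for c in citations if verify(c)]

def disclose(document_text, tool_name):
    """Step 4: append a mandatory AI-use declaration to the document."""
    return (document_text
            + f"\n\nAI disclosure: drafted with assistance from {tool_name}; "
              "all citations verified by a human reviewer.")

# Steps 1 and 3 (prompting the AI, citing the originals) remain human tasks.
draft_citations = [
    "AI and Public Policy in South Africa",
    "South African Journal of Artificial Intelligence Ethics",  # phantom
]
kept = cross_reference(draft_citations)   # drops the phantom journal
report = disclose("Policy draft body...", "an LLM assistant")
print(kept)
```

The design point is that verification and disclosure are separate gates: a citation that fails step 2 never reaches the document, and a document that skips step 4 never reaches approval.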
Future: Integrate AI literacy into NRF funding criteria, fostering verifiable innovation.
Global Context and SA's Path Forward
South Africa is not alone: US courts have sanctioned lawyers over fake AI-generated case citations, and Australian universities have retracted theses containing hallucinated references. Yet proactive reform positions the country ahead of many peers, with a revised AI policy expected in Q3 2026, this time with academic input.
The outlook is optimistic: universities like UCT are pioneering AI ethics hubs, training 10,000 researchers annually. As AI evolves, South Africa's scandals are catalyzing resilient governance that blends technology with human wisdom for equitable progress.
For higher ed professionals, this reinforces vigilance: Explore research positions emphasizing integrity, or craft CVs highlighting AI ethics expertise.
