
AI-Resistant Student Assessments: SMU Research Reveals Strategies to Redesign Assessments in the ChatGPT and Generative AI Era

SMU Leads Singapore's Charge in Crafting AI-Resistant Assessments for the GenAI Era



The Rise of Generative AI in Singapore's Higher Education Landscape

Generative Artificial Intelligence (GenAI), exemplified by tools like ChatGPT, has revolutionized how students approach learning and assignments since its widespread adoption around 2023. In Singapore's competitive higher education sector, universities such as the National University of Singapore (NUS), Nanyang Technological University (NTU), Singapore Management University (SMU), and Singapore University of Technology and Design (SUTD) have seen a surge in GenAI usage. A 2025 survey indicated that over 88 percent of students across these institutions have used GenAI for assignments, prompting educators to rethink traditional assessment methods. This shift is not just about preventing misuse but harnessing AI to foster deeper learning in a knowledge-driven economy like Singapore's.

The Ministry of Education (MOE) has supported this evolution through its AI in Education (AIEd) Ethics Framework, which emphasizes safe and responsible integration. Concerns over academic integrity nonetheless persist: Singapore universities report only a handful of confirmed unauthorized-AI incidents annually, yet experts warn that the difficulty of reliable detection makes proactive redesign essential. SMU's recent research stands at the forefront, offering evidence-based strategies to create AI-resistant student assessments that prioritize human cognition.

SMU's Pioneering Research on ChatGPT's Assessment Performance

Singapore Management University (SMU) has led the charge with groundbreaking studies evaluating ChatGPT's capabilities in real-world academic tasks. Dr. Michelle Cheong's research, detailed in a study published in the Journal of Computer Assisted Learning, tested ChatGPT version 3.5 on spreadsheet modeling quizzes involving financial calculations and Monte Carlo simulations for COVID-19 projections. Using revised Bloom's Taxonomy—ranging from knowledge (level 1) to creation (level 6)—the experiments employed various prompting techniques: zero-shot baseline, zero-shot chain-of-thought, one-shot, and one-shot chain-of-thought.

Key findings revealed ChatGPT excels at lower levels like knowledge, comprehension, and application but falters in analysis, synthesis, and creation. Even advanced prompts couldn't achieve level 6, highlighting GenAI's limitations in novel problem-solving. This empirical data underscores the need for assessments targeting higher-order thinking skills (HOTS), where students must demonstrate originality and critical evaluation beyond AI's reach. SMU's work provides a blueprint for redesigning assessments amid the generative AI era, influencing policies across Singapore's autonomous universities.
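The four prompting conditions used in the study can be sketched in a few lines of code. This is a minimal illustration only: the question text, the worked example, and the `build_prompt` helper are invented here and are not drawn from the study's actual materials.

```python
# Minimal sketch of the four prompting conditions described above:
# zero-shot, zero-shot chain-of-thought, one-shot, one-shot chain-of-thought.
# All prompt wording below is illustrative, not from the SMU study.

QUESTION = "Compute the net present value of the cash flows in column B."
EXAMPLE = ("Q: Sum the values in A1:A10.\n"
           "A: Use =SUM(A1:A10), which adds every cell in the range.")

def build_prompt(question: str, one_shot: bool = False,
                 chain_of_thought: bool = False) -> str:
    """Assemble a prompt under one of the four experimental conditions."""
    parts = []
    if one_shot:                      # prepend a worked example
        parts.append(EXAMPLE)
    parts.append(question)
    if chain_of_thought:              # ask the model to reason step by step
        parts.append("Let's think step by step.")
    return "\n\n".join(parts)

for shots, cot in [(False, False), (False, True), (True, False), (True, True)]:
    label = ("one-shot" if shots else "zero-shot") + (" CoT" if cot else "")
    print(f"--- {label} ---")
    print(build_prompt(QUESTION, one_shot=shots, chain_of_thought=cot))
```

The point of varying the conditions is that one-shot and chain-of-thought prompts give the model more scaffolding, which is why the finding that even these could not reach Bloom's level 6 is notable.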


Unpacking SMU's DRIVE Framework for GenAI Integrity

Complementing the performance studies, SMU's Centre for Teaching Excellence (CTE) introduced the Framework for the Use of Generative AI Tools, featuring the DRIVE approach: Detect, Review, Inform, Verify, and Escalate. This structured protocol equips instructors to handle suspected unauthorized use systematically.

  • Detect: Leverage tools like Turnitin AI Detection (flagging >20% AI content), GPTZero, or Originality.AI, alongside manual checks for repetition, inaccuracies, or uniform structure.
  • Review: Analyze reports, compare with student history, and generate AI samples for benchmarking.
  • Inform: Privately notify students, inviting explanations.
  • Verify: Conduct meetings, review drafts, or require step-by-step explanations.
  • Escalate: Report confirmed cases via SMU's Code of Academic Integrity channels.

The framework's three-pronged strategy—Adapt assessments, Incorporate AI ethically, and Detect misuse—ensures balance. Instructors receive templates for essays, coding projects, and problem sets, specifying tasks where AI is permitted, disclosed, or prohibited, such as ideation (prohibited) versus literature search (permitted).
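The DRIVE sequence above can be pictured as a simple triage function. This is an illustrative sketch only: the 20% threshold mirrors the Turnitin flag mentioned earlier, but the `Submission` fields and the triage logic are hypothetical, not part of SMU's actual framework.

```python
# Illustrative sketch of the DRIVE protocol as a triage function.
# The Submission fields and decision logic are hypothetical.
from dataclasses import dataclass
from typing import Optional

AI_FLAG_THRESHOLD = 0.20  # Turnitin-style flag: >20% AI-generated content

@dataclass
class Submission:
    student_id: str
    ai_score: float              # fraction of text flagged as AI-generated
    matches_history: bool        # consistent with the student's prior work?
    verified_ok: Optional[bool]  # None = not yet discussed with the student

def next_drive_step(sub: Submission) -> str:
    """Return the next DRIVE step for a submission."""
    if sub.ai_score <= AI_FLAG_THRESHOLD:
        return "none"                # Detect: below the flag threshold
    if sub.matches_history:
        return "none"                # Review: consistent with past work
    if sub.verified_ok is None:
        return "inform-and-verify"   # Inform the student, then Verify in a meeting
    if sub.verified_ok:
        return "none"                # Verify: student credibly explained the work
    return "escalate"                # Escalate via academic-integrity channels

print(next_drive_step(Submission("s001", 0.45, False, None)))  # inform-and-verify
```

Note how the sketch never escalates on a detector score alone: a flagged submission must also fail the Review comparison and the Verify conversation, which matches the framework's emphasis on human judgment over automated detection.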

Practical Strategies for AI-Resistant Assessments from SMU Insights

SMU research advocates redesigning assessments to emphasize process over product, focusing on HOTS. Specific strategies include:

  • In-Class AI Use: Allow supervised ChatGPT access to teach error identification. Example: students refine prompts and critique outputs in financial modeling quizzes.
  • Peer Group Projects: Collaborative tasks building on AI suggestions. Example: teams develop spreadsheet models for real-world scenarios like pandemic simulations.
  • Higher-Order Prompts: Test analysis and creation where AI struggles. Example: require novel interpretations or custom formula derivations.
  • Oral Examinations (Viva): Defend written work verbally. Example: 15-minute discussions on thesis choices and evidence, as proposed by SMU's Matthew Hammerton.

These methods not only deter cheating but enhance skills like prompting and critical evaluation, vital for graduates entering AI-pervasive industries. For instance, in coding projects, students must document design rationale beyond AI-generated code.
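For context, the Monte Carlo spreadsheet tasks mentioned in SMU's quizzes look roughly like the following toy projection. Every parameter here (growth rate, noise level, horizon) is invented for illustration and has no connection to the study's actual materials.

```python
# Toy Monte Carlo projection of the kind used in spreadsheet modeling quizzes.
# All parameters are invented for illustration.
import random

def simulate_cases(start: float, daily_growth: float, noise_sd: float,
                   days: int, trials: int, seed: int = 0) -> float:
    """Average projected case count after `days`, over `trials` random paths."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        cases = start
        for _ in range(days):
            # each day: deterministic growth plus Gaussian noise
            cases *= 1 + daily_growth + rng.gauss(0, noise_sd)
        totals.append(cases)
    return sum(totals) / len(totals)

avg = simulate_cases(start=100, daily_growth=0.02, noise_sd=0.01,
                     days=30, trials=5000)
print(f"Mean projected cases after 30 days: {avg:.0f}")
```

An assessment targeting higher-order skills would ask students not just to produce such a model but to justify the distributional assumptions, critique an AI-generated version, or derive a variant for a novel scenario.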

Explore SMU's GenAI Framework

Implementation Across Singapore Universities: NUS, NTU, and Beyond

NUS and NTU echo SMU's proactive stance. NUS's Department of History guidelines for AY2025-2026 permit GenAI with disclosure, prioritizing ethical use in research and writing. NTU's Inspire platform offers AI-enhanced assessment designs, such as process-tracked submissions emphasizing creativity.

SUTD's 2025 primer encourages GenAI in class but stresses intellectual contribution. Despite low cheating rates—NUS reports minimal cases—professors like those at SMU deem detection a 'lost cause,' advocating adaptation. Budget 2026 allocates funds for AI literacy programs at these institutions, projecting doubled AI research output by 2030.


Stakeholders, from students to faculty, benefit: a 2025 CNA report notes professors using AI for grading, freeing time for mentorship. Aspiring educators can find opportunities via higher ed jobs in Singapore.

Challenges in Balancing Innovation and Academic Integrity

Despite progress, hurdles remain. AI detectors yield false positives, eroding trust. Overreliance risks stunting critical thinking, as Hammerton warns against abandoning essays. Equity issues arise with varying AI access, though Singapore's digital infrastructure mitigates this.

  • False detections undermine confidence.
  • Prompt engineering demands prior knowledge.
  • Ethical dilemmas in crediting AI outputs.

Solutions involve clear syllabi statements, student AI training modules via SMU's Student Success Centre, and hybrid models blending AI support with human verification.

Stakeholder Perspectives: Students, Faculty, and Policymakers

Students view GenAI as a learning aid, with surveys showing acceptance if transparent. Faculty, per SMU CTE webinars, appreciate tools for personalization but prioritize HOTS. Policymakers align via National AI Strategy 2.0, funding S$37 billion in RIE2030 for quantum and AI higher ed investments.

Real-world case: SMU's tort law testbed studies learner-AI interactions, identifying best practices for accuracy evaluation. Rate professors adapting these via Rate My Professor.

Future Outlook: AI Literacy and Evolving Assessments

By 2030, Singapore envisions AI-integrated curricula, with universities like NUS launching Google-MOE labs. Projections include expanded AI degrees and ethics mandates. SMU's strategies pave the way for resilient assessments, preparing students for AI-augmented careers.

Actionable insights: Instructors, adopt DRIVE and Bloom-aligned redesigns. Students, master prompting ethically. Explore higher ed career advice for thriving in this era.


Conclusion: Embracing AI-Resistant Assessments for Tomorrow's Graduates

SMU research illuminates a path forward: redesign assessments to leverage GenAI's strengths while safeguarding learning. Singapore's universities exemplify adaptive leadership, ensuring graduates excel in critical thinking and innovation. For faculty positions shaping this future, visit higher ed jobs, university jobs, or Singapore academic opportunities. Share your experiences in the comments below.


Dr. Nathan Harlow

Contributing Writer

Driving STEM education and research methodologies in academic publications.



Frequently Asked Questions

🧠What are AI-resistant student assessments?

AI-resistant student assessments are redesigned evaluations targeting higher-order thinking skills like analysis and creation, where tools like ChatGPT underperform, as shown in SMU research.

🔍What is SMU's DRIVE framework?

DRIVE stands for Detect, Review, Inform, Verify, Escalate—a protocol for handling suspected GenAI misuse in assessments, detailed in SMU's framework.

📊How does ChatGPT perform in assessments per SMU study?

ChatGPT excels in lower Bloom's levels (knowledge to application) but struggles with synthesis and creation, per Dr. Michelle Cheong's spreadsheet modeling research.

⚙️What redesign strategies does SMU recommend?

Strategies include in-class AI use, peer projects, oral vivas, and HOTS-focused tasks. Templates for essays and coding are available via SMU CTE.

🏛️How are NUS and NTU adapting to GenAI?

NUS permits GenAI with disclosure; NTU offers AI-enhanced designs. Both report low cheating but emphasize redesign, aligning with SMU.

⚠️What challenges exist in AI detection?

False positives from tools like Turnitin and evolving AI make detection unreliable; SMU advises adaptation over policing.

🗣️How does oral examination enhance integrity?

Proposed by SMU's Matthew Hammerton, vivas ensure intellectual responsibility by requiring defense of work, deterring uncomprehended AI use.

🇸🇬What is Singapore's policy on AI in higher ed?

Budget 2026 boosts AI literacy; RIE2030 invests S$37B. Unis like SMU lead with ethical frameworks.

Can students use AI ethically?

Yes, with disclosure and for supportive tasks like literature searches. SMU modules teach responsible use; AI remains prohibited for ideation.

🔮What are the future trends for assessments?

Hybrid human-AI models, expanded AI labs at NUS/SMU, ethics mandates by 2030. Check career advice for roles.

💼How to find AI education jobs in Singapore?

Explore higher ed jobs and Singapore listings on AcademicJobs for faculty adapting assessments.