The Surge of AI in Higher Education and the Cheating Challenge
In recent years, generative artificial intelligence (GenAI) tools like ChatGPT, Grok, and others have revolutionized various sectors, including higher education. However, this innovation has brought significant challenges, particularly in academic integrity. Across US colleges and universities, instructors are grappling with a marked increase in students submitting AI-generated work for assignments, essays, and even exams. What was once a suspicion has become a crisis, with faculty reporting perfect homework submissions followed by students unable to explain basic concepts during discussions.
This phenomenon, often dubbed the 'AI crisis' in higher education, stems from the ease with which students can outsource cognitive tasks to AI. A national survey by the American Association of Colleges & Universities (AAC&U) in late 2025 revealed that 78% of faculty observed increased cheating since GenAI became widely available, with 73% personally handling AI-related academic integrity cases.
Reviving Oral Exams: A Time-Tested Solution
Oral exams, also known as viva voce or oral assessments, involve students verbally demonstrating their knowledge through direct interaction with instructors. Dating back to ancient Socratic dialogues, this method requires real-time explanation, reasoning, and response to follow-up questions, making it inherently resistant to AI assistance. In the US higher education context, where written exams and papers dominate, oral exams are experiencing a renaissance as instructors seek authentic evaluation tools.
The process typically unfolds in steps: students first submit written work, followed by a scheduled defense where they explain their reasoning, derive solutions, or connect concepts. Sessions last 15-30 minutes, often one-on-one or in small groups, allowing probing questions like 'How did you arrive at this conclusion?' or 'What if we change this variable?' This format not only detects AI misuse—students falter when unable to verbalize AI-generated content—but also fosters deeper learning.
Case Studies: US Universities Leading the Charge
Leading institutions are pioneering oral exam implementations tailored to their disciplines. At Cornell University, biomedical engineering professor Chris Schaffer requires 20-minute 'oral defenses' for problem sets in his class of 70 students. He and teaching assistants grade solely on these sessions, ignoring written submissions to emphasize understanding. Schaffer notes, 'You won’t be able to AI your way through an oral exam.'
The University of Pennsylvania (UPenn) exemplifies a broad shift. Emily Hammer in Middle Eastern Languages and Cultures pairs 15-20 minute oral defenses with papers, explicitly banning AI but relying on conversations to reveal authenticity. Other UPenn faculty like Phil Gressman (Mathematics) use 30-minute Zoom sessions where students select topics, and Karen Tani (History and Law) employs low-stakes group orals graded simply as check/plus/minus. UPenn's Center for Teaching and Learning promotes these via workshops, addressing pandemic-era online cheating precedents.
New York University (NYU) innovates with technology: Panos Ipeirotis at Stern School of Business deploys an AI-powered voice agent (built with ElevenLabs) for remote oral exams in AI product management courses. Students defend group projects against probing questions, with the AI providing feedback and detecting free-riders. Ipeirotis plans expansion, stating, 'I don’t trust written assignments anymore.' NYU also mandates office hours, presentations, and cold-calling for eye-to-eye verification.
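The probing-question loop described above can be sketched in miniature. This is an illustrative toy, not NYU's actual system: the real deployment uses a commercial voice platform (ElevenLabs) and an LLM, whereas this sketch uses simple keyword coverage to decide when a follow-up question is needed; all class and field names are assumptions.

```python
# Minimal sketch of an automated oral-exam loop: ask a question, check the
# answer against expected concepts, and generate a probing follow-up when
# coverage is incomplete. The scoring rule (keyword matching) is a stand-in
# for the LLM-based evaluation a production system would use.
from dataclasses import dataclass, field


@dataclass
class OralExamSession:
    """Tracks questions asked and flags answers that miss key concepts."""
    questions: list[tuple[str, set[str]]]  # (prompt, expected concepts)
    transcript: list[dict] = field(default_factory=list)

    def ask(self, index: int, answer: str) -> dict:
        prompt, expected = self.questions[index]
        covered = expected & set(answer.lower().split())
        missing = expected - covered
        result = {
            "prompt": prompt,
            "coverage": len(covered) / len(expected),
            # An incomplete answer triggers a probing follow-up, mirroring
            # the "How did you arrive at this conclusion?" pattern.
            "follow_up": None if not missing
            else f"Can you say more about: {', '.join(sorted(missing))}?",
        }
        self.transcript.append(result)
        return result


session = OralExamSession(questions=[
    ("Explain gradient descent.", {"gradient", "loss", "step"}),
])
r = session.ask(0, "You follow the gradient of the loss downhill.")
print(round(r["coverage"], 2))  # 2 of 3 expected concepts mentioned
print(r["follow_up"])           # asks about the missing concept
```

The design choice worth noting is that the session keeps a transcript: free-rider detection in a group setting amounts to comparing per-student coverage across the same question set.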
At the University of California, San Diego (UCSD), engineering professor Huihui Qi's three-year study on scaling oral exams, begun during the pandemic, now informs workshops nationwide. These examples span STEM, humanities, and business, demonstrating the format's versatility.
Benefits of Oral Exams: Beyond AI Detection
Oral exams offer multifaceted advantages in the AI era. Primarily, they ensure authentic assessment by requiring spontaneous reasoning, which AI cannot replicate in real-time without detection. Faculty like Hammer emphasize skill preservation: 'We’re doing this because students are actually losing skills, losing cognitive capacity and creativity.'
- Develops communication skills: Students practice articulating complex ideas, vital for careers.
- Personalized feedback: One-on-one interactions allow tailored guidance and breakthrough moments for shy learners.
- Promotes accountability: As Cornell student Olivia Piserchia shared, 'It’s a lot harder to look people in the eyes and say out loud, ‘I don’t know this.’'
- Scalable with tech: AI proctors or TAs handle volume.
- Reduces grading bias: Focus on process over product.
Studies and faculty reports confirm improved critical thinking and retention, countering the AI-induced overreliance noted by 95% of AAC&U survey respondents.
Challenges and Student Perspectives
Despite benefits, oral exams pose hurdles. Logistics for large classes demand scheduling and TA support, while anxious students face stress. NYU's Andrea Liu described AI-chatbot orals as 'awkward' due to pauses and blank screens. Cultural factors, like introversion in Gen Z, amplify discomfort, though preparation mitigates this—faculty start with easy questions.
Student reactions vary: many appreciate accountability and real-world prep, but some view it as punitive. Surveys show 50-60% see AI use as cheating, yet adoption persists, underscoring the need for balanced policies.
Supporting Statistics and Surveys
Empirical data underscores the urgency. The AAC&U's 2025 survey of 1,057 faculty found that 90% fear diminished critical thinking from AI, with 63% deeming 2025 graduates unprepared for GenAI in the workplace. On the student side, 2026 polls report that 88-95% of students use AI for assessments, up from 53% in 2024, with 12% directly copying GenAI text.
| Survey | Key Finding | Year |
|---|---|---|
| AAC&U Faculty | 78% increased cheating | 2025 |
| Student Polls | 92% use AI academically | 2026 |
| BestColleges | 51% view AI as cheating | 2023-26 |
Faculty Voices and Institutional Policies
Instructors praise orals for restoring trust. UPenn's Hammer shifted from grading despair to rewarding discussions, while Cornell's Carolyn Aslan highlights breakthroughs for quiet students. Many syllabi now include AI policies with oral verification, as seen in .edu guidelines from Skidmore, Broward College, and Vanderbilt.
Complementary Strategies and Hybrid Approaches
Oral exams pair with in-person essays, process portfolios, and AI detectors. NYU's 'fight fire with fire' via AI proctors exemplifies hybrids. Workshops at Cornell, UPenn, and UCSD train faculty, ensuring equitable scaling.
Future Outlook: Reshaping Assessment in the AI Age
As AI evolves, oral exams position US colleges to prioritize human skills like reasoning and ethics. Experts predict broader adoption, with policies evolving toward AI literacy. Challenges remain, but these success stories signal a constructive path forward, safeguarding higher education's integrity. Explore faculty positions at AcademicJobs.com amid these shifts.