University of Sydney's Two-Lane Approach Fixes AI Disruption in College Assessments

Revolutionizing Higher Ed: Embracing AI Without Compromising Integrity

  • generative-ai
  • higher-education-news
  • ai-in-education
  • higher-education-australia
  • academic-integrity


Photo by bruce ma on Unsplash


The Rise of Generative AI and Its Challenge to Traditional Assessments

Generative artificial intelligence (AI), such as ChatGPT and similar large language models, has rapidly transformed higher education since late 2022. These tools can generate coherent essays, solve complex problems, and even mimic expert-level analysis, raising profound questions about the validity of traditional student assessments. In Australia, where annual university enrollment exceeds 1.5 million students, surveys indicate that nearly 80 percent of undergraduates were using AI in their studies as of 2025, with up to 40 percent admitting to employing it in prohibited contexts. This "cognitive offloading", in which students delegate thinking to AI, threatens academic integrity and the very purpose of evaluation: measuring genuine learning outcomes.

At the University of Sydney, Australia's oldest and largest university with over 70,000 students, educators confronted this disruption head-on. A comprehensive internal review revealed that approximately 90 percent of existing assessments were vulnerable to AI generation, rendering them obsolete for verifying student mastery. Traditional take-home essays, reports, and problem sets could be completed at high-distinction levels by AI alone, prompting a paradigm shift from prohibition to integration.

University of Sydney's Bold Response: The Two-Lane Approach

Rather than chasing futile AI detection or endless "AI-proofing," the University of Sydney introduced its pioneering "two-lane approach" to assessment design. Announced in November 2024 and rolled out progressively from Semester 1 2025, this framework divides evaluations into two distinct categories: Lane 1 (secure assessments) and Lane 2 (open assessments). Lane 1 ensures students demonstrate independent knowledge and skills under supervision, while Lane 2 fosters productive AI use as a tool for learning and real-world preparation.

This policy reversal makes AI use the default for non-exam, unsecured assessments unless explicitly prohibited by course coordinators, a stark contrast to the prior ban on AI in some 75 percent of such tasks. By Semester 2 2025, the model was formalized with curated menus of assessment types, empowering faculty while safeguarding degree credibility. Professor Adam Bridgeman, Pro Vice-Chancellor for Educational Innovation, emphasized, "It’s no secret that any take-home assessment can be completed to a high level by AI... We have worked closely with the regulator, educators, and students to chart a way forward."

Breaking Down Lane 1: Secure Assessments for Authentic Verification

Lane 1 assessments prioritize assessment of learning, focusing on program-level verification of core competencies without external aids. These supervised, in-person tasks align with Australia's Tertiary Education Quality and Standards Agency (TEQSA) principles for trustworthy judgments.

  • In-person exams: Written, practical, or oral formats under invigilation.
  • Interactive orals and viva voces: Real-time questioning to probe depth.
  • In-class skills tests: Hands-on demonstrations, performances, or contemporaneous writing.
  • Placement evaluations: Peer/expert observations during internships or clinicals.

The university curates 13 secure formats and requires that roughly 20 to 30 percent of a program's assessments fall in this lane to certify graduate capabilities. For instance, in engineering courses, students might defend designs viva voce, explaining modifications under faculty scrutiny, a method resilient to AI because it demands spontaneous critical thinking.


Lane 2: Embracing AI as a Collaborative Tool in Open Assessments

Lane 2, comprising the majority of unit-level tasks, treats AI as assessment for and as learning. Students must disclose usage but can leverage tools for brainstorming, drafting, analysis, or feedback. This open environment scaffolds ethical AI integration, mirroring workplace realities where professionals use AI daily.

  • Quizzes and practice: In- or out-of-class, with AI for idea generation.
  • Inquiry tasks: Data analysis, case studies, or experimental design enhanced by AI summaries.
  • Creative productions: Portfolios, presentations, or theses where students critique/refine AI outputs.
  • Discussions: Debates or evaluations requiring justification of AI-assisted arguments.

With 18 open formats, coordinators select options from the Sydney Curriculum platform, each marked with an icon. Students in business units, for example, might use AI to generate case study outlines and then analyze its biases, a process that builds discernment over rote memorization.

Menus, Not Traffic Lights: Danny Liu's Innovative Framework

Central to Lane 2 is Professor Danny Liu's "menus not traffic lights" philosophy, which rejects simplistic red-yellow-green restrictions that prove unenforceable in unsupervised settings. Traffic lights fail because advanced AI evades detection and partial bans erode validity. Menus, inspired by curated dining options, instead guide students toward suitable AI applications: appetizers for reflection (AI as a 'critical friend'), mains for content generation, and desserts for self-feedback.

Liu, who co-chairs the university's AI in Education group, argues, "We shouldn’t think of ourselves as the police... We should see ourselves as the teachers who encourage students to learn." Tools like Elicit for literature reviews or Cogniti.ai agents exemplify this, helping students engage deeply rather than outsource thinking. For details on prompting strategies, see the university's AI guide for students.

Implementation: From Policy to Practice

Rollout involved embedding academic leads in every school, discipline-specific workshops, and one-on-one instructional design support. The online platform now tags assessments with icons for clarity, and risk assessments are required wherever AI is prohibited. In early 2025, Academic Standards Committee approvals integrated the new categories into university procedures.

Faculty initially resisted the top-down shift, wary of the added workload. However, AI demonstrations, which showed outputs indistinguishable from student work, converted skeptics. Bridgeman noted, "We say we teach critical thinking, but were we assessing it? I don’t think we were." By March 2026, courses across faculties including medicine and humanities had overhauled their assessment portfolios, with positive feedback on the expanded options.

Stakeholder Perspectives: Faculty, Students, and Employers

Professors appreciate the menu's flexibility, which enables authentic tasks like verbal workplace simulations. Students gain agency and career readiness, as employers increasingly demand AI proficiency, evident in interviews that test tool usage. Deputy Vice-Chancellor Joanne Wright stated, "Generative AI has profoundly impacted workplaces... ensuring our graduates are equipped."

Critics worry about equity, since access to AI tools varies, but university-provided support via Cogniti.ai helps mitigate this. In the broader Australian context, peers like UNSW are adopting similar allowances, amid cheating scandals in which AI secured undeserved High Distinctions.

Early Outcomes and Broader Impacts

Post-implementation data is still emerging, but initial reviews show sustained integrity alongside enhanced learning. For full policy details, visit the responsible AI use page. The model is influencing institutions globally, with U.S. experts praising its structure despite their own decentralization hurdles. In Australia, it counters a plagiarism rate of roughly 19 percent that is partly AI-driven.


Challenges, Comparisons, and Lessons for Other Institutions

Challenges include training demands and a cultural shift away from faculty autonomy. Unlike U.S. faculty-led policies, Sydney's centralized approach leverages an academic structure without U.S.-style tenure. Comparable efforts at Melbourne University permit disclosed AI use, but Sydney's menus stand out for their granularity.

Lessons: Prioritize learning verification over detection; invest in faculty development; integrate AI ethically. As AI evolves, resilient designs like vivas prove timeless.

Future Outlook: AI as Ally in Higher Education

Looking ahead, Sydney plans expanded Cogniti integrations and program-level mapping. This fix not only mends AI-broken assessments but reimagines education for an AI-augmented world, ensuring graduates thrive professionally. Explore related career advice at AcademicJobs.com.

Dr. Nathan Harlow

Contributing Writer

Driving STEM education and research methodologies in academic publications.


Frequently Asked Questions

🚦What is the two-lane approach at University of Sydney?

The two-lane approach divides assessments into Lane 1 (secure, no AI, in-person) for verifying learning outcomes and Lane 2 (open, AI allowed with disclosure) for skill-building with tools.

🔍Why was this change necessary?

About 90 percent of prior assessments were AI-vulnerable, and neither bans nor AI detection worked; integration ensures both integrity and career preparation. See Chronicle coverage.

🔒What assessments are in Lane 1?

In-person exams, vivas, skills tests, and placements under supervision to confirm independent mastery.

🤖How does Lane 2 work with AI?

Students use AI for brainstorming, drafting, analysis; must disclose and apply critical thinking. Menus guide optimal uses.

📝Is disclosure of AI use mandatory?

Yes, in open tasks. Non-disclosure breaches integrity policy, with investigations possible.

👥What support exists for faculty?

Workshops, 1:1 design help, platform icons, and academic leads per school.

💼How does this prepare students for jobs?

Mirrors workplaces demanding AI skills; employers test proficiency in interviews.

👍Reactions from academics?

Initial resistance turned positive after AI demonstrations; academics now value the authentic assessment of critical thinking.

📊AI cheating stats in Australia?

80% of students use AI; 40% use it improperly. Sydney's model counters this trend.

🌍Global influence?

Praised by U.S. experts; inspires peers like VU Amsterdam. Future: more AI scaffolding.

🍽️Menus vs traffic lights?

Menus guide flexible AI uses; traffic lights unenforceable in open settings.