
UK Universities Failing to Teach Students How to Use AI Tools, Warns Kent AI Chief

Inadequate AI Literacy Leaves Students at Risk in Rapidly Evolving Tech Landscape

The Growing Reliance on AI Among UK Students

Generative artificial intelligence (AI), the family of tools such as ChatGPT and Microsoft Copilot that create human-like text, images, and code from simple prompts, has become a staple in the daily routines of UK university students. Recent data reveals that 95 percent of undergraduates use AI in some capacity, with 94 percent applying it directly to assessed work such as essays, reports, and problem sets. This near-universal adoption marks a dramatic shift from just two years earlier, when usage hovered around 66 percent, underscoring how quickly these technologies have permeated higher education.

Students turn to AI for a variety of tasks: 61 percent to explain complex concepts, 58 percent to summarize articles, and 49 percent to generate research ideas. While many report benefits like time savings and improved understanding—49 percent say it enhances their overall experience—the divide is stark. Sixteen percent feel it detracts from learning, citing concerns over skill erosion and fairness. This polarization highlights a critical gap: widespread use without commensurate support.

Phil Anthony's Wake-Up Call from University of Kent

At the forefront of this debate is Phil Anthony, Head of AI at the University of Kent. In a recent commentary, Anthony pointedly observes that UK universities often dictate which AI tools students may use—such as institution-provided ChatGPT Edu—but rarely teach how to wield them effectively and ethically. "We no longer need to ask whether AI is part of students' study habits. It clearly is," Anthony states. "The more important question is whether we are helping students use it in ways that support learning, rather than quietly outsourcing the thinking we want them to develop."

The University of Kent has taken proactive steps, rolling out free access to ChatGPT Edu for all staff and students in early 2026. This enterprise-grade version, powered by advanced models, includes data privacy safeguards tailored for education. Kent's guidelines emphasize responsible use, prompting students to declare AI assistance, check outputs for accuracy and bias, and integrate it as a learning aid rather than a shortcut. Yet, Anthony argues this is insufficient without hands-on teaching.

Phil Anthony, Head of AI at University of Kent, advocating for better student AI literacy

HEPI Survey Exposes the Literacy Gap

The Higher Education Policy Institute (HEPI) and Kortext's 2026 Student Generative AI Survey, based on 1,054 UK undergraduates, paints a concerning picture. While 68 percent view AI skills as essential for future careers, only 48 percent believe their lecturers are equipping them adequately. Arts and Humanities students fare worst, with just 26-30 percent feeling supported, compared to 53 percent in STEM fields.

Only 37 percent of students say their university encourages AI use, and 38 percent have access to institution-provided tools, an improvement from 23 percent in 2025 but still lagging. Russell Group institutions lead slightly at 39 percent encouragement. Barriers persist: 42 percent fear cheating accusations, 35 percent worry about inaccurate outputs, and 32 percent cite biases. The survey recommends embedding AI literacy across curricula, from induction sessions to subject-specific modules, to bridge this divide. For full details, explore the HEPI report.

Current AI Policies: Rules Without Instruction

Most UK universities now have generative AI policies, often module-specific, outlining permitted uses and declaration requirements. The Quality Assurance Agency (QAA) advises principles like authentic assessment redesign to mitigate integrity risks. Jisc provides ethical frameworks, stressing transparency and bias mitigation.

However, these focus on restrictions rather than empowerment. Students receive lists of approved tools—e.g., Kent's Copilot and ChatGPT Edu—but little on crafting effective prompts or evaluating hallucinations (AI-generated falsehoods). This leaves many navigating a 'hidden curriculum,' learning informally via peers or trial-and-error, exacerbating inequities. Lower socioeconomic groups and those without prior school AI exposure (33 percent) are particularly disadvantaged.
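That missing instruction is eminently teachable. As one concrete illustration, here is a hedged sketch of what a hallucination-checking exercise might look like, assuming Python 3 and the public Crossref REST API; the DOI and variable names are placeholders, not material from any university's curriculum. Students could use it to test whether a reference an AI tool "cites" actually exists:

```python
# Hypothetical seminar exercise: does an AI-suggested citation actually exist?
# Assumes Python 3 (standard library only) and the public Crossref REST API.
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # Crossref answers 404 for DOIs it has no record of.
        return False

# Placeholder DOI, as might appear in an AI-generated bibliography.
suspect_doi = "10.1234/placeholder.2026.001"
if doi_exists(suspect_doi):
    print("Found in Crossref - now check the details match the claim.")
else:
    # Absence is a red flag, not proof of fabrication: some genuine DOIs
    # live in other registries, which is itself a lesson in verification.
    print("No Crossref record - possible hallucination; verify manually.")
```

The point of such an exercise is less the code than the habit: treat every AI-supplied citation as unverified until it has been checked against an authoritative source.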

Risks of Poor AI Guidance

Inadequate instruction fosters 'automation bias,' where users over-rely on AI outputs. A NEJM AI study cited by Anthony showed even trained physicians deferring to erroneous suggestions from a large language model (LLM). For students, consulting AI early in a task can precondition their thinking, foregrounding certain perspectives while omitting others and hindering independent analysis.

Academic integrity suffers too: 12 percent now paste AI text directly into submissions, up from 3 percent in 2024. Anxiety over AI detectors, reported by 75 percent of students in one wellbeing survey, breeds mistrust. Long-term, graduates risk employability shortfalls: employers want critical AI users, not rote operators. Environmentally, unchecked use amplifies AI's carbon footprint, a concern for 23 percent of students.

Spotlight on Pioneers: Kent and Beyond

Kent exemplifies progress with AI prompting banks, ethics modules, and seminar activities. Students experiment with prompts across tools, judging outputs collaboratively. "Changing the prompt changes the answer—and you still judge if it's good," Anthony notes.
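For illustration only, a minimal sketch of that kind of prompt-comparison exercise might look like the following, assuming access to the OpenAI Python SDK and an API key; the question, prompt variants, and model name are hypothetical stand-ins, not Kent's actual materials:

```python
# Illustrative prompt-variation exercise; not Kent's actual teaching materials.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "Why did the 1848 revolutions fail?"  # hypothetical seminar question
prompts = [
    question,                                               # bare question
    f"{question} Answer in exactly three bullet points.",   # constrained format
    f"{question} State the evidence behind each claim.",    # pushes justification
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{reply.choices[0].message.content}\n{'-' * 40}")
```

The design point is the loop: the question stays constant while only the framing varies, so students see directly that the prompt shapes the answer, and that judging which output is actually good remains their job.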

Other leaders include Cambridge's Generative AI Literacy Course for staff and students, and Sussex's library-led literacy initiatives. Russell Group peers like Nottingham offer balanced guidance: use AI for brainstorming but verify rigorously. Yet patchy implementation persists; non-Russell Group institutions lag in tool provision.

Students at University of Kent participating in AI literacy seminar, comparing tool outputs

Equity and Inclusion Challenges

AI adoption widens divides. Men report 12 percent more prior experience than women; STEM students outpace humanities by 30 points. Working students use AI to hone skills (39 percent vs. 19 percent non-workers), but access barriers hit part-timers hardest.

QAA's toolkit urges inclusive policies: free tool provision, and induction sessions that address the digital divide. Jisc's maturity model scores institutions on equitable access. Without this, underrepresented groups risk falling behind in an AI-driven job market. For guidance, see QAA's resources.

Solutions: Embedding AI Literacy in the Curriculum

Experts advocate a curriculum overhaul. HEPI calls for AI induction sessions covering ethics, prompting, and limitations. Seminars could include:

  • Pre-AI brainstorming: Document initial ideas, then compare with AI.
  • Prompt engineering: Iterate queries for precision.
  • Cross-tool evaluation: Contrast ChatGPT vs. Copilot outputs.
  • Ethical debates: Bias detection, environmental impact.

Staff training is pivotal: the 48 percent of students who feel supported partly reflects lecturers' lack of time to upskill. Jisc offers modules on responsible AI. Assessments must evolve too: oral exams and process portfolios reduce over-reliance. Kent links restrictions to outcomes: "No AI here builds independent argument skills."

Staff Development and Institutional Buy-In

Lecturers need support: only around half feel confident using AI. Universities must allocate time for Jisc's ethical AI training and discipline-specific upskilling. Leaders like Anthony push an 'AI co-pilot' mindset: tools should augment, not replace, human insight.

Funding via levies or grants could scale tool access. Collaborative projects, per QAA, foster the sharing of best practices.

Future Outlook for UK Higher Education

By 2030, AI literacy may rival digital literacy as a core graduate attribute. Proactive universities stand to thrive, with enhanced employability and innovative pedagogy; laggards face integrity scandals and graduate skill gaps.

The Government's AI strategy eyes education, so national standards are likely. That 15 percent of students already use AI for wellbeing signals broader needs, making mental health integration vital. With thoughtful guidance, AI can transform UK universities from reactive rule-makers into literacy leaders. Explore Kent's approach at their student portal.

AI Use in Assessed Work: HEPI 2026 Stats
Task                      % Using AI    % Acceptable
Explain concepts          61            58
Summarize articles        58            50
Research ideas            49            43
Direct text inclusion     12             6

Actionable Insights for Stakeholders

  • Students: declare AI use, verify outputs, and seek module-specific guidance.
  • Lecturers: integrate AI seminars and explain the reasoning behind restrictions.
  • Leaders: prioritize staff training and equitable tool provision.

The path forward? Turn AI from a policy footnote into a curriculum cornerstone, ensuring UK graduates lead the AI era responsibly.

Prof. Clara Voss, Contributing Writer

Illuminating humanities and social sciences in research and higher education.

Frequently Asked Questions

📊What percentage of UK students use AI in assessed work?

According to the HEPI 2026 survey, 94% of undergraduates use generative AI for tasks like explaining concepts or summarizing, up significantly from prior years.

👨‍💼Who is Phil Anthony and what is his stance on AI guidance?

Phil Anthony, Head of AI at University of Kent, argues universities specify approved AI tools but fail to teach critical use, risking a 'hidden curriculum' of unequal skills.

🛠️What tools does University of Kent provide students?

Kent offers free ChatGPT Edu and Microsoft Copilot, with guidelines on prompting, ethics, and declaration to ensure responsible integration. See Kent's portal.

⚠️What are the main risks of inadequate AI guidance?

Risks include automation bias, biased outputs shaping thinking, academic integrity breaches (12% direct pasting), and employability gaps as employers demand critical AI skills.

📚How does AI use vary by subject area?

STEM students report highest prior experience (59%) and support (53%); Arts/Humanities lowest (29-33% experience, 26% support), per HEPI data.

💡What solutions does HEPI recommend?

Embed AI literacy in curricula via inductions and subject modules; provide tools equitably; train staff; and issue clear assessment guidance. Full report here.

🏛️What role do QAA and Jisc play?

QAA offers AI toolkits for authentic assessments and ethics; Jisc provides maturity models, ethical training for responsible adoption in UK HE.

👩‍🏫How can lecturers integrate AI literacy?

Use seminars for prompt iteration, output comparison, and pre-AI brainstorming to avoid preconditioning students' thinking. Link restrictions to learning outcomes for transparency.

😌Does AI affect student wellbeing?

Mixed: 21% feel less lonely, 20% feel more lonely; 15% use it for companionship. HEPI urges mental health research on AI reliance.

🔮What’s the future for AI in UK universities?

Expect national standards, curriculum mandates for literacy. Proactive institutions will boost employability; laggards risk integrity issues.

⚖️How equitable is AI access in UK HE?

38% have institutional tools; gaps disproportionately affect lower socio-economic groups, women, and humanities students. Recommendations focus on free provision and training.