University of Glasgow Launches Free PHAWM Workbench for AI Safety Auditing by Non-Experts

Empowering Non-Experts to Audit AI Systems for Trustworthiness

  • higher-education-research
  • research-publication-news
  • ai-safety-auditing
  • phawm-workbench
  • university-of-glasgow



Empowering Non-Experts in AI Safety Auditing with PHAWM Workbench

The University of Glasgow has unveiled the PHAWM Workbench, a groundbreaking free tool designed to democratize AI safety auditing. Led by Professor Simone Stumpf from the School of Computing Science, this open-source platform enables individuals without deep AI expertise—such as end-users, patients, and cultural heritage professionals—to rigorously evaluate AI systems for potential harms like bias, inaccuracies, and unfair outcomes. Launched as the first major output of the £3.5 million Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, funded by Responsible AI UK (RAi UK), the tool addresses a critical gap in AI governance by putting auditing power into the hands of those most affected by AI decisions.

In an era where AI influences sectors from healthcare to education, traditional audits often overlook social and cultural impacts because they are conducted solely by technical experts. The PHAWM Workbench changes this by facilitating participatory audits that incorporate diverse lived experiences, ensuring AI applications are not only technically sound but also fair and trustworthy.

Background: The Urgent Need for Participatory AI Auditing

Rapid AI adoption in the UK has brought immense benefits, with 75% of organizations reporting improved workforce productivity and 57% developing new processes. However, risks loom large: 38% cite privacy concerns, 37% worry about ethical misuse, and biases in AI systems perpetuate unfair outcomes in higher education, such as discriminatory admissions or grading. UK universities, at the forefront of AI research, face unique challenges; a 2026 survey highlighted patchy genAI use in research assessments like the REF, raising integrity concerns.

The EU AI Act (2024) mandates risk assessments, yet tools for non-experts remain scarce. PHAWM fills this void, building on UK initiatives like the AI Safety Institute, which notes harms from biased chatbots and skewed views. By involving stakeholders early, it promotes responsible AI, aligning with calls for AI literacy in higher education.

Overview of the PHAWM Project

Initiated in May 2024, PHAWM unites over 30 researchers from seven UK universities—including Glasgow, Sheffield, Strathclyde, and York—and 28 partners such as NHS National Services Scotland (NSS), Wikimedia, and the National Library of Scotland. Co-designed through workshops, the project targets predictive and generative AI across four use cases: health, media content, cultural heritage, and collaborative content generation.

Professor Stumpf emphasizes: “Our workbench provides diverse perspectives on AI applications which might otherwise go unexamined,” fostering fairer systems in critical areas like policing, finance, and research jobs. Dr. Yashar Moshfeghi from Strathclyde adds that it reassures non-technical users, minimizing risks while maximizing AI's potential.


How the PHAWM Workbench Works: A Step-by-Step Guide

The tool follows a structured four-stage participatory process, accessible via phawm.org (a minimal data-model sketch follows the list):

  • Stage 1: The audit instigator describes the AI system in plain language, outlining its purpose and its inputs and outputs.
  • Stage 2: The instigator invites diverse stakeholders—end-users and affected communities—to join, ensuring broad representation.
  • Stage 3: Participants map their concerns to measurable metrics, test the AI, and score it pass/fail, drawing on lived experience to detect biases or harms.
  • Stage 4: Participants compile their insights into action plans for AI refinement or procurement decisions.
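
This workflow maps naturally onto a simple data model. The following minimal Python sketch shows the kind of audit record the four stages produce; every class and field name here is illustrative, not part of the workbench's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class MetricScore:
    """One stakeholder concern mapped to a measurable check (Stage 3)."""
    concern: str   # e.g. "predictions disadvantage older patients"
    metric: str    # e.g. "similar high-risk rates across age bands"
    passed: bool   # pass/fail verdict from participant testing

@dataclass
class ParticipatoryAudit:
    system_description: str                                   # Stage 1
    stakeholders: list[str] = field(default_factory=list)     # Stage 2
    scores: list[MetricScore] = field(default_factory=list)   # Stage 3
    action_plan: list[str] = field(default_factory=list)      # Stage 4

    def summary(self) -> str:
        """Headline result feeding refinement or procurement decisions."""
        failed = [s for s in self.scores if not s.passed]
        return f"{len(self.scores)} checks run, {len(failed)} failed"
```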

Features include user-friendly interfaces that explain AI concepts, privacy protections (e.g., against inference attacks), multi-criteria optimization for balancing trade-offs, and computational argumentation to counter groupthink. The workbench is well suited to auditing in-house or commercial AI before deployment.
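
The mention of multi-criteria optimization suggests that competing audit criteria are weighed against one another. A naive illustration of such a trade-off, with criteria, weights, and scores invented for the example:

```python
# Hypothetical trade-off scoring: stakeholders weight competing criteria,
# and candidate AI configurations are ranked by the weighted sum.
weights = {"accuracy": 0.4, "fairness": 0.4, "privacy": 0.2}

candidates = {
    "model_a": {"accuracy": 0.91, "fairness": 0.62, "privacy": 0.80},
    "model_b": {"accuracy": 0.87, "fairness": 0.85, "privacy": 0.78},
}

def weighted_score(scores: dict[str, float]) -> float:
    return sum(weights[c] * scores[c] for c in weights)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best)  # "model_b": slightly less accurate, but much fairer
```

In a real audit the weights would come from the stakeholders recruited in Stage 2, which is precisely what makes the process participatory rather than purely technical.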

Use Case 1: Auditing AI in Healthcare

In health, PHAWM targets tools like SPARRA, which predicts emergency hospital admissions. Audits assess patient impacts, dataset biases, and privacy, involving both clinicians and patients. For instance, stakeholders can identify whether predictions unfairly disadvantage certain demographics, leading to refined models. This supports NHS efforts amid rising AI use in diagnostics.
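
In practice, such a demographic check can be as simple as comparing the rate of high-risk predictions across groups. The sketch below is illustrative only and assumes nothing about SPARRA's actual data or interface:

```python
from collections import defaultdict

def rate_by_group(predictions, groups):
    """Share of positive (high-risk) predictions per demographic group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

# Toy data: 1 = flagged as a likely emergency admission
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["under65", "under65", "over65", "over65",
          "under65", "over65", "over65", "under65"]

rates = rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, "disparity:", gap)  # a large gap would fail the audit
```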

Explore postdoctoral research roles advancing such health AI safety.

Use Case 2: Media Content Moderation

For media, partners like Istella use PHAWM to audit predictive AI for content flagging. Non-experts evaluate accuracy, the fairness of hate-speech detection, and cultural biases, ensuring reliable moderation.
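
One fairness check auditors commonly apply to content flaggers is comparing false positive rates across groups of authors. A toy illustration follows; the data and group labels are invented:

```python
def false_positive_rate(preds, labels):
    """FPR = wrongly flagged items / all genuinely benign items."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Toy audit: does the flagger over-flag posts written in one dialect?
posts = [
    # (model_flag, true_label, dialect)
    (1, 0, "dialect_a"), (0, 0, "dialect_a"), (1, 1, "dialect_a"),
    (1, 0, "dialect_b"), (1, 0, "dialect_b"), (0, 0, "dialect_b"),
]
for dialect in ("dialect_a", "dialect_b"):
    subset = [(p, y) for p, y, d in posts if d == dialect]
    preds, labels = zip(*subset)
    print(dialect, false_positive_rate(preds, labels))
# dialect_a: 0.5, dialect_b: ~0.67 -> flagging skews against dialect_b
```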

Use Case 3: Cultural Heritage Preservation

In collaboration with the National Library of Scotland and museums, PHAWM audits generative AI used for metadata and exhibit descriptions. Tools like the extended ArticlePlaceholder validate text groundedness, provenance, and stereotype avoidance, preventing historical distortions. Kathryn Simpson from Sheffield notes that the project boosts applied AI literacy.
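
A crude way a non-expert auditor might probe groundedness is checking whether the content words of a generated description actually appear in the source catalogue record. This token-overlap heuristic is purely illustrative and is not the workbench's method:

```python
def grounded_fraction(generated: str, source: str) -> float:
    """Fraction of content words in the generated text found in the source."""
    stop = {"the", "a", "an", "of", "in", "and", "is", "was", "this"}
    gen_words = {w.lower().strip(".,") for w in generated.split()} - stop
    src_words = {w.lower().strip(".,") for w in source.split()}
    return len(gen_words & src_words) / len(gen_words) if gen_words else 1.0

source = "Photograph of Glasgow shipyard workers, 1910, gelatin silver print."
generated = "A 1910 photograph of shipyard workers in Glasgow, likely taken in Govan."
print(grounded_fraction(generated, source))  # 0.625: "Govan" etc. are unsupported
```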

Use Case 4: Collaborative Content Generation

With Wikimedia, PHAWM audits AI for collaborative wiki editing, using chain-of-thought explanations and bias metrics to ensure factual, unbiased content expansions.

The Research Publication Behind PHAWM

Core to this launch is the paper “Engineering Safe and Trustworthy AI: The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) Project” by Stumpf et al., accepted for the 17th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2025). It outlines workbenches for non-experts, emphasizing lifecycle auditing in line with the EU AI Act. This publication underscores Glasgow's leadership in responsible AI research.

Researchers can find opportunities in research assistant jobs focusing on AI ethics.

Read the PHAWM project paper

Implications for UK Higher Education and Research

UK universities are pivotal in AI, but 2026 reports warn of uneven adoption and of risks like the undermining of cognitive skills in education, a top concern globally. PHAWM equips academics to audit the tools used in teaching, REF submissions, and professorial work, fostering ethical innovation. It aligns with RAi UK's wider ecosystem, enhancing public trust amid an estimated 700 million AI users worldwide.


As AI transforms research, tools like PHAWM help prevent harms before they become scandals and let researchers demonstrate a genuine commitment to ethical practice.

Challenges, Solutions, and Future Outlook

Challenges include stakeholder recruitment and scaling audits; PHAWM plans to address these through training and certification frameworks. Future expansions may integrate advanced metrics for emerging risks like genAI hallucinations.

  • Benefits: Informed procurement, reduced biases, compliance.
  • Risks mitigated: Data privacy breaches, unfair decisions.
  • Comparisons: Unlike expert-only tools, PHAWM is inclusive.

By 2030, expect widespread adoption across UK higher education, driving safer AI research. Visit the University of Glasgow's announcement and the RAi UK project page.


Getting Started with PHAWM: Actionable Insights for Researchers

Download the workbench from phawm.org and pilot audits in your own lab. For career growth in AI safety, explore higher education jobs and career advice on AcademicJobs.

Dr. Elena Ramirez
Contributing Writer
Advancing higher education excellence through expert policy reforms and equity initiatives.


Frequently Asked Questions

🔍What is the PHAWM Workbench?

The PHAWM Workbench is a free, open-source online tool from the University of Glasgow for participatory AI safety auditing, allowing non-experts to assess AI systems for harms like bias and inaccuracy.

👩‍🏫Who leads the PHAWM project?

Professor Simone Stumpf from the University of Glasgow School of Computing Science leads the £3.5m RAi UK-funded project involving 7 UK universities.

📋How does the four-stage auditing process work?

1. Describe the AI system in plain language. 2. Invite diverse stakeholders. 3. Map concerns to metrics, test the AI, and score it pass/fail. 4. Compile action plans for refinement or procurement. Try it at phawm.org.

🏥What are PHAWM's use cases?

Health (NHS), media (Istella), cultural heritage (National Library of Scotland), collaborative content (Wikimedia). Examples include auditing predictive admission tools.

🎓Is PHAWM suitable for higher education?

Yes, it helps UK universities audit AI in research, teaching, and REF submissions, addressing biases amid the adoption risks flagged in 2026 reports. See research jobs.

📚What research publication supports PHAWM?

'Engineering Safe and Trustworthy AI: The PHAWM Project' accepted for EICS 2025. View abstract.

⚖️How does PHAWM address AI biases?

Through stakeholder-defined metrics, privacy mechanisms, and multi-criteria optimization, detecting issues like dataset biases in health AI.

What are the benefits for organizations?

Informed AI procurement, compliance with EU AI Act, reduced harms, and enhanced trust. Training and certification coming soon.

🚨UK AI risks and PHAWM's role?

With 38% of organizations citing privacy fears and 37% worried about ethical misuse, PHAWM empowers universities to mitigate AI risks in education and research.

🚀How to get involved with PHAWM?

Download from phawm.org, join pilots, or explore AI career advice at AcademicJobs.

🔮Future developments for PHAWM?

Certification frameworks, expanded training, and integration for emerging genAI risks by project end.