Empowering Non-Experts in AI Safety Auditing with PHAWM Workbench
The University of Glasgow has unveiled the PHAWM Workbench, a groundbreaking free tool designed to democratize AI safety auditing. Led by Professor Simone Stumpf from the School of Computing Science, this open-source platform enables individuals without deep AI expertise—such as end-users, patients, and cultural heritage professionals—to rigorously evaluate AI systems for potential harms like bias, inaccuracies, and unfair outcomes.
In an era where AI influences sectors from healthcare to education, traditional audits often overlook social and cultural impacts because they are conducted solely by technical experts. The PHAWM Workbench changes this by facilitating participatory audits that incorporate diverse lived experiences, ensuring AI applications are not only technically sound but also fair and trustworthy.
Background: The Urgent Need for Participatory AI Auditing
Rapid AI adoption in the UK has brought immense benefits, with 75% of organizations reporting improved workforce productivity and 57% developing new processes.
The EU AI Act (2024) mandates risk assessments, yet tools usable by non-experts remain scarce. PHAWM fills this gap, building on UK initiatives like the AI Safety Institute, which has documented harms such as biased chatbot outputs and skewed viewpoints.
Overview of the PHAWM Project
Initiated in May 2024, PHAWM unites over 30 researchers from seven UK universities—including Glasgow, Sheffield, Strathclyde, and York—and 28 partners like NHS National Services Scotland (NSS), Wikimedia, and the National Library of Scotland.
Professor Stumpf emphasizes: “Our workbench provides diverse perspectives on AI applications which might otherwise go unexamined,” fostering fairer systems in critical areas like policing, finance, and research jobs.
How the PHAWM Workbench Works: A Step-by-Step Guide
The tool follows a structured four-stage participatory process, accessible via phawm.org (a minimal code sketch of how one audit might be recorded follows the list):
- Stage 1: The audit instigator describes the AI system in plain language, outlining its purpose and inputs/outputs.
- Stage 2: Invite diverse stakeholders—end-users, affected communities—to join, ensuring representation.
- Stage 3: Participants map their concerns to measurable metrics, test the AI, and score each metric (pass/fail), drawing on lived experience to surface biases or harms.
- Stage 4: Compile insights into action plans for AI refinement or procurement decisions.
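To make the four stages concrete, here is a minimal Python sketch of how a single audit could be recorded. It is purely illustrative: the class and field names are hypothetical and do not reflect PHAWM's actual data model or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Concern:
    """Stage 3 artefact: a stakeholder worry tied to a measurable metric."""
    description: str               # e.g. "older patients are over-flagged"
    metric: str                    # the measurable proxy chosen by participants
    passed: Optional[bool] = None  # pass/fail verdict once the AI has been tested

@dataclass
class Audit:
    system_description: str                                 # Stage 1: plain-language description
    stakeholders: List[str] = field(default_factory=list)   # Stage 2: who takes part
    concerns: List[Concern] = field(default_factory=list)   # Stage 3: concerns -> metrics
    action_plan: List[str] = field(default_factory=list)    # Stage 4: what to do next

    def summary(self) -> str:
        scored = [c for c in self.concerns if c.passed is not None]
        failed = [c for c in scored if not c.passed]
        return f"{len(failed)} of {len(scored)} scored concerns failed"

# A minimal (invented) healthcare example
audit = Audit(system_description="Predicts emergency admissions from GP records")
audit.stakeholders += ["patient representative", "GP", "data scientist"]
audit.concerns.append(Concern("older patients over-flagged",
                              "admission-flag rate gap across age bands", passed=False))
audit.action_plan.append("re-weight training data before procurement sign-off")
print(audit.summary())  # -> "1 of 1 scored concerns failed"
```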
Features include user-friendly UIs explaining AI concepts, privacy protections (e.g., against inference attacks), multi-criteria optimization for trade-offs, and computational argumentation to counter groupthink.
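The multi-criteria optimization mentioned above can be pictured with a short, hypothetical sketch: given several candidate model configurations scored on competing criteria, keep only the Pareto-optimal ones, i.e. those no other candidate beats on every criterion. This is a generic technique, not PHAWM's published implementation.

```python
def dominates(a: dict, b: dict, criteria: list) -> bool:
    """True if `a` is at least as good as `b` everywhere and strictly better somewhere."""
    at_least_as_good = all(a[c] >= b[c] for c in criteria)
    strictly_better = any(a[c] > b[c] for c in criteria)
    return at_least_as_good and strictly_better

def pareto_front(candidates: list, criteria: list) -> list:
    # Keep candidates that no other candidate dominates.
    return [c for c in candidates
            if not any(dominates(other, c, criteria) for other in candidates)]

# Toy data; higher is better for every criterion.
models = [
    {"name": "A", "accuracy": 0.91, "fairness": 0.70, "privacy": 0.60},
    {"name": "B", "accuracy": 0.88, "fairness": 0.85, "privacy": 0.80},
    {"name": "C", "accuracy": 0.87, "fairness": 0.80, "privacy": 0.75},  # dominated by B
]
for m in pareto_front(models, ["accuracy", "fairness", "privacy"]):
    print(m["name"])  # -> A, B
```

Stakeholders then argue over which point on the front to accept, which is exactly the trade-off discussion the workbench is built to support.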
Use Case 1: Auditing AI in Healthcare
In healthcare, PHAWM targets tools like SPARRA, which predicts emergency hospital admissions in Scotland. Audits assess patient impacts, dataset biases, and privacy, involving both clinicians and patients. For instance, stakeholders can identify whether predictions unfairly disadvantage certain demographics, leading to refined models.
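As an illustration of the kind of bias check such an audit might run, the toy sketch below compares the rate of "high-risk" flags across demographic groups. The data and field names are invented; a real SPARRA audit would use actual model outputs under NHS data governance.

```python
from collections import defaultdict

def flag_rates(records: list) -> dict:
    """Share of patients flagged high-risk, broken down by group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += r["high_risk"]
    return {g: flagged[g] / total[g] for g in total}

predictions = [
    {"group": "urban", "high_risk": 1}, {"group": "urban", "high_risk": 0},
    {"group": "rural", "high_risk": 1}, {"group": "rural", "high_risk": 1},
]
rates = flag_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap is a red flag worth discussing
```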
Explore postdoctoral research roles advancing safety in health AI.
Use Case 2: Media Content Moderation
For media, partners like Istella use PHAWM to audit predictive AI for content flagging. Non-experts evaluate accuracy, the fairness of hate-speech detection, and cultural biases, helping to ensure reliable moderation.
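One concrete fairness probe for a flagging model is the per-group false-positive rate, since wrongly flagging one community's benign posts is a common moderation harm. The sketch below uses made-up data and is not Istella's or PHAWM's code.

```python
def false_positive_rate(posts: list) -> float:
    """Fraction of benign posts that the model wrongly flagged."""
    benign = [p for p in posts if not p["is_hate"]]
    return sum(p["flagged"] for p in benign) / len(benign) if benign else 0.0

posts = [
    {"group": "en", "is_hate": False, "flagged": False},
    {"group": "en", "is_hate": True,  "flagged": True},
    {"group": "scots", "is_hate": False, "flagged": True},   # benign post wrongly flagged
    {"group": "scots", "is_hate": False, "flagged": False},
]
for group in ("en", "scots"):
    subset = [p for p in posts if p["group"] == group]
    print(group, f"FPR = {false_positive_rate(subset):.2f}")  # en 0.00, scots 0.50
```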
Use Case 3: Cultural Heritage Preservation
Collaborating with the National Library of Scotland and museums, PHAWM audits generative AI used for metadata and exhibit descriptions. Tools like the extended ArticlePlaceholder validate text groundedness, provenance, and stereotype avoidance, preventing historical distortions.
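Groundedness can be approximated crudely by checking that every content word in a generated description is traceable to the source record. The sketch below is a naive, hypothetical stand-in for the extended ArticlePlaceholder's far more sophisticated validation.

```python
import re

def content_words(text: str) -> set:
    """Lowercased alphabetic words, minus a few stop words."""
    stop = {"the", "a", "an", "of", "in", "is", "was", "and", "by"}
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop}

def ungrounded_sentences(generated: str, source: str) -> list:
    """Sentences whose content words are not all present in the source record."""
    source_words = content_words(source)
    return [s.strip() for s in generated.split(".")
            if s.strip() and not content_words(s) <= source_words]

source = "Engraving of Glasgow Cathedral, printed in 1820 by Joseph Swan."
generated = ("An engraving of Glasgow Cathedral printed by Joseph Swan in 1820. "
             "It was commissioned by Queen Victoria.")
print(ungrounded_sentences(generated, source))
# -> ['It was commissioned by Queen Victoria']  (unsupported claim flagged for review)
```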
Use Case 4: Collaborative Content Generation
With Wikimedia, PHAWM audits wiki-like AI for collaborative editing, using chain-of-thought explanations and bias metrics to ensure factual, unbiased expansions.
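A simple, hypothetical bias metric for generated text counts mentions across comparable term sets; a lopsided count does not prove bias, but it gives editors a concrete number to discuss.

```python
# Illustrative term sets; a real audit would choose these with stakeholders.
TERM_SETS = {"women": {"she", "her", "woman", "women"},
             "men":   {"he", "his", "man", "men"}}

def mention_counts(text: str) -> dict:
    """Count how often each term set is mentioned in the draft."""
    words = [w.strip(".,;!?") for w in text.lower().split()]
    return {label: sum(w in terms for w in words)
            for label, terms in TERM_SETS.items()}

draft = "He founded the society. He led it for years; she joined later."
print(mention_counts(draft))  # -> {'women': 1, 'men': 2} -- a prompt to review balance
```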
The Research Publication Behind PHAWM
Core to this launch is the paper “Engineering Safe and Trustworthy AI: The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) Project” by Stumpf et al., accepted for the 17th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2025).
Researchers can find opportunities in research assistant jobs focusing on AI ethics.
Read the PHAWM project paper.
Implications for UK Higher Education and Research
UK universities are pivotal in AI research and teaching, but 2026 outlook reports warn of uneven adoption and of risks like the undermining of cognitive skills in education (a top concern globally).
As AI transforms research, tools like PHAWM help institutions prevent harm-related scandals and give researchers demonstrable AI-ethics strengths for their academic CVs.
Challenges, Solutions, and Future Outlook
Challenges include recruiting stakeholders and scaling audits; PHAWM plans to address both through training and certification frameworks.
- Benefits: Informed procurement, reduced biases, compliance.
- Risks mitigated: Data privacy breaches, unfair decisions.
- Comparisons: Unlike expert-only tools, PHAWM is inclusive.
By 2030, expect widespread adoption across UK higher education, driving safer AI research. For more detail, visit the University of Glasgow's announcement and the RAi UK project page.
Getting Started with PHAWM: Actionable Insights for Researchers
Download the workbench from phawm.org and pilot participatory audits in your own lab. For career growth in AI safety, browse higher ed jobs, professor ratings, and career advice on AcademicJobs.com. Share your experiences in the comments below.