University of Edinburgh Staff Urge End to OpenAI Deal Over US Military Links

Ethical Clash: AI Innovation Meets Protest in Scottish Academia

  • ai-ethics
  • higher-education-ai
  • higher-education-news
  • university-of-edinburgh
  • openai-protest


People ascend a wide stone staircase between old buildings.
Photo by Antony Hyson Seltran on Unsplash


Over 350 staff members at the University of Edinburgh have signed an open letter demanding that the institution not renew its contract with OpenAI, the creator of ChatGPT. The protest, which has gained significant attention in higher education circles, centers on ethical and security concerns and on the partnership's alignment with the university's principles. As the contract approaches its end, the movement highlights growing tensions around artificial intelligence (AI) partnerships in academia, particularly in light of OpenAI's recent collaboration with the US military.

The University of Edinburgh's Edinburgh Language Model (ELM) platform provides staff and students with access to various large language models (LLMs), including those from OpenAI. ELM serves as a secure gateway to generative AI, emphasizing responsible use through guidelines that require users to agree to ethical standards before access. However, critics argue that including OpenAI's proprietary models contradicts these very standards.


The Open Letter: A Collective Call for Change

The open letter, addressed to the EDINA leadership team responsible for the OpenAI contract, outlines in detail why continuing the partnership is untenable. Signatories include prominent AI experts such as Adam Lopez, Reader in the School of Informatics; Zeerak Talat, Chancellor's Fellow in Responsible Machine Learning; and Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and AI. Other signatories include researchers, lecturers, and administrative staff from across disciplines, demonstrating broad community support.

James Galbraith, a postdoctoral research associate in the School of Biological Sciences, articulated a core sentiment: "The central issue is that contracting OpenAI to provide LLMs to staff and students does not follow the university’s AI policies, in particular the labour rights issues, the impact their data centres are having on the communities they have been built in, and their contracts with the US military." The letter urges a shift exclusively to open-weight models hosted locally within ELM, allowing better monitoring of energy use and alignment with ethical procurement.

The document is available for public viewing and signing via Google Forms, reflecting transparency in the protesters' approach.

Background on the University of Edinburgh's AI Initiatives

ELM, launched as the university's AI innovation platform, aims to offer safer access to generative AI tools. It integrates multiple LLMs, with OpenAI's models among them, paid centrally via API on a token basis to reduce costs significantly—up to 90% in some cases. The platform mandates that users read and accept AI guidelines, covering data protection, ethical use, and risks like hallucinations or bias.

The University of Edinburgh, a global leader in informatics and AI research, has invested heavily in this infrastructure. Information Services Group (ISG) collaborates with EDINA to ensure responsible exploration of emerging technologies. Guidelines for staff and students emphasize limitations of generative AI, such as potential inaccuracies, and promote its use as a supportive tool rather than a replacement for critical thinking. More details on ELM can be found on the official university page.

Core Concerns: OpenAI's US Military Ties

A pivotal trigger for the protest is OpenAI's February 2026 agreement with the US Department of War (formerly Defense), enabling deployment of its AI models in classified networks. Announced shortly after the Trump administration blacklisted rival Anthropic for refusing unrestricted military use, the deal faced immediate backlash for potential involvement in autonomous weapons and surveillance.

OpenAI's official statement details "red lines": no mass domestic surveillance of US persons, no directing autonomous weapons (requiring human oversight), and no high-stakes automated decisions like social credit systems. Despite these safeguards, protesters view it as enabling semi-autonomous weaponry, conflicting with UN calls for bans on lethal autonomous weapons systems (LAWS). The letter also cites OpenAI's services to US Immigration and Customs Enforcement (ICE), accused of paramilitary actions.

For full details, see OpenAI's agreement post.

Safety and Security Risks Highlighted

The letter documents OpenAI's poor safety record: multiple court cases involving harm, including suicides linked to ChatGPT interactions. Security is another flashpoint, with OpenAI reporting the highest number of data breaches among LLM providers and a low security score. Protesters contrast this with alternatives like Mistral, which prioritize privacy by design.

Responsible access is also in question: because OpenAI's models run on remote infrastructure, they bypass the local energy monitoring that the university holds as a key principle. OpenAI's lobbying against AI regulation, including $100 million in Super PAC funding and donations to political figures, further erodes trust.

Ethical and Environmental Misalignments

OpenAI's procurement practices violate the university's Responsible Procurement Policy, which demands carbon emission targets, fair work commitments, and human rights alignment. Data centers' massive climate impact remains opaque, conflicting with net-zero goals. Labor exploitation is stark: Kenyan workers earned less than $2/hour moderating traumatic content, described as "modern-day slavery" in open letters.

Bias in models perpetuates discrimination, with studies showing unprecedented discriminatory attitudes toward Black Americans and place-based inequalities. Transparency lags, as proprietary nature hinders accountability in research and education.

The University's Measured Response

Gavin McLachlan, Vice-Principal, Chief Information Officer, and Librarian, responded thoughtfully: "The university aims to provide all students and staff safer access to AI tools and technology in a way that aligns with our values. We welcome the opportunity to engage with our community... and plan to discuss the concerns directly with the authors." He highlighted existing AI training, guidelines, and secure platforms protecting data privacy.

The administration positions ELM as offering choice among LLMs in a managed framework, supporting diverse disciplines without compromising security.

OpenAI's Defense Against Criticisms

An OpenAI spokesperson countered: "This letter makes misleading claims about OpenAI. We’re focused on building AI that is safe, useful, and benefits as many people as possible." They emphasize heavy safety investments and global government collaborations for responsible deployment.

Broader Implications for UK Higher Education

This protest echoes global debates on AI ethics in academia. The University of Oxford partners with OpenAI for education-focused ChatGPT and grants, while Manchester uses Microsoft Copilot. In the UK, concerns over military AI ties, job displacement, and ethics are mounting, with calls for open-source alternatives.

Stakeholders like the Centre for Technomoral Futures at Edinburgh underscore the need for responsive policies. As AI integrates deeper into teaching and research, universities must balance innovation with principles—a challenge for institutions worldwide.


Stakeholder Perspectives and Expert Insights

  • AI Experts: Signatories like Shannon Vallor advocate for ethical oversight, warning of societal risks.
  • Students: Concerns over skill erosion from over-reliance on LLMs.
  • Administrators: Value cost savings and managed access but face pressure to diversify providers.
  • Industry: OpenAI pushes accessibility; critics push local, open models like Llama.

Future Outlook: Navigating AI in Academia

The University of Edinburgh is benchmarking more local LLMs, potentially phasing out OpenAI. This could set a precedent, encouraging procurement based on ethics audits. Actionable steps include prioritizing open-weight models, enhancing transparency reporting, and stakeholder consultations.

For higher education professionals, this underscores the importance of aligning tech partnerships with institutional values, fostering innovation responsibly amid rapid AI evolution.

For comprehensive coverage, refer to the Times Higher Education article.

Prof. Marcus Blackwell, Contributing Writer

Shaping the future of academia with expertise in research methodologies and innovation.


Frequently Asked Questions

📜What is the University of Edinburgh OpenAI protest about?

Staff signed an open letter urging non-renewal of the OpenAI contract due to military ties, safety issues, and ethical misalignments.

✍️How many staff signed the open letter?

More than 350, including AI experts like Shannon Vallor and Zeerak Talat.

🤖What is ELM and how does it relate to OpenAI?

ELM is Edinburgh's AI platform providing LLM access, including OpenAI models via API for safer, managed use.

⚔️Why are the military links a concern?

OpenAI's Pentagon deal allows classified use, raising fears of autonomous weapons despite red lines.

🚨What safety issues does OpenAI have?

The letter cites court cases involving harm, including suicides linked to ChatGPT interactions, and the highest number of data breaches among LLM providers.

💬How has the university responded to the protest?

It plans to discuss the concerns directly with the letter's authors and emphasizes its existing guidelines and value-aligned access to AI tools.

🛡️What is OpenAI's rebuttal?

OpenAI calls the letter's claims misleading and points to its heavy investment in safety.

🌍What are the environmental and labor concerns?

Opaque energy use, exploited moderators at low wages.

🔄What alternatives to OpenAI are suggested?

Locally hosted open-weight models such as Llama, for better ethical alignment.

🏫What are the implications for UK higher education?

The protest pushes for ethics-based AI procurement and could set a precedent for other universities such as Oxford.

📋What safeguards are in the OpenAI military deal?

No mass domestic surveillance, no directing of autonomous weapons, and required human oversight.