
AI Hoax Disease Bixonimania Exposes Cracks in European Academic Integrity

Swedish Hoax Tests AI and Peer Review Vulnerabilities



In a striking demonstration of AI's potential to undermine scientific credibility, a Swedish researcher at the University of Gothenburg crafted a fictitious eye condition called bixonimania and watched as major language models rapidly incorporated it into their knowledge bases as fact. This hoax, detailed in a recent Nature feature, highlights the urgent challenges facing academic publishing in Europe, where universities are grappling with the influx of AI-generated content polluting the research ecosystem.

The Bixonimania Experiment: Origins in Sweden

Almira Osmanovic Thunström, a medical researcher based at the University of Gothenburg, initiated the experiment in March 2024 by publishing blog posts on Medium describing bixonimania—a nonexistent skin disorder around the eyes allegedly triggered by prolonged blue light exposure from screens. Symptoms included sore, itchy eyes and pinkish eyelids, presented in a plausibly medical tone but with the absurd name incorporating 'mania,' a term typically reserved for psychiatric conditions.

To elevate the ruse, Thunström uploaded two preprints to SciProfiles, a lesser-known server, under the pseudonym Lazljiv Izgubljenovic from the fictional Asteria Horizon University in Nova City, California. These documents featured blatant red flags: AI-generated author photos, disclaimers stating 'this entire paper is made up,' and recruitment of 'fifty made-up individuals.' Despite these, the papers gained traction.

[Image: screenshot of a fake bixonimania preprint showing obvious hoax indicators]

AI Chatbots Amplify the Fiction

Within weeks, leading AI models began treating bixonimania as legitimate. On April 13, 2024, Microsoft's Copilot described it as 'an intriguing and relatively rare condition,' while Google's Gemini linked it directly to blue light exposure and recommended ophthalmologist consultations. Perplexity AI cited a prevalence of one in 90,000, and that same month ChatGPT matched a user's described symptoms to the hoax condition.

Even into 2026, responses varied: ChatGPT on March 11 deemed it 'made-up,' but days later elaborated on it as a 'proposed subtype.' This persistence underscores how LLMs, trained on vast web data including preprints, prioritize format over veracity, hallucinating details to fill gaps.

Peer Review Fails: Citation in Legitimate Journals

The hoax infiltrated peer-reviewed literature when a 2024 Cureus paper on periorbital melanosis cited one fake preprint, calling bixonimania an 'emerging form.' Retraction followed in 2026 due to 'irrelevant references, including one to a fictitious disease.' This incident reveals peer reviewers' overreliance on citations without scrutiny, exacerbating AI misinformation spread.

In Europe, where retraction rates for biomedical papers rose from 10.7 to 44.8 per 100,000 between 2000 and 2020, such lapses are alarming. Research misconduct drives most retractions, and AI amplifies this vulnerability.

European Experts React with Concern

Alex Ruani, a doctoral researcher at University College London (UCL), UK, called it a 'masterclass on how mis- and disinformation operates,' warning, 'If the scientific process... aren't capturing and filtering out chunks like these, we’re doomed.' Thunström herself noted the intent to test LLM susceptibility, inspired by prompt injection studies.

At Gothenburg, the experiment sparked internal discussions on ethical AI use, aligning with Sweden's push for robust data verification in research.

AI-Generated Papers Flood European Journals

Europe faces a surge in suspicious papers. In 2023, global retractions exceeded 10,000, roughly triple prior rates, with AI implicated in many. A 2026 study of 335 AI-related retractions found that 46.3% occurred in 2023, with a median time to retraction of 550 days. UK universities report AI-generated text appearing in peer reviews, prompting the European Research Council (ERC) to ban undisclosed AI use.

Germany's Max Planck Society and France's CNRS emphasize human oversight, while Italy sees 'paper mills' exploiting open access.

University Policies Across the Continent

Responding to threats like bixonimania, European institutions now mandate AI disclosure. The University of Luxembourg's 2026 guidelines require transparency in teaching and assessments, and the EU's ethical AI guidelines for educators, updated in March 2026, address rising usage in the wake of the AI Act.

UK's University of Edinburgh bans undisclosed AI in exams; Germany's Heidelberg University uses detection tools like Turnitin. France's Sorbonne requires statements on AI contributions in theses. Sweden's Karolinska Institutet, post-Gothenburg, trains on spotting hallucinations.

[Image: graph showing the rise in AI paper detections at European universities]

Detection Tools and Technological Solutions

Tools like GPTZero and OpenAI's classifier aid detection, but false positives disproportionately flag non-native English speakers. The EU AI Act (effective 2026) classifies academic AI systems as high-risk, mandating transparency. Projects at ETH Zurich are developing watermarking for AI-generated text.

Blockchain for provenance and pre-registration combat fabrication, as piloted by Netherlands' Utrecht University.

Case Studies: UK, Germany, France Responses

In the UK, UCL's Ruani advocates protecting 'trust like gold.' Cambridge integrates AI literacy. Germany's LMU Munich retracted 12 AI-suspect papers in 2025. France's PSL University runs workshops on ethical AI authorship.

Across Europe, 94% of AI-generated exam submissions evaded detection in a 2024 study, prompting policy overhauls.

Implications for Researchers and Students

Junior researchers risk career damage from tainted citations; students face integrity dilemmas. Bixonimania shows even obvious fakes persist, eroding literature reliability. European funding bodies like ERC demand verification.

Solutions and Path Forward

Experts urge hybrid human-AI review, mandatory disclosure, and training. The EU's 2026 code on AI transparency in publishing supports these efforts. Universities like Oxford are piloting 'AI audits' for submissions.

For Europe's higher education, vigilance, policy evolution, and tech integration are key to safeguarding science.

EU Ethical AI Guidelines for Educators outline proactive steps.
Prof. Evelyn Thorpe

Contributing Writer

Promoting sustainability and environmental science in higher education news.


Frequently Asked Questions

🩹What is bixonimania?

Bixonimania is a completely fictional eye condition invented by researcher Almira Osmanovic Thunström at the University of Gothenburg. Described as periorbital melanosis from blue light exposure, it never existed but was promoted by AI models.

📄How did the bixonimania hoax spread?

Thunström published fake Medium blog posts in March 2024 and preprints on SciProfiles in April-May 2024, complete with obvious giveaways such as disclaimers. AI chatbots cited them within weeks, and a Cureus paper referenced one before being retracted.

🤖Which AI models promoted bixonimania?

ChatGPT, Gemini, Copilot, and Perplexity AI treated it as real, providing symptoms and advice. Persistence into 2026 shows training data vulnerabilities.

🔍Why did peer review fail?

Reviewers cited the preprints without verification. Cureus retracted the paper after noticing the fake reference, highlighting a habit of citing sources without checking their content.

🇪🇺What is Europe's response to AI fake papers?

EU guidelines updated in 2026 mandate ethical AI use. Universities like Luxembourg and Edinburgh require disclosure.

📉How many AI-related retractions in Europe?

Biomedical retraction rates have risen to 44.8 per 100,000 papers, and roughly 46% of AI-related retractions analyzed in a 2026 study occurred in 2023. Europe has been hit hard by paper mills and AI hallucinations.

🎓What policies do UK universities have?

UCL and Cambridge emphasize AI literacy and audits, and the ERC bans undisclosed AI use in peer review.

📚What are German and French universities doing?

Heidelberg uses Turnitin; the Sorbonne mandates AI statements in theses. Both countries are focusing on watermarking and training.

🛡️Detection tools for AI papers?

GPTZero and Turnitin help, but false positives are common. The EU AI Act classifies academic AI as high-risk.

🔮What is the future for European academia?

Expect hybrid review, blockchain provenance, and mandatory disclosure to restore trust amid rising AI threats like bixonimania.

💡What are the lessons from the Gothenburg hoax?

Even obvious fakes spread, so a culture of verification is needed. Thunström's stated aim was to test how misinformation moves through LLM pipelines.