In a striking demonstration of AI's potential to undermine scientific credibility, a Swedish researcher at the University of Gothenburg crafted a fictitious eye condition called bixonimania and watched as major language models rapidly incorporated it into their knowledge bases as fact. This hoax, detailed in a recent Nature feature, highlights the urgent challenges facing academic publishing in Europe, where universities are grappling with an influx of AI-generated content polluting the research ecosystem.
The Bixonimania Experiment: Origins in Sweden
Almira Osmanovic Thunström, a medical researcher based at the University of Gothenburg, initiated the experiment in March 2024 by publishing blog posts on Medium describing bixonimania—a nonexistent skin disorder around the eyes allegedly triggered by prolonged blue light exposure from screens. Symptoms included sore, itchy eyes and pinkish eyelids, presented in a plausibly medical tone but with the absurd name incorporating 'mania,' a term typically reserved for psychiatric conditions.
To elevate the ruse, Thunström uploaded two preprints to SciProfiles, a lesser-known server, under the pseudonym Lazljiv Izgubljenovic from the fictional Asteria Horizon University in Nova City, California. These documents featured blatant red flags: AI-generated author photos, disclaimers stating 'this entire paper is made up,' and recruitment of 'fifty made-up individuals.' Despite these, the papers gained traction.

AI Chatbots Amplify the Fiction
Within weeks, leading AI models began treating bixonimania as legitimate. On April 13, 2024, Microsoft's Copilot described it as 'an intriguing and relatively rare condition,' while Google's Gemini linked it directly to blue light exposure and recommended ophthalmologist consultations. Perplexity AI cited a prevalence of one in 90,000, and in April 2024 ChatGPT diagnosed a user's symptoms as matching the hoax condition.
Even into 2026, responses varied: on March 11, ChatGPT deemed the condition 'made-up,' but days later elaborated on it as a 'proposed subtype.' This persistence underscores how LLMs, trained on vast web data including preprints, prioritize format over veracity, hallucinating details to fill gaps.
Peer Review Fails: Citation in Legitimate Journals
The hoax infiltrated peer-reviewed literature when a 2024 Cureus paper on periorbital melanosis cited one fake preprint, calling bixonimania an 'emerging form.' Retraction followed in 2026 due to 'irrelevant references, including one to a fictitious disease.' This incident reveals peer reviewers' overreliance on citations without scrutiny, exacerbating AI misinformation spread.
In Europe, where retraction rates for biomedical papers rose from 10.7 to 44.8 per 100,000 between 2000 and 2020, such lapses are alarming. Research misconduct drives most cases, and AI amplifies this vulnerability.
European Experts React with Concern
Alex Ruani, a doctoral researcher at University College London (UCL), UK, called it a 'masterclass on how mis- and disinformation operates,' warning, 'If the scientific process... aren't capturing and filtering out chunks like these, we’re doomed.' Thunström herself noted the intent to test LLM susceptibility, inspired by prompt injection studies.
At Gothenburg, the experiment sparked internal discussions on ethical AI use, aligning with Sweden's push for robust data verification in research.
AI-Generated Papers Flood European Journals
Europe faces a surge in suspicious papers. In 2023, retractions worldwide exceeded 10,000, roughly triple prior annual rates, with AI implicated in many. A 2026 study of 335 AI-related retractions found that 46.3% occurred in 2023, with a median time to retraction of 550 days. UK universities report AI use in peer review, prompting ERC bans on undisclosed use.
Germany's Max Planck Society and France's CNRS emphasize human oversight, while Italy sees 'paper mills' exploiting open access.
University Policies Across the Continent
Responding to threats like bixonimania, European institutions now mandate AI disclosure. The University of Luxembourg's 2026 guidelines require transparency in teaching and assessments. The EU's ethical AI guidelines for educators, updated in March 2026, address rising usage following the AI Act.
UK's University of Edinburgh bans undisclosed AI in exams; Germany's Heidelberg University uses detection tools like Turnitin. France's Sorbonne requires statements on AI contributions in theses. Sweden's Karolinska Institutet, post-Gothenburg, trains on spotting hallucinations.

Detection Tools and Technological Solutions
Tools like GPTZero and OpenAI's classifier aid detection, but false positives disproportionately affect non-native English speakers. The EU AI Act (effective 2026) classifies academic AI systems as high-risk, mandating transparency. Projects at ETH Zurich are developing watermarking for AI-generated text.
Blockchain-based provenance tracking and study pre-registration combat fabrication, as piloted by the Netherlands' Utrecht University.
Case Studies: UK, Germany, France Responses
In the UK, UCL's Ruani advocates protecting 'trust like gold.' Cambridge integrates AI literacy. Germany's LMU Munich retracted 12 AI-suspect papers in 2025. France's PSL University runs workshops on ethical AI authorship.
In a 2024 study, 94% of AI-generated exam submissions evaded detection, prompting policy overhauls across Europe.
Implications for Researchers and Students
Junior researchers risk career damage from tainted citations; students face integrity dilemmas. Bixonimania shows that even obvious fakes persist, eroding the reliability of the literature. European funding bodies like the ERC now demand verification.
Solutions and Path Forward
Experts urge hybrid human-AI review, mandatory disclosure, and training. The EU's 2026 code on AI transparency in publishing supports these efforts. Universities like Oxford are piloting 'AI audits' for submissions.
For Europe's higher education, vigilance, policy evolution, and tech integration are key to safeguarding science.
The EU's Ethical AI Guidelines for Educators outline proactive steps.