This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Towards a transparent and reproducible AI-assisted research paper writing
0 Citations · 1 Author · Year: 2025
Abstract
Artificial intelligence (AI)-assisted scientific writing is now a common practice in academic publishing, yet concerns persist regarding the authenticity and reproducibility of AI-generated content. While AI tools offer significant advantages, particularly for non-native English speakers who face substantial linguistic barriers in scientific communication, the risk of AI hallucinations and fabricated citations threatens the integrity of scholarly discourse. Journals often require disclosure of the entire AI prompt rather than meaningful intellectual contributions, but this is becoming increasingly impractical as AI prompts grow longer and more complex. In this paper, I argue that transparency in AI-assisted writing should focus on capturing the author's core research perspective and section-specific key points: the foundational elements that drive meaningful scientific communication. To address this challenge, I developed a web-based tool that implements a human-in-the-loop approach requiring authors to define their research perspective and create detailed outlines with key points before any AI text generation occurs. The tool mitigates AI hallucination by allowing only user-provided citations and by generating transparency reports documenting the key elements used for text generation. I validated this approach by writing this paper using the tool itself, demonstrating how the transparency reporting method works in practice. This methodology ensures that AI serves as a linguistic tool rather than a content generator, preserving scientific integrity while democratizing access to high-quality academic writing across linguistic and cultural boundaries.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations