This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Charting Truth, Trust, and Transformers: A Critical Look at AI Text Detection and Recommendations for Medical Journals
Citations: 0 · Authors: 4 · Year: 2025
Abstract
This editorial provides an overview of large language models (LLMs), the risks associated with their use, and the challenges involved in using artificial intelligence (AI) to determine the extent to which LLMs have been used to write text, including articles for publication in medical journals. As narratives generated by LLMs become increasingly difficult to distinguish from human writing, concerns have emerged about their impact on scholarly communication, particularly in health and medicine. The medical community is becoming more aware of various tools that can detect AI-generated text; however, adopting these tools comes with unique challenges. The purpose of this article is to provide readers with an understanding of how AI text detectors work, the limitations of these tools, and recommendations for what medical editors, reviewers, and readers can do to navigate these challenges, along with future directions to help safeguard the integrity of scholarly work.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations