This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Risks of abuse of large language models, like <scp>ChatGPT</scp>, in scientific publishing: Authorship, predatory publishing, and paper mills
Citations: 77
Authors: 2
Year: 2023
Abstract
Key points:
- Academia is already witnessing the abuse of authorship in papers with text generated by large language models (LLMs) such as ChatGPT.
- LLM-generated text is testing the limits of publishing ethics as we traditionally know it.
- We alert the community to imminent risks of LLM technologies, like ChatGPT, for amplifying the predatory publishing 'industry'.
- The abuse of ChatGPT for the paper mill industry cannot be over-emphasized.
- Detection of LLM-generated text is the responsibility of editors and journals/publishers.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations