This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Editorial Position of the American Thoracic Society Journal Family on the Evolving Role of Artificial Intelligence in Scientific Research and Review
Citations: 1
Authors: 10
Year: 2024
Abstract
In recent years, significant advances in large language models (LLMs) and generative artificial intelligence (AI) have driven a rapid integration of AI tools into research (1). For example, researchers are increasingly able to leverage these tools to support literature reviews, data management and analysis, data synthesis, and the editing of scientific writing (2-4). On the editorial side, many medical journals now use AI tools to detect plagiarism and image manipulation, identify expert peer reviewers, and support other aspects of the publication process. However, with the rapid integration of AI tools, clear challenges have arisen that both authors and editors must navigate (together, we hope). Left unaddressed, these challenges could undermine the goals of academic publishing, which we posit are to curate and disseminate new knowledge in the pursuit of truth. For example, LLMs may base their output on outdated or lower-quality information, or generate content that appears to be of high quality but is factually inaccurate. Furthermore, bias in algorithm output and the lack of transparency ("black box") in how LLMs operate raise concerns about the appropriate uses of such tools (5). Peer review, the foundation of the scientific publication community, relies on human judgment and expertise; unfettered adoption of these AI tools therefore risks undermining both the quality of the publishing process and the community's trust in its integrity. As the scale and speed of these tools expand, their benefits to the academic and publication communities will grow. Still, it is imperative that we balance excitement with vigilance for their potential limitations and known flaws.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- National Institutes of Health (US)
- National Institutes of Health Clinical Center (US)
- Nanyang Technological University (SG)
- Tan Tock Seng Hospital (SG)
- Geriatric Education and Research Institute (SG)
- University of Massachusetts Chan Medical School (US)
- Cornell University (US)
- Research Manitoba (CA)
- Children's Hospital Research Institute of Manitoba (CA)
- University of Manitoba (CA)
- Institute for Medical Informatics and Biostatistics (CH)
- University of Pennsylvania (US)
- University of North Carolina at Chapel Hill (US)
- Riley Hospital for Children (US)
- Northwestern University (US)
- Duke University (US)
- Columbia University (US)
- University of Michigan (US)
- Veterans Health Administration (US)
- Michigan United (US)
- Michigan Medicine (US)
- Department of Veterans Affairs (AU)