This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Regulating the unseen hand: AI, authorship, and trust in medical science
Citations: 0
Authors: 1
Year: 2025
Abstract
Large language models (LLMs) have transformed medical research and scientific publishing by facilitating manuscript preparation, literature synthesis, and editorial processes, yet pose significant threats to research integrity through generation of potential pseudoscientific content. Current AI detection algorithms demonstrate inconsistent reliability, particularly against paraphrased or humanized content, while LLM integration in peer review compromises expert critical evaluation and homogenizes scientific discourse. These systems exhibit documented bias against non-male, non-white researchers, compounding ethical concerns. Heterogeneous editorial policies regarding AI disclosure across medical journals create regulatory gaps enabling undetected misconduct. However, excessive focus on detection over content quality risks establishing counterproductive "AI phobia" that impedes legitimate technological integration. Preserving research credibility requires standardized disclosure frameworks, enhanced detection algorithms, comprehensive privacy safeguards, and mandatory AI watermarking systems to maintain scientific integrity while accommodating technological advancement in research practices.
Related works
World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects
2003 · 10,819 citations
Estimating the mean and variance from the median, range, and the size of a sample
2005 · 8,926 citations
SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials
2013 · 6,940 citations
The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research
2020 · 5,215 citations
The global landscape of AI ethics guidelines
2019 · 4,495 citations