This is an overview page with metadata for this scientific work. The full article is available from the publisher.
How to Prevent Hallucination in Artificial Intelligence-Assisted Clinical Practice
Citations: 1
Authors: 1
Year: 2025
Abstract
The integration of artificial intelligence (AI) into clinical practice has ushered in new frontiers in diagnostic accuracy, operational efficiency, and healthcare accessibility. However, an emerging concern in AI-assisted healthcare is the phenomenon of “hallucination,” the generation of incorrect, fabricated, or unverifiable information, which can mislead clinical decision-making. This review examines the causes and implications of hallucinations in AI-generated clinical data and proposes practical mitigation strategies. Hallucinations can be minimized through enhanced model training, validation using high-quality medical datasets, robust human oversight, adherence to ethical design principles, and the implementation of comprehensive regulatory frameworks, thereby ensuring the safe, ethical, and effective deployment of AI in clinical settings. Interdisciplinary collaboration is critical to improve model transparency and reliability.
Similar works
"Why Should I Trust You?"
2016 · 14,255 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,625 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,396 citations