OpenAlex · Updated hourly · Last updated: 22 Mar 2026, 00:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

How to Prevent Hallucination in Artificial Intelligence-Assisted Clinical Practice

2025 · 1 citation · Gyemyeong uidae haksulji · Open Access
Open full text at the publisher


Abstract

The integration of artificial intelligence (AI) into clinical practice has ushered in new frontiers in diagnostic accuracy, operational efficiency, and healthcare accessibility. However, an emerging concern in AI-assisted healthcare is the phenomenon of “hallucination,” the generation of incorrect, fabricated, or unverifiable information, which can mislead clinical decision-making. This review examines the causes and implications of hallucinations in AI-generated clinical data and proposes practical mitigation strategies. Hallucinations can be minimized through enhanced model training, validation using high-quality medical datasets, robust human oversight, adherence to ethical design principles, and the implementation of comprehensive regulatory frameworks, thereby ensuring the safe, ethical, and effective deployment of AI in clinical settings. Interdisciplinary collaboration is critical to improve model transparency and reliability.

Topics

Machine Learning in Healthcare · Mental Health and Psychiatry · Artificial Intelligence in Healthcare and Education