OpenAlex · Updated hourly · Last updated: 13.03.2026, 16:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI

2023 · 13 citations · arXiv (Cornell University) · Open Access

13 citations · 3 authors · Year: 2023
Abstract

Large language models have proliferated across multiple domains in a short period of time. There is, however, hesitation in the medical and healthcare domains toward their adoption because of issues like factuality, coherence, and hallucinations. Given the high-stakes nature of healthcare, many researchers have even cautioned against their usage until these issues are resolved. The key to the implementation and deployment of LLMs in healthcare is to make these models trustworthy, transparent (as much as possible), and explainable. In this paper, we describe the key elements in creating reliable, trustworthy, and unbiased models as a necessary condition for their adoption in healthcare. Specifically, we focus on the quantification, validation, and mitigation of hallucinations in the context of healthcare. Lastly, we discuss what the future of LLMs in healthcare may look like.


Topics

Artificial Intelligence in Healthcare and Education · Biomedical Text Mining and Ontologies · Machine Learning in Healthcare