This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Detecting and Mitigating Hallucinations in Large Language Models (LLMs) Using Reinforcement Learning in Healthcare
Citations: 0
Authors: 3
Year: 2024
Abstract
Large Language Models (LLMs) have demonstrated significant potential in enhancing healthcare services, including clinical decision support, patient engagement, and medical research. However, their susceptibility to hallucinations (generating factually incorrect, misleading, or fabricated information) poses serious risks in high-stakes medical contexts. This study proposes a reinforcement learning (RL)-based framework to detect and mitigate hallucinations in LLM outputs tailored for healthcare applications. The approach integrates domain-specific knowledge bases with reward-driven fine-tuning to penalize inaccurate or unsupported responses and reinforce factual precision. The model leverages automated fact-checking, uncertainty estimation, and expert-in-the-loop feedback to refine its reasoning process. Experimental evaluation across multiple healthcare datasets, including medical question-answering and clinical note summarization, shows a substantial reduction in hallucination frequency while preserving response fluency and contextual relevance. This research offers a scalable, adaptive strategy for improving the trustworthiness, safety, and ethical deployment of LLMs in healthcare systems.
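As a rough illustration of the reward-driven fine-tuning idea described in the abstract, the sketch below shows how a factuality reward could be computed from the claims in a model response. This is a minimal sketch under assumed simplifications: the `Claim` structure, the exact-match knowledge-base lookup, and the function names (`is_supported`, `factuality_reward`) are illustrative placeholders, not the authors' implementation; a real system would rely on a medical knowledge base, a trained fact-checking model, and calibrated uncertainty estimates.

```python
# Minimal sketch (not the paper's implementation) of a reward signal for
# hallucination-penalizing RL fine-tuning: supported claims are rewarded,
# unsupported claims are penalized, and confident-but-unverified claims
# receive an extra penalty.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]


# Toy stand-in for a domain-specific knowledge base of verified medical facts.
KNOWLEDGE_BASE = {
    "metformin is a first-line treatment for type 2 diabetes",
    "hypertension increases the risk of stroke",
}


def is_supported(claim: Claim) -> bool:
    """Placeholder fact-check: exact-match lookup against the knowledge base."""
    return claim.text.lower() in KNOWLEDGE_BASE


def factuality_reward(claims: list[Claim],
                      unsupported_penalty: float = 1.0,
                      overconfidence_penalty: float = 0.5) -> float:
    """Scalar reward: +1 per supported claim, penalties for unsupported ones."""
    reward = 0.0
    for claim in claims:
        if is_supported(claim):
            reward += 1.0
        else:
            reward -= unsupported_penalty
            # Extra penalty when the model is confidently wrong.
            reward -= overconfidence_penalty * claim.confidence
    return reward


if __name__ == "__main__":
    response = [
        Claim("metformin is a first-line treatment for type 2 diabetes", 0.9),
        Claim("vitamin c cures bacterial pneumonia", 0.8),  # unsupported claim
    ]
    print(factuality_reward(response))  # 1.0 - 1.0 - 0.4 = -0.4
```

In a full pipeline, a scalar reward of this kind would feed a policy-gradient update (e.g., PPO-style fine-tuning), with expert-in-the-loop feedback able to override or refine the automated check.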
Similar Works
"Why Should I Trust You?"
2016 · 14,594 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,861 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,921 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,496 citations