OpenAlex · Updated hourly · Last updated: 21.03.2026, 04:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Hierarchical Reinforcement Learning for Detecting Safety and Reliability Vulnerabilities in Large Language Model-Assisted Healthcare Systems

2025 · 0 citations
Open full text at publisher

0 citations · 5 authors · Year: 2025

Abstract

The integration of large language models (LLMs) into telemedicine has transformed digital health by supporting clinical decision-making and patient interactions. However, deploying LLMs in healthcare poses critical risks to health data. Moreover, existing vulnerability-detection standards lack a practical approach to the dynamic, context-dependent interactions characteristic of healthcare systems. To overcome these hurdles, this paper presents a Hierarchical Reinforcement Learning (HRL) framework explicitly designed to detect and mitigate vulnerabilities in LLM-assisted healthcare applications. The proposed HRL model decomposes clinical interactions into hierarchical tasks, enabling efficient modeling of complex, multi-step clinical reasoning and nuanced conversational dynamics. Our approach integrates safety-aware reward engineering and policy trajectory analysis to systematically identify risky LLM behaviors. Experimental validation on realistic clinical scenarios demonstrates that our HRL-based approach significantly outperforms conventional methods, providing robust detection of safety and reliability vulnerabilities in healthcare LLM systems.
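Since only the abstract is available here, the following is a minimal sketch of how the two ingredients the abstract names, safety-aware reward engineering and policy trajectory analysis, could fit together. All subtask names, unsafe-action labels, and reward values are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: the paper's full text is at the publisher,
# so the subtasks, action names, and reward magnitudes below are
# assumptions chosen purely to show the structure of the idea.

# High-level subtasks a manager policy might decompose a clinical
# interaction into (hierarchical task decomposition).
SUBTASKS = ("triage", "history_taking", "medication_advice")

# Hypothetical low-level actions treated as unsafe in a clinical context.
UNSAFE_ACTIONS = {"dose_recommendation_without_history"}

def safety_aware_reward(action: str, task_success: bool) -> float:
    """Safety-aware reward engineering: the task reward is combined
    with a penalty large enough to dominate when an action is unsafe."""
    reward = 1.0 if task_success else 0.0
    if action in UNSAFE_ACTIONS:
        reward -= 5.0  # safety penalty outweighs any task reward
    return reward

def trajectory_is_vulnerable(trajectory: list[tuple[str, bool]]) -> bool:
    """Policy trajectory analysis: flag a rollout whose cumulative
    safety-aware return falls below zero as a detected vulnerability."""
    return sum(safety_aware_reward(a, ok) for a, ok in trajectory) < 0.0

# A rollout in which every step "succeeds" on the task objective,
# yet one unsafe action makes the whole trajectory a vulnerability.
rollout = [("history_taking", True),
           ("dose_recommendation_without_history", True)]
print(trajectory_is_vulnerable(rollout))  # -> True
```

The point of the sketch is that per-step task success alone cannot surface the risk; only the trajectory-level, safety-weighted return reveals that the rollout should be flagged.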
