This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Design and Evaluation of an Intelligent Agent-Based Home Healthcare System for Clinical Decision Support
Citations: 0
Authors: 8
Year: 2026
Abstract
The rapid development of large language models (LLMs) has stimulated growing interest in medical intelligent agents for clinical decision support. However, existing systems often suffer from limited grounding in authoritative medical knowledge, potential safety risks, and a tendency to generate definitive diagnostic conclusions without sufficient clinical context. In this work, we present the design of a medical intelligent agent aimed at supporting clinical decision-making through evidence-grounded information retrieval and safety-aware interaction. The proposed system focuses on two primary functions: (i) providing drug usage guidance, dosage information, and food–drug interaction warnings based on authoritative medical knowledge sources, and (ii) retrieving relevant clinical guidelines in response to patient-reported symptoms to assist clinicians with differential diagnostic considerations rather than definitive diagnoses. To mitigate safety risks, the agent is explicitly constrained to avoid diagnostic claims and instead emphasizes guideline-based recommendations and referral suggestions when appropriate. The agent integrates structured medical knowledge retrieval with natural language interaction, enabling users to obtain context-aware, interpretable and clinically relevant responses. By grounding outputs in curated medical references and enforcing non-diagnostic constraints, the system aims to reduce hallucinations and enhance reliability in medical consultations. This work highlights the potential of retrieval-augmented medical intelligent agents as supportive tools for clinical decision support, medical education, and patient-facing health information services, while underscoring the importance of safety, transparency, and scope limitation in medical AI deployment.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations
Authors
Institutions
- Beijing International Studies University (CN)
- Xi'an Jiaotong-Liverpool University (CN)
- Xi'an Jiaotong University (CN)
- Shandong University of Finance and Economics (CN)
- Wuhan University of Science and Technology (CN)
- Dongbei University of Finance and Economics (CN)
- Zhuhai Institute of Advanced Technology (CN)
- Beijing Language and Culture University (CN)