This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Privacy-Preserving Clinical Decision Support for Emergency Triage Using LLMs: System Architecture and Real-World Evaluation
Citations: 4
Authors: 5
Year: 2025
Abstract
This study presents a next-generation architecture for Clinical Decision Support Systems (CDSS) focused on emergency triage. By integrating Large Language Models (LLMs), Federated Learning (FL), and low-latency streaming analytics within a modular, privacy-preserving framework, the system addresses key deployment challenges in high-stakes clinical settings. Unlike traditional models, the architecture processes both structured (vitals, labs) and unstructured (clinical notes) data to enable context-aware reasoning with clinically acceptable latency at the point of care. It leverages big data infrastructure for large-scale EHR management and incorporates digital twin concepts for live patient monitoring. Federated training allows institutions to collaboratively improve models without sharing raw data, ensuring compliance with GDPR, HIPAA, and FAIR principles. Privacy is further protected through differential privacy, secure aggregation, and inference isolation. We evaluate the system through two studies: (1) a benchmark of 750+ USMLE-style questions validating the medical reasoning of fine-tuned LLMs; and (2) a real-world case study (n = 132, 75.8% first-pass agreement) using de-identified MIMIC-III data to assess triage accuracy and responsiveness. The system demonstrated clinically acceptable latency and promising alignment with expert judgment on reviewed cases. The infectious disease triage case demonstrates low-latency recognition of sepsis-like presentations in the ED. This work offers a scalable, audit-compliant, and clinician-validated blueprint for CDSS, enabling low-latency triage and extensibility across specialties.
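The abstract describes federated training in which institutions share only privatized model updates, protected by differential privacy and secure aggregation. As a minimal sketch of that idea (not the paper's actual implementation; the clipping norm, noise scale, and function names below are illustrative assumptions), each client's update can be norm-clipped to bound sensitivity and perturbed with Gaussian noise before the server averages it:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's model update to bound its sensitivity, then add
    Gaussian noise -- the core mechanism behind differentially private
    federated learning. Raw patient data never leaves the institution."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_average(client_updates, **dp_kwargs):
    """Server-side aggregation: average the privatized updates from all
    participating institutions into a single global model delta."""
    privatized = [privatize_update(u, **dp_kwargs) for u in client_updates]
    return np.mean(privatized, axis=0)

# Three hypothetical hospitals each contribute a local model update.
rng = np.random.default_rng(42)
updates = [rng.normal(size=8) for _ in range(3)]
global_delta = federated_average(updates, clip_norm=1.0, noise_std=0.01,
                                 rng=np.random.default_rng(7))
```

In a real deployment the noise scale would be calibrated to a target privacy budget, and secure aggregation would additionally hide individual updates from the server itself; this sketch shows only the data flow.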
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations