This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
C-RLM: Schema-Enforced Recursive Synthesis for Auditable, Long-Context Clinical Documentation
Citations: 0
Authors: 1
Year: 2026
Abstract
Clinical decision-making for multi-morbid patients requires synthesizing evidence from lengthy, fragmented records—a task that exposes the limitations of standard Retrieval-Augmented Generation (RAG) and long-context Large Language Models (LLMs), which often lose critical information or lack auditability. We introduce the Clinical-Recursive Language Model (C-RLM), a framework that reframes evidence synthesis as a structured, recursive compilation process rather than a single-pass retrieval task. C-RLM iteratively builds a validated knowledge state using schema-enforced transitions, a Robust Nomenclature Resilience (RNR) layer for synonym consolidation, and a TraceTracker system for deterministic provenance. Evaluated on 100 complex Lupus Nephritis case reports (∼24.5k tokens each), C-RLM achieves 100% structural consistency and 99% regimen recall (F1), outperforming a strong Flat RAG baseline. While introducing a 2.7× computational overhead, C-RLM delivers a crucial “Synthesis Dividend”: recovery of clinically critical entities fragmented across distant text spans, with full auditability back to source text offsets. Our results demonstrate that for safety-critical clinical applications, the trade-off in latency is justified by gains in reliability, auditability, and support for human-in-the-loop governance.
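The abstract describes C-RLM's core loop: recursively folding record fragments into a validated knowledge state, with synonym consolidation (the RNR layer) and character-offset provenance (TraceTracker). As a rough illustration of that idea, here is a minimal, self-contained Python sketch; the synonym table, class names, and the `recursive_synthesize` function are hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical synonym table standing in for the RNR consolidation layer.
SYNONYMS = {"MMF": "mycophenolate mofetil",
            "mycophenolate": "mycophenolate mofetil"}

@dataclass
class KnowledgeState:
    # canonical regimen name -> list of (start, end) offsets into the source text
    regimens: dict = field(default_factory=dict)

    def add(self, mention: str, start: int, end: int):
        # Schema-enforced transition: reject empty mentions and
        # consolidate to a canonical form before it enters the state.
        canonical = SYNONYMS.get(mention, mention.lower())
        if not canonical:
            raise ValueError("schema violation: empty entity")
        self.regimens.setdefault(canonical, []).append((start, end))

def recursive_synthesize(chunks, state=None, base=0):
    # Recursively fold record chunks into one validated state,
    # tracking absolute offsets for auditability (TraceTracker-style).
    if state is None:
        state = KnowledgeState()
    if not chunks:
        return state
    chunk, rest = chunks[0], chunks[1:]
    for mention in SYNONYMS:
        pos = chunk.find(mention)
        if pos >= 0:
            state.add(mention, base + pos, base + pos + len(mention))
    return recursive_synthesize(rest, state, base + len(chunk))

# Two fragments of a record mentioning the same drug under different names:
state = recursive_synthesize(["started on MMF 2g/day. ",
                              "continued mycophenolate at follow-up."])
print(state.regimens)
# -> {'mycophenolate mofetil': [(11, 14), (33, 46)]}
```

Both surface forms collapse to one canonical entity, and each occurrence remains traceable to its source offsets, which is the property the abstract's "Synthesis Dividend" and auditability claims rest on.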
Related Works
"Why Should I Trust You?"
2016 · 14.307 Zit.
A Comprehensive Survey on Graph Neural Networks
2020 · 8.679 Zit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8.207 Zit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7.607 Zit.
Artificial intelligence in healthcare: past, present and future
2017 · 4.411 Zit.