OpenAlex · Updated hourly · Last updated: 25.03.2026, 03:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Enhancing privacy-preserving deployable large language models for perioperative complication detection: a targeted strategy with LoRA fine-tuning

2025 · 0 citations · npj Digital Medicine · Open Access

Citations: 0 · Authors: 10 · Year: 2025

Abstract

Perioperative complications are a major global concern, yet manual detection suffers from 27% under‑reporting and frequent misclassification. Clinical LLM deployment is constrained by data sovereignty, compute cost, and the limited performance of locally deployable models. We show that targeted prompt engineering plus Low‑Rank Adaptation (LoRA) fine‑tuning converts smaller open‑source LLMs into expert‑level diagnostic tools. In a dual‑center validation, we built a framework that simultaneously identifies 22 complications and grades their severity. State‑of‑the‑art models outperformed human experts; Chain‑of‑Thought prompting significantly improved general models (p < 0.001) while preserving reasoning models' performance. Across documentation length quartiles, AI models maintained F1 > 0.64, whereas human performance declined from 0.73 to 0.45, demonstrating superior robustness to documentation complexity. Our targeted strategy, which decomposes detection into focused single‑complication assessments, improved small models, with further gains from LoRA. On external validation (Center 2), the optimized 4B model's micro‑F1 rose from 0.28 to 0.64, approaching human experts (F1 = 0.69), driven by the targeted strategy (ΔF1 = 0.256, 95% CI [0.181, 0.336]) and LoRA (ΔF1 = 0.103, 95% CI [0.023, 0.186]). Concurrently, the 8B model surpassed human experts (F1 > 0.70). Optimized small models enable expert‑level accuracy with local deployment and preserved data sovereignty, offering a practical path for resource‑limited healthcare.
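The LoRA technique the abstract credits with part of the gain (ΔF1 = 0.103) keeps the base weight matrix W frozen and trains only two small matrices, B (d_out × r) and A (r × d_in), so the effective weight becomes W + (α/r)·B·A. A minimal sketch in plain Python of that core update follows; the matrix sizes, rank, and scaling value here are illustrative assumptions, not taken from the paper:

```python
def matmul(X, Y):
    # Naive matrix multiply: X is m×k, Y is k×n → result is m×n.
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * (B @ A), the LoRA-adapted weight.

    W: d_out×d_in frozen base weight
    B: d_out×r and A: r×d_in are the small trainable matrices;
    r (the rank) is inferred from A. alpha is the LoRA scaling factor.
    """
    r = len(A)
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update, d_out×d_in
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2×2 identity base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2×1
A = [[0.5, 0.5]]     # 1×2
W_adapted = lora_effective_weight(W, A, B, alpha=1.0)
# → [[1.5, 0.5], [1.0, 2.0]]
```

Because only B and A are trained (r·(d_in + d_out) parameters instead of d_in·d_out), fine-tuning a small local model stays cheap, which is what makes the on-premise deployment described above practical.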
