This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bootstrapping Large Language Models as Clinical Predictors for Multimodal Irregular Time Series
Citations: 0
Authors: 3
Year: 2025
Abstract
Recently, leveraging the medical knowledge embedded in pre-trained large language models (LLMs) to enhance clinical prediction (e.g., patient deterioration forecasting) using multimodal irregular clinical time series (MICTS) has become a promising yet challenging research direction. In this study, we identify two major challenges currently facing this direction: 1) LLMs can miss most of the temporal dependencies among MICTS tokens if MICTS data are fed to them directly in textual format. 2) Deploying an upstream encoder for temporal modeling of MICTS produces MICTS representations that are neither understandable nor applicable to LLMs. To tackle these challenges, we 1) decouple temporal modeling from LLM adaptation; 2) propose Snow-CTE, a single-stream encoder that effectively models temporal dependencies (particularly long-range ones) and mitigates feature heterogeneity in MICTS; and 3) propose CT-QFormer to "translate" the sequential MICTS representations output by Snow-CTE into real-time clinical condition information understandable to LLMs. Experimental results on benchmark clinical prediction tasks show that Snow-CTE + CT-QFormer outperforms SOTA baselines in bootstrapping LLMs to perform clinical prediction using MICTS.
Related Works
"Why Should I Trust You?"
2016 · 14,210 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,586 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,382 citations