This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification
Citations: 0
Authors: 3
Year: 2026
Abstract
As artificial intelligence systems move toward clinical deployment, ensuring reliable prediction behavior is fundamental for safety-critical decision-making tasks. One proposed safeguard is selective prediction, where models can defer uncertain predictions to human experts for review. In this work, we empirically evaluate the reliability of uncertainty-based selective prediction in multilabel clinical condition classification using multimodal ICU data. Across a range of state-of-the-art unimodal and multimodal models, we find that selective prediction can substantially degrade performance despite strong standard evaluation metrics. This failure is driven by severe class-dependent miscalibration, whereby models assign high uncertainty to correct predictions and low uncertainty to incorrect ones, particularly for underrepresented clinical conditions. Our results show that commonly used aggregate metrics can obscure these effects, limiting their ability to assess selective prediction behavior in this setting. Taken together, our findings characterize a task-specific failure mode of selective prediction in multimodal clinical condition classification and highlight the need for calibration-aware evaluation to provide strong guarantees of safety and robustness in clinical AI.
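The selective prediction mechanism described in the abstract can be sketched as follows: predictions whose confidence falls below a threshold are deferred to a human expert. This is a minimal illustrative sketch; the function name, threshold value, and example data are assumptions, not taken from the paper.

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Return per-sample decisions: a predicted class index, or None (deferred).

    probs: array of shape (n_samples, n_classes) with class probabilities.
    threshold: minimum confidence required to emit a prediction (assumed value).
    """
    confidence = probs.max(axis=1)   # model confidence per sample
    labels = probs.argmax(axis=1)    # most likely class per sample
    # Defer any sample whose confidence is below the threshold.
    return [int(label) if conf >= threshold else None
            for label, conf in zip(labels, confidence)]

# Illustrative probabilities for a binary task.
probs = np.array([
    [0.95, 0.05],   # confident -> predict class 0
    [0.55, 0.45],   # uncertain -> defer to a human expert
    [0.10, 0.90],   # confident -> predict class 1
])
print(selective_predict(probs))  # [0, None, 1]
```

The failure mode the paper reports would appear here as miscalibrated `probs`: the model returning high confidence on wrong predictions (which then bypass deferral) and low confidence on correct ones (which are deferred unnecessarily).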
Related Works
"Why Should I Trust You?"
2016 · 14,366 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,716 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,254 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,678 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,430 citations