This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Detecting and Remediating Harmful Data Shifts for the Responsible Deployment of Clinical AI Models
Citations: 23
Authors: 11
Year: 2025
Abstract
In this prognostic study, a proactive, label-agnostic monitoring pipeline detected harmful data shifts for a clinical AI system predicting in-hospital mortality. Transfer learning and drift-triggered continual learning strategies mitigated performance degradation, maintaining model performance across health care settings. These findings suggest that the approach used here may ensure the robust and equitable deployment of clinical AI models. Future research should explore the generalizability of this framework across diverse clinical domains, data modalities, and longer deployment periods to further validate its effectiveness.
Related Works
"Why Should I Trust You?"
2016 · 14,307 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,679 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,411 citations