This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Comparison of Prediction Model Performance Updating Protocols: Using a Data-Driven Testing Procedure to Guide Updating.
Citations: 24
Authors: 5
Year: 2019
Abstract
In evolving clinical environments, the accuracy of prediction models deteriorates over time. Guidance on the design of model updating policies is limited, and the impact of different policies on future model performance, and across different model types, remains largely unexplored. We implemented a new data-driven updating strategy based on a nonparametric testing procedure and compared it to two baseline approaches in which models are either never updated or fully refit annually. The test-based strategy generally recommended intermittent recalibration and delivered better-calibrated predictions than either baseline. It also highlighted differences in updating requirements, in both the extent and timing of updates, across logistic regression, L1-regularized logistic regression, random forest, and neural network models. These findings underscore the potential of a data-driven maintenance approach over a "one-size-fits-all" policy for sustaining stable and accurate model performance over time.
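The paper itself specifies the formal procedure; as a rough illustration of the idea only, the sketch below implements one plausible variant of a test-based update decision: at each review point, bootstrap the gain in Brier score obtained from logistic recalibration and recalibrate only when that gain is statistically convincing. The function names (`test_based_update`, `recalibrate`), the Brier-score criterion, the annual review cadence, and the bootstrap test are all assumptions made for illustration, not the authors' exact method.

```python
# Illustrative sketch of a test-based model-updating decision (NOT the
# paper's exact procedure). Assumes predicted probabilities strictly in
# (0, 1) and a binary outcome vector of 0s and 1s.
import numpy as np
from sklearn.linear_model import LogisticRegression

def brier_score(y, p):
    # Mean squared error between outcomes and predicted probabilities.
    return float(np.mean((p - y) ** 2))

def recalibrate(p, y):
    # Logistic recalibration: refit an intercept and slope on the logit
    # scale, leaving the underlying model's risk ordering unchanged.
    logits = np.log(p / (1 - p)).reshape(-1, 1)
    cal = LogisticRegression().fit(logits, y)
    return lambda q: cal.predict_proba(
        np.log(q / (1 - q)).reshape(-1, 1))[:, 1]

def test_based_update(p, y, n_boot=500, alpha=0.05, seed=0):
    # Bootstrap the improvement in Brier score from recalibration on the
    # most recent validation data; recommend recalibrating only when the
    # lower alpha-percentile of the improvement is above zero.
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        pb, yb = p[idx], y[idx]
        cal = recalibrate(pb, yb)
        diffs.append(brier_score(yb, pb) - brier_score(yb, cal(pb)))
    lower = np.percentile(diffs, 100 * alpha)
    return "recalibrate" if lower > 0 else "keep"
```

In this framing, the two baseline strategies correspond to always returning "keep" (never update) and always refitting the full model from scratch (annual refit); a test-based strategy sits between them, triggering maintenance only when the recent data indicate that the current model is miscalibrated.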
Similar works
"Why Should I Trust You?"
2016 · 14,661 citations
Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data
2005 · 10,538 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,915 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,483 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,003 citations