This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Benchmarking AutoML frameworks for disease prediction using medical claims
39 citations · 7 authors · 2022
Abstract
OBJECTIVES: Ascertain and compare the performances of Automated Machine Learning (AutoML) tools on large, highly imbalanced healthcare datasets.

MATERIALS AND METHODS: We generated a large dataset using historical de-identified administrative claims including demographic information and flags for disease codes in four different time windows prior to 2019. We then trained three AutoML tools on this dataset to predict six different disease outcomes in 2019 and evaluated model performances on several metrics.

RESULTS: The AutoML tools showed improvement from the baseline random forest model but did not differ significantly from each other. All models recorded low area under the precision-recall curve and failed to predict true positives while keeping the true negative rate high. Model performance was not directly related to prevalence. We provide a specific use-case to illustrate how to select a threshold that gives the best balance between true and false positive rates, as this is an important consideration in medical applications.

DISCUSSION: Healthcare datasets present several challenges for AutoML tools, including large sample size, high imbalance, and limitations in the available features. Improvements in scalability, combinations of imbalance-learning resampling and ensemble approaches, and curated feature selection are possible next steps to achieve better performance.

CONCLUSION: Among the three AutoML tools explored, none consistently outperforms the rest in terms of predictive performance. The performances of the models in this study suggest that there may be room for improvement in handling medical claims data. Finally, selection of the optimal prediction threshold should be guided by the specific practical application.
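The abstract notes that a prediction threshold should be chosen to balance true and false positive rates. A common heuristic for this (not necessarily the one used in the paper) is Youden's J statistic, J = TPR − FPR, maximized over candidate thresholds. The sketch below, with hypothetical toy scores and labels, illustrates the idea:

```python
# Hedged sketch (not the paper's code): pick a decision threshold that
# balances true and false positive rates via Youden's J statistic
# (J = TPR - FPR), a common heuristic for imbalanced medical data.

def best_threshold(scores, labels):
    """Return (threshold, J) maximizing TPR - FPR over observed scores."""
    pos = sum(labels)               # number of positive (diseased) cases
    neg = len(labels) - pos         # number of negative cases
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):   # each observed score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg     # Youden's J at this threshold
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy example: classifier scores for 7 patients (label 1 = disease).
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1]
t, j = best_threshold(scores, labels)  # here t = 0.4 with J = 0.75
```

In practice the optimal threshold also depends on the relative costs of false positives and false negatives, which the abstract stresses should be guided by the specific clinical application.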
Related work
SMOTE: Synthetic Minority Over-sampling Technique
2002 · 30,475 citations
An introduction to ROC analysis
2005 · 20,903 citations
Mining association rules between sets of items in large databases
1993 · 14,775 citations
pROC: an open-source package for R and S+ to analyze and compare ROC curves
2011 · 13,762 citations
Fast algorithms for mining association rules
1998 · 10,754 citations