This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Explainable Machine Learning Methods for Person-Based Prediction in Simulated and Real Datasets: Methodological Research
Citations: 0 · Authors: 2 · Year: 2023
Abstract
Objective: The aim of this study is to build person-based prediction models for simulated and real datasets separately with the SHapley Additive exPlanations (SHAP) method, and to demonstrate whether the obtained person-based models are more valid and applicable than overall models. Material and Methods: Simulated datasets encompassed 13 independent and 1 dependent variable, across sample sizes of 250, 500, and 1,000, while the real dataset contained 826 patient records with 11 variables. The "bindata", "shapper", and "RWeka" packages in the R programming language (version 4.1.2) were used. Extreme Gradient Boosting, Bagging, Random Forest, Support Vector Machine, and Logistic Regression were used as classification methods. The assessment employed 10-fold cross-validation, repeated 1,000 times. Results: Accuracy values of the overall model in the datasets with 250, 500, and 1,000 samples were found to be 0.856, 0.886, and 0.891, respectively. In these samples, the person-based accuracy values were found to be 0.886, 0.964, and 0.962 for those with "yes" prediction results, and 0.930, 0.961, and 0.961 for those with "no" prediction results, respectively. In the real dataset, the accuracy of the overall model was found to be 0.736. The person-based accuracy values were found to be 0.783 for patients predicted to have stroke, and 0.868 for patients predicted not to have stroke. Conclusion: Person-based predictions consistently outperformed the overall model across datasets, reflecting the heterogeneity among individuals in real life. Considering this diversity, person-based modeling is expected to produce a more realistic and clinically applicable model.
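The study's person-based approach rests on Shapley values, which attribute a single individual's prediction to each feature. As an illustration only (the study itself used SHAP via R packages), the sketch below computes exact Shapley values for one person by enumerating feature coalitions, with absent features replaced by a baseline value; the toy linear risk score, its weights, and the baseline are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for one person's prediction f(x),
    enumerating all coalitions (feasible only for a few features).
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear risk score: for a linear model the exact Shapley
# value of feature i reduces to w_i * (x_i - baseline_i).
w = [0.5, -0.2, 0.3]
risk = lambda v: sum(wi * vi for wi, vi in zip(w, v))
print(shapley_values(risk, x=[2.0, 1.0, 4.0], baseline=[0.0, 0.0, 0.0]))
```

By the efficiency property, the attributions sum to f(x) minus f(baseline), so each person's prediction is fully decomposed across features, which is what makes the per-person interpretation possible.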
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations