This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Hazards of data leakage in machine learning: a study on classification of breast cancer using deep neural networks
2020 · 36 citations · 4 authors
Abstract
With the renewed interest in developing machine learning methods for medical imaging using deep-learning approaches, it is essential to reexamine data leakage. In this study, we simulated data leakage in the form of feature leakage, where a classifier was trained on the training set but the feature selection was influenced by the performance on the validation set. A pre-trained deep-learning convolutional neural network (DCNN) without fine-tuning was used as a feature extractor for malignant and benign mass classification in mammography. A feature selection algorithm was trained in the wrapper mode with a cost function tuned to follow the performance metric on the validation set. A linear discriminant analysis (LDA) classifier was trained to classify masses on mammographic patches. Mammograms from 1,882 patient cases with 4,577 unique patches were partitioned by patient into 3,222 patches for training and 508 for validation, while 847 were sequestered as an unseen independent test set to evaluate the generalization error. The effects of finite sample size on data leakage were studied by varying the training and validation set sizes from 10% to 100% of the available sets. The area under the receiver operating characteristic curve (AUC) was used as the performance metric. The results show that the performance on the validation set could be overestimated, reaching AUCs of 0.75 to 0.99 for various sample sizes, whereas the independent test performance could realistically only reach an AUC of 0.72. The analysis indicates that deep learning can risk a high inflation in performance, and proper housekeeping rules should be followed when designing and developing deep learning methods in medical imaging.
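The feature-leakage mechanism described in the abstract can be illustrated with a minimal sketch (not the authors' code): a greedy wrapper-mode feature selector is tuned to maximize validation-set AUC, so the validation estimate becomes optimistically biased relative to an unseen test set. Here the data are pure noise and the LDA classifier and greedy forward selection are stand-ins for the paper's pipeline, so any apparent validation performance is attributable to leakage alone.

```python
# Hedged sketch of feature leakage via wrapper-mode feature selection.
# Pure-noise data: any validation "performance" is leakage, not signal.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_train, n_val, n_test, n_feat = 200, 100, 300, 500  # small samples, many features

X = rng.normal(size=(n_train + n_val + n_test, n_feat))
y = rng.integers(0, 2, size=n_train + n_val + n_test)
X_tr, X_va, X_te = X[:n_train], X[n_train:n_train + n_val], X[n_train + n_val:]
y_tr, y_va, y_te = y[:n_train], y[n_train:n_train + n_val], y[n_train + n_val:]

def val_auc(feats):
    """Train on the training set, score AUC on the validation set."""
    clf = LinearDiscriminantAnalysis().fit(X_tr[:, feats], y_tr)
    return roc_auc_score(y_va, clf.decision_function(X_va[:, feats]))

# Greedy forward selection whose cost function follows validation AUC:
# this is the leak -- the validation set steers model selection.
selected = []
for _ in range(10):
    best_f, best_a = None, -1.0
    for f in range(n_feat):
        if f in selected:
            continue
        a = val_auc(selected + [f])
        if a > best_a:
            best_f, best_a = f, a
    selected.append(best_f)

clf = LinearDiscriminantAnalysis().fit(X_tr[:, selected], y_tr)
auc_val = roc_auc_score(y_va, clf.decision_function(X_va[:, selected]))
auc_test = roc_auc_score(y_te, clf.decision_function(X_te[:, selected]))
print(f"validation AUC: {auc_val:.2f}, independent test AUC: {auc_test:.2f}")
```

On noise data the validation AUC climbs well above chance while the independent test AUC stays near 0.5, mirroring the inflated validation estimates (up to 0.99 vs. 0.72 on the test set) reported in the study.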
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations