This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Processing multi-expert annotations in digital pathology: a study of the Gleason 2019 challenge
Citations: 1 · Authors: 3 · Year: 2021
Abstract
Deep learning algorithms rely on large amounts of annotations for learning and testing. In digital pathology, a ground truth is rarely available, and many tasks show large inter-expert disagreement. Using the Gleason2019 dataset, we analyse how the choices we make in getting the ground truth from multiple experts may affect the results and the conclusions we could make from challenges and benchmarks. We show that using undocumented consensus methods, as is often done, reduces our ability to properly analyse challenge results. We also show that taking into account each expert’s annotations enriches discussions on results and is more in line with the clinical reality and complexity of the application.
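The abstract contrasts evaluating against a single fused consensus with evaluating against each expert separately. A minimal sketch of that distinction, using hypothetical per-pixel grade maps and a plain majority vote (one of several possible consensus methods; the arrays and the `majority_vote` helper are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical per-pixel annotation maps from three experts (labels 3 or 4,
# e.g. Gleason grades). Shapes and values are illustrative only.
rng = np.random.default_rng(0)
expert_maps = np.stack([rng.integers(3, 5, size=(4, 4)) for _ in range(3)])

def majority_vote(maps):
    """Most frequent label per pixel across the expert axis (ties -> lower label)."""
    flat = maps.reshape(maps.shape[0], -1)
    consensus = np.array([np.bincount(col).argmax() for col in flat.T])
    return consensus.reshape(maps.shape[1:])

consensus = majority_vote(expert_maps)

# Evaluating only against the consensus hides inter-expert disagreement;
# scoring against each expert separately keeps it visible.
prediction = consensus  # stand-in for a model's output
per_expert_agreement = [(prediction == m).mean() for m in expert_maps]
```

Reporting `per_expert_agreement` alongside (or instead of) a single consensus score is what the abstract argues keeps the analysis closer to clinical reality.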
Similar works
A survey on deep learning in medical image analysis
2017 · 13,911 citations
pROC: an open-source package for R and S+ to analyze and compare ROC curves
2011 · 13,762 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,458 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 12,052 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,387 citations