This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The CrowdGleason dataset: Learning the Gleason grade from crowds and experts
Citations: 4
Authors: 7
Year: 2024
Abstract
BACKGROUND: Currently, prostate cancer (PCa) diagnosis relies on the human analysis of prostate biopsy Whole Slide Images (WSIs) using the Gleason score. Since this process is error-prone and time-consuming, recent advances in machine learning have promoted the use of automated systems to assist pathologists. Unfortunately, labeled datasets for training and validation are scarce due to the need for expert pathologists to provide ground-truth labels. METHODS: This work introduces a new prostate histopathological dataset named CrowdGleason, which consists of 19,077 patches from 1045 WSIs with various Gleason grades. The dataset was annotated using a crowdsourcing protocol involving seven pathologists-in-training to distribute the labeling effort. To provide a baseline analysis, two crowdsourcing methods based on Gaussian Processes (GPs) were evaluated for Gleason grade prediction: SVGPCR, which learns a model from the CrowdGleason dataset, and SVGPMIX, which combines data from the public dataset SICAPv2 and the CrowdGleason dataset. The performance of these methods was compared with other crowdsourcing and expert label-based methods through comprehensive experiments. RESULTS: The results demonstrate that our GP-based crowdsourcing approach outperforms other methods for aggregating crowdsourced labels (κ=0.7048±0.0207 for SVGPCR vs. κ=0.6576±0.0086 for SVGP with majority voting). SVGPCR trained with crowdsourced labels performs better than a GP trained with expert labels from SICAPv2 (κ=0.6583±0.0220) and outperforms most individual pathologists-in-training (mean κ=0.5432). Additionally, SVGPMIX trained with a combination of SICAPv2 and CrowdGleason achieves the highest performance on both datasets (κ=0.7814±0.0083 and κ=0.7276±0.0260). CONCLUSION: The experiments show that the CrowdGleason dataset can be successfully used for training and validating supervised and crowdsourcing methods.
Furthermore, the crowdsourcing methods trained on this dataset obtain competitive results against those using expert labels. Interestingly, the combination of expert and non-expert labels opens the door to massive labeling efforts that incorporate both expert and non-expert pathologist annotators.
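The abstract compares the GP-based crowdsourcing model against majority voting for aggregating the labels produced by the seven pathologists-in-training. A minimal sketch of the majority-voting baseline is shown below; the function name, patch identifiers, and grade labels are hypothetical and only illustrate how per-patch votes collapse into a single label.

```python
from collections import Counter

def majority_vote(crowd_labels):
    """Aggregate per-patch crowd labels by majority vote.

    crowd_labels: dict mapping patch id -> list of labels
    assigned by different annotators (e.g. Gleason grades).
    Returns a dict mapping patch id -> most frequent label
    (ties resolved by first-seen order, as Counter preserves
    insertion order for equal counts).
    """
    return {patch: Counter(labels).most_common(1)[0][0]
            for patch, labels in crowd_labels.items()}

# Hypothetical example: three annotators grade two patches.
votes = {
    "patch_001": ["G3", "G3", "G4"],
    "patch_002": ["G5", "G4", "G5"],
}
print(majority_vote(votes))  # {'patch_001': 'G3', 'patch_002': 'G5'}
```

Unlike this simple baseline, SVGPCR models each annotator's reliability jointly with the classifier, which the reported results suggest yields a higher κ than aggregating votes up front.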
Related works
A survey on deep learning in medical image analysis
2017 · 14,019 citations
pROC: an open-source package for R and S+ to analyze and compare ROC curves
2011 · 13,808 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,528 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 12,149 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,437 citations