This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Validation of Whole Slide Imaging for Primary Diagnosis in Surgical Pathology
Citations: 156
Authors: 6
Year: 2013
Abstract
CONTEXT: High-resolution scanning technology provides an opportunity for pathologists to make diagnoses directly from whole slide images (WSIs), but few studies have attempted to validate the diagnoses so obtained.

OBJECTIVE: To compare WSI versus microscope slide diagnoses of previously interpreted cases after a 1-year delayed re-review ("wash-out") period.

DESIGN: An a priori power study estimated that 450 cases might be needed to demonstrate noninferiority, based on a null hypothesis: "The true difference in major discrepancies between WSI and microscope slide review is greater than 4%." Slides of consecutive cases interpreted by 2 pathologists 1 year prior were retrieved from files, and alternate cases were scanned at original magnification of ×20. Each pathologist reviewed his or her cases using either a microscope or an imaging application. Independent pathologists identified and classified discrepancies; an independent statistician calculated major and minor discrepancy rates for both WSI and microscope slide review of the previously interpreted cases.

RESULTS: The 607 cases reviewed reflected the subspecialty interests of the 2 pathologists. Study limitations include the lack of cytopathology, hematopathology, or lymphoid cases; the case mix was not enriched with difficult cases; and both pathologists had interpreted several hundred WSI cases before the study to minimize the learning curve. The major and minor discrepancy rates for WSI were 1.65% and 2.31%, whereas rates for microscope slide reviews were 0.99% and 4.93%.

CONCLUSIONS: Based on our assumptions and study design, diagnostic review by WSI was not inferior to microscope slide review (P < .001).
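To illustrate the noninferiority logic described in the design, here is a minimal sketch of a one-sided test on two proportions using the reported major discrepancy rates and the 4% margin. The per-arm case counts (set to ~300 each, since alternate cases of the 607 total were scanned) and the normal-approximation method are assumptions for illustration; the abstract does not state the statistician's exact procedure.

```python
import math

# Reported major discrepancy rates (from the abstract)
p_wsi = 0.0165      # WSI review
p_micro = 0.0099    # microscope slide review
margin = 0.04       # noninferiority margin from the null hypothesis

# Assumed: roughly half of the 607 cases in each arm (hypothetical split)
n_wsi = n_micro = 300

# H0: p_wsi - p_micro >= margin (WSI is inferior by at least 4%).
# A sufficiently negative z rejects H0, i.e., supports noninferiority.
diff = p_wsi - p_micro
se = math.sqrt(p_wsi * (1 - p_wsi) / n_wsi
               + p_micro * (1 - p_micro) / n_micro)
z = (diff - margin) / se

# One-sided p-value via the standard normal CDF
p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"z = {z:.2f}, one-sided p = {p_value:.1e}")
```

Under these assumed counts the observed 0.66-point difference sits far below the 4% margin, which is consistent with the paper's P < .001 conclusion.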
Related works
A survey on deep learning in medical image analysis
2017 · 13,999 citations
pROC: an open-source package for R and S+ to analyze and compare ROC curves
2011 · 13,798 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,518 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 12,135 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,425 citations