This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Analyzing the Sources of Error in Visual Search of Whole Slide Images in Pathology
Citations: 0
Authors: 5
Year: 2025
Abstract
Errors are a problem in pathology: false negatives lead to disease going untreated, and false positives lead to unnecessary treatment and the resulting risk to the patient. We used eye tracking to investigate pathologists’ search patterns and behaviors, with the goal of understanding the nature of errors in searching for cancer in whole slide images (WSIs) of lymph nodes. Ten pathologists of varying experience levels diagnosed and annotated a set of 60 lymph node WSIs (45 with metastases and 15 benign) while we recorded their gaze and mouse behaviors. Our pathologists had 100% accuracy on the benign slides; there were no false positives in this data set. The false negative error rate ranged from 17.8% to 73.3% (46.1% avg). Based on eye movement scanpaths, we categorized these errors into “search”, “recognition”, and “decision” errors, following the taxonomy introduced by Kundel et al. (1978). The majority (67.5%) of false negatives can be labeled search errors, defined as cases where pathologists never fixated on a tumor region. Recognition errors, where the eyes landed briefly on or near the malignancy without it being noted by the pathologist, accounted for 11.7%. Decision errors, where a pathologist scrutinized the malignancy but decided it was benign, accounted for 20.4%. For all experience groups except residents, longer viewing time was associated with higher accuracy; curiously, for residents the reverse was true: longer viewing time led to lower accuracy. Additionally, residents made more use of zooming than more experienced groups (avg 34.9 vs. 17.2 zooms) and tended to view at higher magnification (avg 22.64x vs. 15.42x). Compared to more experienced pathologists, residents spent more of their viewing time zooming rather than panning (residents: 22.0% zoom, 10.0% pan; non-residents: 15.2% zoom, 13.4% pan).
Similar Works
A survey on deep learning in medical image analysis
2017 · 13,739 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,327 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,922 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,289 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 8,077 citations