This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Reverse Verification of Goodness of Classification of MRI Images by Clinical Experts
Citations: 2
Authors: 3
Year: 2023
Abstract
Radiology offers a presumptive diagnosis. The etiology of radiological errors is prevalent, recurrent, and multi-factorial. Pseudo-diagnostic conclusions can arise from varying factors such as poor technique, failures of visual perception, lack of knowledge, and misjudgments. These retrospective and interpretive errors can influence and alter the Ground Truth (GT) of Magnetic Resonance (MR) imaging, which in turn results in faulty class labeling. Wrong class labels can lead to erroneous training and illogical classification outcomes for Computer Aided Diagnosis (CAD) systems. This work aims at verifying and authenticating the accuracy and exactness of the GT of biomedical datasets that are extensively used in binary classification frameworks. Generally, such datasets are labeled by only one radiologist. Our article adopts a hypothetical approach to generate a few faulty iterations. An iteration here denotes the simulation of a faulty radiologist's perspective in MR image labeling. To achieve this, we try to simulate radiologists who are subject to human error while making decisions regarding the class labels. In this context, we swap the class labels randomly and force them to be faulty. The experiments are carried out on several iterations (with varying numbers of brain images) randomly created from the brain MR datasets. The experiments use two benchmark datasets, DS-75 and DS-160, collected from the Harvard Medical School website, and one larger, self-collected dataset, NITR-DHH. To validate our work, average classification parameter values of the faulty iterations are compared with those of the original dataset. It is presumed that the presented approach provides a potential solution to verify the genuineness and reliability of the GT of MR datasets. This approach can be utilized as a standard technique to validate the correctness of any biomedical dataset.
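The random label swapping described in the abstract can be sketched as follows. This is a minimal illustration, not code from the paper; the function name, the `swap_fraction` parameter, and the 0/1 label convention (e.g., 0 = healthy, 1 = pathological) are assumptions.

```python
import random

def simulate_faulty_labels(labels, swap_fraction, seed=0):
    """Simulate a faulty radiologist by randomly flipping a fraction
    of binary class labels (assumed sketch, not the authors' code)."""
    rng = random.Random(seed)  # seeded for reproducible iterations
    flipped = list(labels)
    n_swap = int(len(flipped) * swap_fraction)
    # Pick n_swap distinct image indices and invert their labels.
    for i in rng.sample(range(len(flipped)), n_swap):
        flipped[i] = 1 - flipped[i]
    return flipped
```

Each call with a different seed would yield one "faulty iteration"; a classifier trained on such labels can then be compared against one trained on the original GT, as the abstract outlines.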
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations