This is an overview page with metadata for this scientific article. The full article is available from the publisher.
xMagNet: Dynamic magnification-aware fusion with uncertainty quantification for robust breast cancer histopathology
Citations: 1
Authors: 6
Year: 2026
Abstract
Histopathology image analysis faces challenges due to magnification variability, limiting robust tumor categorization. Existing deep learning models prioritize accuracy but neglect explainability, ethical biases, and real-world deployment. This study proposes xMagNet, a hybrid Transformer-Convolutional Neural Network (CNN) framework that synergizes technical rigor, clinical transparency, and ethical fairness for multi-magnification breast cancer diagnostics. xMagNet integrates a hybrid encoder combining Vision Transformers (ViT) for global tissue modeling at low magnifications (4x-10x) and Separable Dilation Convolutions (SDC) for localized nuclear texture extraction at high magnifications (20x-40x). Magnification-Aware Gating (MAG) dynamically balances ViT and SDC features via temperature-scaled sigmoid activation. A multi-task decoder employs Thresholded Grad-CAM (top 10% gradients) for explainable decision-making and Point-wise Reformation Blocks (PRB) for boundary preservation. Federated learning (FL) with momentum-enhanced aggregation and Sinkhorn divergence regularization ensures scanner- and stain-invariant training across six institutions (Hamamatsu/Leica, H&E/IHC). Uncertainty-quantified predictions (Monte Carlo dropout) and adversarial debiasing mitigate demographic leakage. xMagNet achieves a 97.8% F1-score for tumor segmentation on Camelyon16 and 93% Gleason AUC on PANDA, with 96.5% pathologist concordance via Grad-CAM. At 40x magnification, it detects micro-metastases with 94% sensitivity (vs. UNet++'s 89% and ResUNet's 91%). Computational efficiency includes sub-second inference (0.42 sec/slide) and 2.3x faster convergence than HoVer-Net. Ethical auditing reveals 3% fairness gaps and 73% domain shift reduction (MMD: 0.12 vs. FedAvg's 0.45), validated on 15,000 whole-slide images (WSIs) from the TCGA-BRCA, Camelyon16, and PANDA datasets.
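The temperature-scaled sigmoid gating that MAG uses to blend ViT and SDC features can be sketched as below. This is a minimal illustration, not the paper's implementation: the `mag_gate` function name, the pivot magnification of 15x, and the temperature value are hypothetical assumptions chosen so that low magnifications favor the ViT branch and high magnifications favor the SDC branch.

```python
import numpy as np

def mag_gate(vit_feat, sdc_feat, magnification, temperature=2.0, pivot=15.0):
    """Blend global (ViT) and local (SDC) features with a scalar gate.

    A temperature-scaled sigmoid of the magnification level produces a
    weight g in (0, 1): g -> 0 at low magnification (ViT dominates),
    g -> 1 at high magnification (SDC dominates). Pivot and temperature
    are illustrative, not values from the paper.
    """
    g = 1.0 / (1.0 + np.exp(-(magnification - pivot) / temperature))
    return (1.0 - g) * vit_feat + g * sdc_feat

# At 4x the ViT features dominate; at 40x the SDC features dominate.
fused_low = mag_gate(np.ones(3), np.zeros(3), magnification=4.0)
fused_high = mag_gate(np.ones(3), np.zeros(3), magnification=40.0)
```

In a real model the gate would typically be a learned function of the feature maps rather than of the raw magnification value alone; the temperature controls how sharply the blend switches between branches.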
xMagNet bridges critical gaps in multi-magnification histopathology by harmonizing technical robustness (MAG fusion, bounded gradients) with clinical utility (HER2+/ER+ subtyping, Gleason grading) and ethical scalability. By achieving high accuracy, rapid inference, and equitable deployment, it advances AI-driven diagnostics toward trustworthy, deployable systems for breast, prostate, and metastatic cancer imaging. Code available at: xMagNet.
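The Monte Carlo dropout used for uncertainty-quantified predictions can be illustrated with a toy single-neuron classifier. This is a generic sketch of the technique, not xMagNet's code; the function name and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_samples=100, p_drop=0.2):
    """Monte Carlo dropout for a toy logistic model.

    Dropout stays active at inference time; each stochastic forward
    pass samples a different weight mask. The mean over passes is the
    prediction, and the standard deviation estimates its uncertainty.
    """
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > p_drop
        w = weights * mask / (1.0 - p_drop)  # inverted-dropout scaling
        logit = float(x @ w)
        preds.append(1.0 / (1.0 + np.exp(-logit)))
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean_pred, uncertainty = mc_dropout_predict(np.ones(4), np.ones(4))
```

In a segmentation network the same idea applies per pixel: high per-pixel standard deviation flags regions a pathologist should review, which is how uncertainty quantification supports clinical triage.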
Related Works
A survey on deep learning in medical image analysis
2017 · 13,483 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,116 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,718 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,074 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 7,969 citations