OpenAlex · Updated hourly · Last updated: 05.05.2026, 04:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

An Interpretable Vision Model Integrating Radiomics for Precision Oncology Diagnostics Using Multi-Modal Medical Imaging

2026 · 0 citations · International Journal of Research and Innovation in Social Science · Open Access
Open full text at the publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

Deep learning models remain underutilized in clinical oncology despite their potential in cancer diagnosis. Current methods rely either on traditional radiomics features, whose representational power is limited, or on opaque deep neural networks that cannot provide explanations useful to clinicians. This study addresses the interpretability-performance trade-off by introducing a novel hybrid architecture that combines convolutional neural networks with radiomics biomarkers through attention-based fusion. Our framework uses multi-modal imaging data (CT, MRI, and PET) from 2,847 patients spanning five cancer types. It operates through a two-stream architecture that enforces a correlation-based constraint and sparsity regularization between the deep learning and radiomics pathways. The model employs learned gating for automatic feature selection and cross-modal attention as an adaptive weighting mechanism, producing both accurate predictions and human-comprehensible explanations. Experimental results show improved performance, with an area under the ROC curve of 0.947, representing 8.4% and 2.6% gains over pure radiomics methods and standard deep learning models, respectively. As validated by five expert radiologists, the generated explanations received high relevance ratings (78.4% rated 4-5 on a 5-point scale) and showed substantial inter-rater agreement (α = 0.68). The study's contributions include a learnable architecture with interpretability constraints built into its objective, quantified attention weights that measure individual feature contributions consistent with radiological intuition, consistent detection across multiple cancer types demonstrating generalizability, and evidence that interpretability gains do not compromise predictive accuracy. This study thereby advances reliable AI in oncology by offering an empirical roadmap for engineering high-performance diagnostic systems that meet clinical accountability and transparency standards.
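The two-stream fusion described in the abstract (gated radiomics features, cross-modal attention, a sparsity penalty) can be pictured with a short sketch. The PyTorch snippet below is only an illustrative assumption of how such a fusion head might be wired, not the authors' implementation; the class name, dimensions, and gating formulation are invented for this example, and the correlation constraint between the pathways is omitted.

import torch
import torch.nn as nn

class TwoStreamFusionHead(nn.Module):
    """Illustrative sketch: fuse pooled CNN features with radiomics features."""
    def __init__(self, img_dim=512, rad_dim=107, hidden=128, n_classes=2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        # One learnable embedding per radiomics feature, scaled by its value,
        # so every feature becomes a token the image stream can attend to.
        self.rad_emb = nn.Parameter(torch.randn(rad_dim, hidden) * 0.02)
        # Learnable gate over radiomics features; an L1 term encourages sparsity.
        self.gate = nn.Parameter(torch.zeros(rad_dim))
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feat, rad_feat):
        # img_feat: (B, img_dim) pooled CNN features; rad_feat: (B, rad_dim) radiomics values.
        gate = torch.sigmoid(self.gate)                              # per-feature weights in (0, 1)
        rad_tokens = (rad_feat * gate).unsqueeze(-1) * self.rad_emb  # (B, rad_dim, hidden)
        q = self.img_proj(img_feat).unsqueeze(1)                     # (B, 1, hidden) query token
        fused, attn_w = self.attn(q, rad_tokens, rad_tokens)         # attn_w: (B, 1, rad_dim)
        logits = self.classifier(torch.cat([q.squeeze(1), fused.squeeze(1)], dim=-1))
        sparsity_loss = gate.abs().mean()                            # add to the task loss
        return logits, attn_w, sparsity_loss

In this reading, the per-feature attention weights and gate values are the kind of feature-level signal the abstract calls a human-comprehensible explanation: a radiomics feature that carries both a high gate value and a high attention weight is flagged as driving the prediction.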

Related works

Authors

Institutions

Topics

Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education · AI in cancer detection