OpenAlex · Updated hourly · Last updated: 2026-03-29, 13:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

VR-DiagNet: Medical Volumetric and Radiomic Diagnosis Networks with Interpretable Clinician-like Optimizing Visual Inspection

2024 · 1 citation

Citations: 1
Authors: 9
Year: 2024

Abstract

Interpretable and robust medical diagnoses are essential traits for practicing clinicians. Most computer-augmented diagnostic systems suffer from three major problems: non-interpretability, limited modality analysis, and narrow focus. Existing frameworks either handle multiple modalities to some extent but lack interpretability, or are partially interpretable but offer only limited modality and multifaceted capabilities. Our work integrates all of these aspects in a single framework that exploits the full spectrum of information offered by multiple modalities and facets. We propose our solution via our novel architecture VR-DiagNet, consisting of a planner and a classifier, optimized iteratively and cohesively. VR-DiagNet simulates the perceptual process of clinicians by combining volumetric imaging information with a radiomic feature modality; at the same time, it recreates human thought processes via a customized Monte Carlo Tree Search (MCTS) which constructs a volume-tailored experience tree to identify slices of interest (SoIs) in our multi-slice perception space. We conducted extensive experiments across two diagnostic tasks comprising six public medical volumetric benchmark datasets. Our findings show superior performance, as evidenced by higher accuracy and area under the curve (AUC) metrics, reduced computational overhead, and faster convergence, while conclusively illustrating the value of integrating volumetric and radiomic modalities for this problem setup.
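The abstract describes an MCTS-driven search over slices of a volume to find slices of interest. The paper's volume-tailored experience tree is not specified on this page, so the following is only a minimal illustrative sketch of the underlying idea, reduced to a flat UCB1 bandit over slice indices with a caller-supplied (hypothetical) `reward_fn`; the actual VR-DiagNet planner is a full tree search optimized jointly with the classifier.

```python
import math
import random

def select_slice_of_interest(num_slices, reward_fn, iterations=2000, c=1.4, seed=0):
    """Pick the slice index with the best empirical reward.

    A flat UCB1 bandit over slice indices -- a simplified stand-in for an
    MCTS-style search for slices of interest. `reward_fn(slice_idx, rng)`
    returns a noisy score for inspecting that slice (e.g. a classifier's
    confidence); it is an assumed interface, not the paper's API.
    """
    rng = random.Random(seed)
    visits = [0] * num_slices
    totals = [0.0] * num_slices
    for t in range(1, iterations + 1):
        if t <= num_slices:
            arm = t - 1  # play every slice once first
        else:
            # UCB1: empirical mean plus an exploration bonus that shrinks
            # as a slice accumulates visits.
            arm = max(
                range(num_slices),
                key=lambda a: totals[a] / visits[a]
                + c * math.sqrt(math.log(t) / visits[a]),
            )
        totals[arm] += reward_fn(arm, rng)
        visits[arm] += 1
    # Return the slice with the best empirical mean reward.
    return max(range(num_slices), key=lambda a: totals[a] / visits[a])

# Usage sketch: a synthetic reward peaked at slice 7 of a 16-slice volume.
best = select_slice_of_interest(
    16, lambda s, rng: 1.0 - abs(s - 7) / 16 + rng.gauss(0, 0.05)
)
```

The design choice UCB1 makes, balancing exploitation of high-scoring slices against exploration of rarely visited ones, is the same trade-off the selection step of MCTS resolves at every tree node.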

Related works