This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Hybrid AI Framework for Accurate Diagnostics: Merging Deep Learning with Rule-Based and Explainable Techniques Across Imaging Modalities
0
Citations
3
Authors
2025
Year
Abstract
This paper presents a robust hybrid artificial intelligence (AI) framework designed to improve diagnostic accuracy across medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). The improvement is achieved by combining deep learning methods with rule-based reasoning and model-agnostic explainability techniques. The proposed hybrid architecture integrates convolutional neural networks (CNNs) and transformer models for feature extraction, while incorporating expert-defined rule-based logic to strengthen interpretability and ensure consistency in decision-making. To improve transparency, model-agnostic explainability approaches such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) are applied, offering deeper insight into AI-driven diagnoses. This integration addresses critical clinical concerns related to trust, traceability, and adoption of AI systems. Experimental evaluations conducted on benchmark medical imaging datasets achieved an accuracy of 94.2%, which is 3.8% higher than conventional deep learning approaches, demonstrating improved robustness. Additionally, the proposed system enhances clinical interpretability, with a 21% increase in explainability ratings provided by healthcare professionals. These findings highlight the novelty and clinical relevance of hybrid AI models that combine automation with decision support, thereby promoting greater trust in, and adoption of, AI in diagnostic workflows across healthcare facilities.
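The model-agnostic attribution idea the abstract refers to (SHAP assigns each input feature its average marginal contribution to a prediction, across all feature coalitions) can be sketched exactly for a small toy model. This is an illustrative sketch only, not the paper's implementation: the scoring function, feature names, and weights below are hypothetical stand-ins for the CNN/transformer pipeline.

```python
from itertools import combinations
from math import factorial

def model(features):
    """Toy black-box scorer standing in for the imaging model
    (hypothetical linear weights, for illustration only)."""
    w = {"lesion_size": 0.5, "contrast": 0.3, "texture": 0.2}
    return sum(w[k] * v for k, v in features.items())

def shapley_values(model, features, baseline=0.0):
    """Exact Shapley attributions for a small feature set.
    Model-agnostic: only queries the model on perturbed inputs,
    replacing 'absent' features with a baseline value."""
    names = list(features)
    n = len(names)

    def value(coalition):
        # Evaluate the model with features outside the coalition
        # masked to the baseline.
        return model({k: (features[k] if k in coalition else baseline)
                      for k in names})

    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for r in range(len(others) + 1):
            # Shapley weight for coalitions of size r.
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            for subset in combinations(others, r):
                coal = set(subset)
                total += weight * (value(coal | {f}) - value(coal))
        phi[f] = total
    return phi

inputs = {"lesion_size": 0.8, "contrast": 0.6, "texture": 0.4}
attributions = shapley_values(model, inputs)
```

For a linear model with a zero baseline, each attribution collapses to weight times feature value (e.g. `lesion_size` → 0.5 × 0.8 = 0.4), and the attributions sum to the prediction, which is the efficiency property that makes Shapley-based explanations attractive for clinical traceability. Libraries such as `shap` approximate this computation for high-dimensional inputs like images, where exact enumeration is infeasible.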
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations