OpenAlex · Updated hourly · Last updated: 21.03.2026, 13:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Multimodal Deep Learning in Healthcare: A Survey of Current Approaches

2025 · 0 citations · International Research Journal on Advanced Engineering Hub (IRJAEH) · Open Access
Open full text at publisher

Citations: 0

Authors: 2

Year: 2025

Abstract

Multimodal data integration is widely viewed as the next step in the transformation of modern healthcare, as it improves patient outcomes and clinical decision-making. When a multimodal dataset combines medical images, electronic health records, wearable sensor data, genetic information, and behavioral insights, a far clearer picture of patient health emerges. Traditional data-analysis methods struggle to handle such complexity and diversity. This paper proposes a multimodal deep learning framework that exploits feature extraction, optimal feature selection, and explainable AI techniques to detect and predict diseases. The proposed system uses data fusion techniques to combine the various data sources efficiently, improving diagnostic accuracy and reliability. Furthermore, by applying explainable AI techniques, the model ensures transparent decision-making and helps clinicians understand the contribution of each modality to diagnostic results. A Python implementation of this framework yields promising results for disease classification and prediction, underscoring the potential of AI-driven multimodal healthcare to improve medical diagnosis and individualized therapy.
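The abstract describes data fusion across modalities plus an explainability step that attributes predictions to individual modalities. Since the full article is not reproduced here, the sketch below is not the authors' method; it is a minimal illustration, assuming hypothetical feature dimensions, early (concatenation-based) fusion, a stand-in linear scorer with random weights, and a simple occlusion-style attribution in place of a full explainable-AI technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient feature vectors from two modalities
# (e.g., imaging-derived features and EHR-derived features).
image_feats = rng.normal(size=(4, 8))  # 4 patients, 8 image features
ehr_feats = rng.normal(size=(4, 5))    # 4 patients, 5 EHR features

# Early fusion: concatenate modality features into one vector per patient.
fused = np.concatenate([image_feats, ehr_feats], axis=1)

# Stand-in linear scorer with fixed random weights; a real system would
# learn these weights (e.g., via a deep network) from labeled data.
weights = rng.normal(size=fused.shape[1])

def predict(x):
    # Sigmoid over a linear score -> pseudo disease probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-x @ weights))

baseline = predict(fused)

# Occlusion-style attribution: zero out one modality's feature block and
# measure how much the prediction shifts. The larger the shift, the more
# that modality contributed to the diagnostic score for that patient.
occluded_image = fused.copy()
occluded_image[:, :8] = 0.0
occluded_ehr = fused.copy()
occluded_ehr[:, 8:] = 0.0
image_contrib = np.abs(baseline - predict(occluded_image))
ehr_contrib = np.abs(baseline - predict(occluded_ehr))
```

Per-patient, `image_contrib` and `ehr_contrib` give a crude modality-level importance signal of the kind the abstract attributes to its explainable-AI component; published systems would typically use established attribution methods (e.g., SHAP or gradient-based saliency) instead.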


Topics

Biomedical Text Mining and Ontologies · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare