OpenAlex · Updated hourly · Last updated: 15.03.2026, 20:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Unveiling the Black Box: Explainable AI in Diabetes and Cancer Care

2025 · 0 citations · International Journal of Deep Tech in Medical Science and Technology · Open Access
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2025

Abstract

The integration of artificial intelligence (AI) into healthcare has yielded significant advancements in diagnostics, prognostics, and personalised treatment pathways, particularly in the domains of diabetes and cancer care. However, the opaque “black-box” nature of many AI models hinders their clinical adoption due to limited interpretability and trust. This review explores the transformative potential of Explainable Artificial Intelligence (XAI) in enhancing transparency and accountability within AI-driven medical systems. Focusing on diabetes and oncology, the study systematically synthesises literature published between 2021 and 2025, highlighting how XAI techniques—such as SHAP, LIME, Grad-CAM, and federated learning—bridge the gap between complex algorithms and clinical interpretability. In diabetes care, XAI enables the visualisation of critical predictors like HbA1c levels, BMI, and glucose trends, thereby improving diagnostic precision, insulin dosing, and complication forecasting. In cancer research, XAI supports early detection, subtype classification, and personalised drug recommendations across various malignancies including breast, lung, colorectal, cervical, and ovarian cancers. The review also delves into bibliometric analyses, revealing a global surge in XAI-related healthcare research, with India leading the publication count. Additionally, the paper addresses ethical, regulatory, and equity considerations, emphasising how XAI can detect bias, support FDA and EMA compliance, and enhance model generalizability. By showcasing use cases across multi-modal data applications—spanning genomics, imaging, and metabolomics—the study underlines XAI’s critical role in fostering clinician trust, patient safety, and transparent AI governance. Recommendations for future research include developing clinically integrated XAI tools, advancing real-world validations, and supporting collaborative, privacy-preserving data ecosystems. 
Ultimately, XAI is positioned not merely as a technical add-on but as a foundational enabler for trustworthy, human-centric AI in healthcare.
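The abstract notes that techniques such as SHAP surface the contribution of individual predictors (e.g. HbA1c, BMI, glucose trends) to a model's output. As an illustrative sketch only, and not taken from the reviewed paper: the code below computes exact Shapley attributions by enumerating feature coalitions over a deliberately simple, hypothetical linear diabetes-risk score. All feature names, weights, and values here are invented for illustration; real SHAP tooling approximates the same quantity efficiently for complex models.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear risk score over three illustrative predictors.
# Weights and values are invented for this sketch, not clinical parameters.
WEIGHTS = {"hba1c": 0.8, "bmi": 0.3, "glucose": 0.5}

def model(x):
    """Toy risk score: weighted sum of feature values."""
    return sum(WEIGHTS[k] * v for k, v in x.items())

def shapley_values(x, baseline):
    """Exact Shapley attributions via enumeration of feature coalitions.

    Features absent from a coalition take their baseline value; the
    attribution for feature f is the coalition-weighted average of the
    change in model output when f switches from baseline to its value.
    """
    feats = list(x)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                with_f = {g: (x[g] if g in coal or g == f else baseline[g])
                          for g in feats}
                without_f = {g: (x[g] if g in coal else baseline[g])
                             for g in feats}
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

patient = {"hba1c": 8.1, "bmi": 31.0, "glucose": 160.0}
baseline = {"hba1c": 5.5, "bmi": 25.0, "glucose": 100.0}
print(shapley_values(patient, baseline))
```

For a linear model the attributions reduce to weight times the deviation from baseline, and they always sum to the difference between the patient's score and the baseline score, which is the additivity property that makes SHAP-style explanations readable for clinicians.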

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare