This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI Models for Healthcare Diagnostics
0
Citations
1
Author
2025
Year
Abstract
Artificial Intelligence (AI) has transformed healthcare diagnostics, enabling computer-based tools to identify medical conditions from imaging, laboratory data, genomics, and electronic health records. However, black-box AI models, especially deep learning systems, lack transparency, which hinders their adoption in clinical settings where interpretability and reliability are critical. Explainable AI (XAI) provides human-interpretable insight into how a model reaches its decisions, addressing challenges of trust, bias, and regulatory compliance. This paper discusses the structure, procedures, and applications of XAI in healthcare diagnostics, reviews the most common explainability techniques (LIME, SHAP, Grad-CAM, and interpretable decision trees), and proposes a conceptual model for integrating XAI into clinical workflows. Experimental evidence collected on open medical datasets shows that XAI can increase clinician trust, reduce diagnostic error rates, and surface potential biases. The paper concludes that effective monitoring of AI-driven healthcare systems through XAI is necessary to guarantee their safe, transparent, and ethical deployment.
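To illustrate the core idea behind local, model-agnostic explanation methods such as LIME mentioned in the abstract, the sketch below fits a locally weighted linear surrogate around a single prediction. This is a simplified illustration in plain NumPy under our own assumptions (Gaussian perturbations, an exponential proximity kernel, a toy linear "diagnostic model"), not the paper's code or the full LIME library.

```python
import numpy as np

def lime_style_explanation(predict_fn, instance, n_samples=500,
                           kernel_width=0.75, seed=0):
    """Approximate a model locally with a weighted linear surrogate
    (the core idea behind LIME; a simplified sketch, not the library)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest with Gaussian noise
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)
    # Proximity weights: perturbations closer to the instance count more
    dist = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column
    Xb = np.column_stack([np.ones(n_samples), X])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return beta[1:]  # per-feature local effect (intercept dropped)

# Hypothetical toy "risk model": feature 0 dominates the prediction
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]
weights = lime_style_explanation(model, np.array([1.0, 2.0, 0.0]))
```

Because the toy model is exactly linear, the surrogate recovers its coefficients; for a real diagnostic model the weights describe only the local behaviour around the chosen patient record.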
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,241 cit.
"Why Should I Trust You?"
2016 · 14,227 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 cit.