This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for Healthcare: An Approach Towards Interpretable Healthcare Models
Citations: 11
Authors: 3
Year: 2023
Abstract
Artificial intelligence (AI), along with deep learning techniques, has become an integral part of almost all aspects of life. One of the domains significantly impacted by this technological revolution is healthcare. Deep learning-based AI systems assist clinicians and medical professionals in disease diagnosis, personalized treatment, and monitoring through wearables, among other applications. Despite its rapid integration into healthcare, the trustworthiness of deep learning models remains a concern, primarily due to a lack of understanding of their underlying processes. Explainable AI (XAI) addresses this by offering explanations through various methods, including Local Interpretable Model-agnostic Explanations (LIME), Shapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM). XAI is used to enhance transparency, allowing users to understand and trust AI decisions. In this study, we present deep learning models for the classification of pneumonia in chest X-ray images, followed by explanations of their predictions. Convolutional Neural Networks (CNNs) and pre-trained models, including VGG16, MobileNetV3, and ResNet50, were used to classify images as 'normal' or 'pneumonia'. The VGG16 model, known for its strong image understanding capabilities, achieved the highest accuracy at 93%. We then applied the XAI techniques SHAP, LIME, and Grad-CAM to explain the models. In our experiments, LIME and Grad-CAM provided more accurate results than SHAP. This approach was taken to evaluate the fairness and transparency of the model. The insights gained from XAI can be used to refine and improve machine learning models by identifying areas of weakness or misinterpretation, which increases overall model robustness.
Related Works
"Why Should I Trust You?"
2016 · 14,227 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,601 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,387 citations