This is an overview page with metadata for this scientific work. The full article is available from the publisher.
TIxAI: A Trustworthiness Index for eXplainable AI in skin lesions classification
Citations: 11
Authors: 5
Year: 2025
Abstract
Skin cancer is one of the leading causes of mortality worldwide. Early diagnosis can ensure more effective patient treatment and outcomes, but this is challenging due to the high similarity between different skin lesion types. There is a growing interest in developing Artificial Intelligence (AI)-based systems for automated skin lesion classification. However, current AI models are not transparent, leading to a lack of trust from clinicians who struggle to interpret and validate AI decisions. To this end, in this paper, a fine-tuned EfficientNet-B0-based classifier is first developed to classify dermoscopic images of Melanoma (MEL), Nevus (NV), and Seborrheic Keratosis (SK) skin lesions gathered from the International Skin Imaging Collaboration (ISIC) dataset. Next, the explainability of the model is investigated. In particular, a new Trustworthiness Index for eXplainable AI, herein referred to as TIxAI, is proposed. The TIxAI is based on the difference between the relevance degree of the lesion and non-lesion areas, leading to the conclusion that the higher the TIxAI, the more trustworthy the classifier is expected to be. Experimental results support the use of the proposed TIxAI to assess and benchmark the reliability of classifiers also in other real-world applications.
• Skin lesion classification using EfficientNet-B0 and a revised ISIC-17 dataset.
• Explainability of results via xAI techniques to assess model reliability.
• Proposal of TIxAI, a new index for measuring xAI trustworthiness in classification.
• Development of a trustworthy classification system for potential clinical deployment.
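The abstract does not give the exact formulation of TIxAI, so the following is only a minimal sketch of the ideas it describes: an EfficientNet-B0 fine-tuned for the three lesion classes, and a TIxAI-like score computed as the difference between the mean relevance inside and outside a lesion mask. The relevance map is assumed to come from a Grad-CAM-style xAI method, and the function `tixai_score` is a hypothetical stand-in, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

NUM_CLASSES = 3  # MEL, NV, SK

def build_classifier() -> nn.Module:
    """EfficientNet-B0 with its classification head replaced for three lesion classes."""
    model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)
    return model

def tixai_score(relevance: torch.Tensor, lesion_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical TIxAI-style score (assumption, not the paper's formula):
    mean relevance inside the lesion minus mean relevance outside it,
    on a relevance map normalised to [0, 1].

    relevance:   (H, W) non-negative saliency/relevance map (e.g. Grad-CAM).
    lesion_mask: (H, W) binary segmentation mask of the lesion area.
    """
    rel = (relevance - relevance.min()) / (relevance.max() - relevance.min() + 1e-8)
    mask = lesion_mask.bool()
    inside = rel[mask].mean() if mask.any() else torch.tensor(0.0)
    outside = rel[~mask].mean() if (~mask).any() else torch.tensor(0.0)
    return inside - outside  # higher -> explanation focuses on the lesion area
```

Under this reading, a score near its maximum means the explanation concentrates almost entirely on the lesion, while a score near zero or below means the model's evidence lies largely in non-lesion regions, which matches the abstract's claim that a higher TIxAI indicates a more trustworthy classifier.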
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations