This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Discernibility in explanations: Designing more acceptable and meaningful machine learning models for medicine
Citations: 2
Authors: 10
Year: 2025
Abstract
Although the benefits of machine learning are undeniable in healthcare, explainability plays a vital role in improving transparency and understanding the most decisive and persuasive variables for prediction. The challenge is to identify explanations that make sense to the biomedical expert. This work proposes *discernibility* as a new approach to faithfully reflect human cognition, based on the user's perception of a relationship between explanations and data for a given variable. A total of 50 participants (19 biomedical experts and 31 data scientists) evaluated their perception of the discernibility of explanations from both synthetic and human-based datasets (National Health and Nutrition Examination Survey). The low inter-rater reliability of discernibility (Intraclass Correlation Coefficient < 0.5), with no significant difference between areas of expertise or levels of education, highlights the need for an objective metric of discernibility. Thirteen statistical coefficients were evaluated for their ability to capture, for a given variable, the relationship between its values and its explanations using Passing-Bablok regression. Among these, dcor was shown to be a reliable metric for assessing the discernibility of explanations, effectively capturing the clarity of the relationship between the data and their explanations, and providing clues to underlying pathophysiological mechanisms not immediately apparent when examining individual predictors. Discernibility can also serve as an evaluation metric for model quality, helping to prevent overfitting and aiding in feature selection, ultimately providing medical practitioners with more accurate and persuasive results.
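The paper itself does not provide code on this page. As a minimal sketch of the idea, assuming the explanations are per-sample SHAP attributions and that "dcor" refers to the distance correlation coefficient (as implemented in the Python `dcor` package), a discernibility score per feature might be computed like this; the synthetic data, model, and explainer choice below are illustrative assumptions, not the authors' setup:

```python
# Sketch (not the authors' code): score the "discernibility" of each feature
# as the distance correlation between the feature's values and its
# per-sample explanations.
import numpy as np
import dcor   # pip install dcor
import shap   # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: y depends smoothly on x0, weakly on x1, not at all on x2.
n = 500
X = rng.normal(size=(n, 3))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-sample, per-feature explanations (SHAP values here; the approach is
# agnostic to which explainer produced them).
shap_values = shap.TreeExplainer(model).shap_values(X)  # shape (n, 3)

# Discernibility per feature: how clearly do the explanations track the data?
for j in range(X.shape[1]):
    d = dcor.distance_correlation(X[:, j], shap_values[:, j])
    print(f"feature x{j}: dcor = {d:.3f}")
```

Under this reading, a high dcor means the attributions vary systematically with the underlying variable (a discernible explanation), while a near-zero value for a supposedly informative feature could flag overfitting or a candidate for removal, consistent with the uses of discernibility the abstract describes.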
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations
Authors
Institutions
- Centre National de la Recherche Scientifique (FR)
- Inserm (FR)
- École Nationale Vétérinaire de Toulouse (FR)
- Université Fédérale de Toulouse Midi-Pyrénées (FR)
- Université Toulouse III - Paul Sabatier (FR)
- Université Toulouse-I-Capitole (FR)
- Institut de Recherche en Informatique de Toulouse (FR)
- Université Toulouse - Jean Jaurès (FR)
- Institut Polytechnique de Bordeaux (FR)
- Centre Hospitalier Universitaire de Toulouse (FR)
- Université de Tours (FR)
- Institut de Mathématiques de Toulouse (FR)