This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Towards inclusive explainable artificial intelligence: a thematic analysis and scoping review on tools for persons with disabilities
Citations: 0
Authors: 2
Year: 2025
Abstract
Findings reveal a strong concentration on neurological conditions (such as Alzheimer's disease, autism spectrum disorder and Parkinson's disease), with limited focus on orthopaedic, sensory and spinal impairments. SHAP was the most common explanation model, followed by LIME, LRP-B and Grad-CAM. Accessibility goals centred on clinical transparency, user comprehension, sensory/cognitive adaptation and trust in low-resource settings. Thematic analysis identified three overarching dimensions: modelling techniques; decision-making and trust; and diverse application contexts. Expanding XAI to underrepresented impairments and embedding multimodal, user-centred explanations into rehabilitation workflows (through participatory design, ethical oversight and standardised evaluation) can enhance autonomy, improve personalisation and support more effective, equitable care.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations