This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainability in AI-enabled medical neurotechnology: a scoping review
Citations: 0
Authors: 3
Year: 2026
Abstract
Artificial Intelligence (AI) approaches, including Machine Learning (ML) and other complex algorithms, are driving progress in medical closed-loop neurotechnology, including neurostimulation systems and brain–computer interfaces (BCIs). These advances are transforming the treatment landscape for neurological and psychiatric conditions. However, the inherent opacity of many AI models raises clinical, epistemological, and ethical challenges. Explainability is widely recognized as a critical requirement for addressing these challenges, yet its concrete application in neurotechnology remains insufficiently explored. Objective. This scoping review maps how Explainable AI (XAI) methods are implemented in AI-enabled closed-loop neurotechnologies and examines how explainability is conceptualized and operationalized in this domain. Approach. Following JBI guidance and PRISMA-ScR, we systematically searched five databases for original research on AI-enabled medical closed-loop neurotechnologies targeting neurological and psychiatric conditions. A total of 161 studies were included and analyzed using descriptive statistics and qualitative content analysis to identify the presence and framing of XAI methods. Main results. Explainable AI adoption in medical neurotechnology is limited: only 14 studies (9%) employed explicit XAI techniques. Thematic analysis of the full corpus identified three recurring barriers and focal points: (A) technical constraints and proposed workarounds, (B) challenges in explanation quality and system transparency, and (C) the relationship between explainability and user trust. Significance. Although closed-loop neurotechnologies are rapidly advancing, explainability is rarely implemented in practice, constraining transparency, accountability, and clinical usability. Our findings reveal key factors behind the explainability gap and provide a framework to guide future research and development.
Addressing this shortfall is essential for fostering ethically sound, clinically effective, and patient-trusted neurotechnological applications.
Similar works
EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis
2004 · 24,383 citations
FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data
2010 · 11,079 citations
Principles of neural science
1982 · 9,169 citations
Nonparametric statistical testing of EEG- and MEG-data
2007 · 8,992 citations
The human brain is intrinsically organized into dynamic, anticorrelated functional networks
2005 · 8,762 citations