OpenAlex · Updated hourly · Last update: 18 Mar 2026, 06:35


Explainability in AI-enabled medical neurotechnology: a scoping review

2026 · 0 citations · Journal of NeuroEngineering and Rehabilitation · Open Access

0 citations · 3 authors · published 2026

Abstract

Artificial Intelligence (AI) approaches, including Machine Learning (ML) and other complex algorithms, are driving progress in medical closed-loop neurotechnology, including neurostimulation systems and brain–computer interfaces (BCIs). These advances are transforming the treatment landscape for neurological and psychiatric conditions. However, the inherent opacity of many AI models raises clinical, epistemological, and ethical challenges. Explainability is widely recognized as a critical requirement for addressing these challenges, yet its concrete application in neurotechnology remains insufficiently explored. Objective. This scoping review maps how Explainable AI (XAI) methods are implemented in AI-enabled closed-loop neurotechnologies and examines how explainability is conceptualized and operationalized in this domain. Approach. Following JBI guidance and PRISMA-ScR, we systematically searched five databases for original research on AI-enabled medical closed-loop neurotechnologies targeting neurological and psychiatric conditions. A total of 161 studies were included and analyzed using descriptive statistics and qualitative content analysis to identify the presence and framing of XAI methods. Main results. Explainable AI adoption in medical neurotechnology is limited: only 14 studies (9%) employed explicit XAI techniques. Thematic analysis of the full corpus identified three recurring barriers and focal points: (A) technical constraints and proposed workarounds, (B) challenges in explanation quality and system transparency, and (C) the relationship between explainability and user trust. Significance. Although closed-loop neurotechnologies are rapidly advancing, explainability is rarely implemented in practice, constraining transparency, accountability, and clinical usability. Our findings reveal key factors behind the explainability gap and provide a framework to guide future research and development. Addressing this shortfall is essential for fostering ethically sound, clinically effective, and patient-trusted neurotechnological applications.

Topics

EEG and Brain-Computer Interfaces · Neurological disorders and treatments · Artificial Intelligence in Healthcare and Education