This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Systematic Review of Explainable Artificial Intelligence for Epileptic Seizure Onset Early Warning: Towards Responsible Artificial Intelligence
Citations: 0 · Authors: 3 · Year: 2026
Abstract
A substantial body of literature has been published on epileptic seizures. However, adequate evidence is still lacking to demonstrate that applying explainable artificial intelligence to epileptic seizures can ensure an individual's safety. Furthermore, there is a need to define the fundamental challenges and opportunities in current state-of-the-art solutions and to guide efforts towards responsible artificial intelligence. This review aims to identify the fundamental challenges and opportunities in existing state-of-the-art solutions for explainable artificial intelligence-based early warning of epileptic seizure onset, working towards responsible artificial intelligence. The report was developed using the PRISMA checklist. Papers were drawn from original articles and prior conference studies published in reputable databases, including PubMed, IEEE Xplore, ScienceDirect, Scopus and Google Scholar, from January 2019 to 17 November 2024. The Rayyan online platform was used to identify duplicates and to manage inclusion and exclusion of papers. The systematic review protocol was registered in the PROSPERO database. The included papers were assessed against Microsoft's Responsible Artificial Intelligence Impact Assessment Template; Principle 3 (transparency and explainability) yielded a high-risk rating. A total of 26 studies were included based on the established inclusion and exclusion criteria. This study found that 14.29% of responsible artificial intelligence principles were applied in at least one paper with a high-risk rating. The results indicate that, to transform researched solutions into practical applications, epileptic monitoring applications should be tested against the eight principles set by Microsoft. Black-box explanations lack insight into deep internal features and operational methods, suggesting that further investigation is necessary. Systematic Review Registration ID: CRD42024544.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,968 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,360 citations
"Why Should I Trust You?"
2016 · 14,714 citations
Generative adversarial networks
2020 · 13,338 citations