This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Requirements Driven Explainable Artificial Intelligence Framework for Secure and Transparent Clinical Decision Support Systems
Citations: 0
Authors: 6
Year: 2026
Abstract
In the medical field, where clinical decision support systems have a significant impact on vital medical decisions, there is an urgent need for transparent and secure artificial intelligence solutions. This research offers a thorough framework that combines explainable artificial intelligence (XAI) methods with requirements engineering concepts to improve the security and transparency of clinical decision support systems. The framework uses goal modeling (Knowledge Acquisition in Automated Specification), stakeholder analysis (Use Case Modeling), and concern separation (Aspect-Oriented Requirements Engineering) to ensure that system explanations are aligned with stakeholder needs while addressing privacy, compliance, and safety requirements. The proposed approach is evaluated on a real-world medical dataset, demonstrating improvements in explanation consistency, requirement alignment, and robustness under security constraints. These results highlight the potential of integrating requirements engineering with XAI to support secure, interpretable, and accountable AI-driven clinical decision-making.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,218 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 citations