OpenAlex · Updated hourly · Last updated: 18.03.2026, 21:15

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Requirements Driven Explainable Artificial Intelligence Framework for Secure and Transparent Clinical Decision Support Systems

2026 · 0 citations · IEEE Access · Open Access

Citations: 0
Authors: 6
Year: 2026

Abstract

In the medical field, where clinical decision support systems have a significant impact on vital medical decisions, there is an urgent need for transparent and secure artificial intelligence solutions. This research offers a thorough framework that combines explainable artificial intelligence (XAI) methods with requirements engineering concepts to improve the security and transparency of clinical decision support systems. The framework uses goal modeling (Knowledge Acquisition in autOmated Specification, KAOS), stakeholder analysis (use case modeling), and concern separation (aspect-oriented requirements engineering) to ensure that system explanations are aligned with stakeholder needs while addressing privacy, compliance, and safety requirements. The proposed approach is evaluated on a real-world medical dataset, demonstrating improvements in explanation consistency, requirement alignment, and robustness under security constraints. These results highlight the potential of integrating requirements engineering with XAI to support secure, interpretable, and accountable AI-driven clinical decision-making.
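The abstract's notion of "requirement alignment" — checking that a model's explanations cover the features stakeholder requirements flag as clinically relevant — could be sketched roughly as below. This is a purely illustrative sketch; the function name, metric, and toy data are assumptions, not the paper's actual method.

```python
# Hypothetical sketch: score how well an explanation's top-ranked features
# overlap with features that a stakeholder requirement marks as clinically
# relevant. All names and data here are illustrative, not from the paper.

def requirement_alignment(attributions, required_features, top_k=3):
    """Fraction of requirement-flagged features found among the top-k attributed ones."""
    top = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)[:top_k]
    hits = sum(1 for f in top if f in required_features)
    return hits / len(required_features)

# Toy feature attributions (e.g. from a SHAP-style explainer) for one prediction.
attributions = {"blood_pressure": 0.42, "age": 0.31, "heart_rate": 0.12, "zip_code": 0.05}

# Features a clinician-stakeholder requirement says the explanation must cover.
required = {"blood_pressure", "heart_rate"}

score = requirement_alignment(attributions, required, top_k=3)
print(f"requirement alignment: {score:.2f}")  # → requirement alignment: 1.00
```

A score below 1.0 would flag an explanation that ranks requirement-critical features too low, which is one plausible way such a metric could support the auditability goals the abstract describes.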
