This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Trust-Aware XAI (TAXAI) framework: a quantitative model for interpretable and reliable clinical AI systems
Citations: 0
Authors: 3
Year: 2026
Abstract
Explainable Artificial Intelligence (XAI) plays a vital role in ensuring transparency, trust, and accountability in clinical decision support systems. However, most existing XAI techniques, such as SHAP, LIME, Grad-CAM, and DeepLIFT, provide post-hoc explanations without quantifying trust, ethical alignment, or governance readiness. As a result, interpretability alone does not reliably translate into dependable AI-based clinical decision support. This work presents Trust-Aware Explainable Artificial Intelligence (TAXAI), a framework that operationalizes explainability as a quantifiable, governance-focused concept. TAXAI combines algorithmic transparency (evaluating the fidelity of explanations), interpretability alignment (reflecting consistency with expert reasoning), and compliance and reliability (assessing fairness, robustness, and reproducibility). These components are unified through a mathematically grounded, normalized Trust Index, enabling systematic and comparable trust evaluation across different models and datasets. The framework is demonstrated on representative radiology and pathology benchmarks using machine-learning and deep-learning models coupled with established XAI methods, yielding stable Trust Index values (0.85–0.94) across diverse medical tasks in illustrative benchmark settings. The experimental results show that TAXAI provides stable, reproducible, and mathematically interpretable trust quantification across diverse explainability methods and benchmark datasets.
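The paper's exact Trust Index formula is not given on this overview page, but the abstract describes it as a normalized combination of three component scores. The sketch below illustrates one plausible form, a normalized weighted average; the function name, component names, and weights are all assumptions for illustration, not the authors' actual definition.

```python
def trust_index(transparency, alignment, compliance, weights=(1.0, 1.0, 1.0)):
    """Combine three component scores in [0, 1] into a normalized Trust Index.

    Hypothetical sketch: components mirror the abstract's algorithmic
    transparency, interpretability alignment, and compliance/reliability.
    """
    scores = (transparency, alignment, compliance)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("component scores must lie in [0, 1]")
    total = sum(weights)
    # Weighted average keeps the index normalized to [0, 1].
    return sum(w * s for w, s in zip(weights, scores)) / total

# Example: equal weighting of three illustrative component scores.
ti = trust_index(0.90, 0.88, 0.86)
```

With equal weights this reduces to the arithmetic mean of the three components, which stays in [0, 1] and is directly comparable across models, matching the normalization property the abstract emphasizes.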
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,800 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,335 citations
"Why Should I Trust You?"
2016 · 14,610 citations
Generative adversarial networks
2020 · 13,218 citations