This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI-Powered Autonomous Systems: Enhancing Trust and Transparency in Critical Applications
Citations: 1
Authors: 6
Year: 2025
Abstract
Explainable Artificial Intelligence (XAI) is pivotal in enhancing trust and transparency in autonomous systems deployed in critical applications such as healthcare, transportation, and defense. This study proposes an XAI-powered framework that integrates interpretability into autonomous decision-making processes to ensure accountability and improve user trust. By leveraging methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and counterfactual reasoning, the framework provides clear, actionable insights into the decisions made by autonomous systems. Experimental evaluations in simulated healthcare and autonomous-driving environments demonstrate a 30% improvement in user trust, a 25% reduction in decision errors, and enhanced system usability without compromising performance. The framework's ability to explain complex decisions in real time makes it well suited for high-stakes applications with stringent compliance requirements. This study emphasizes the need for XAI in fostering collaboration between humans and machines, highlighting its potential to mitigate the black-box nature of AI and facilitate adoption in safety-critical domains. Future work will focus on scaling XAI frameworks to multi-agent autonomous systems and on domain-specific customization of explanations. By addressing interpretability, this research contributes to the development of reliable, ethical, and human-centric autonomous systems.
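The abstract names SHAP among the framework's interpretability methods. As a minimal illustration of the underlying idea, not the paper's implementation, the sketch below computes exact Shapley values for a toy model by enumerating all feature coalitions; the function names and the baseline-substitution scheme for "missing" features are assumptions made for this example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.

    A feature absent from a coalition is replaced by its baseline value.
    Exhaustive enumeration is exponential in the number of features, so
    this is only practical for small toy models; SHAP approximates it.
    """
    n = len(x)

    def eval_coalition(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (eval_coalition(set(S) | {i}) - eval_coalition(set(S)))
    return phi

# For a linear model, feature i's Shapley value is w_i * (x_i - baseline_i),
# and the values sum to f(x) - f(baseline) (the efficiency property).
f = lambda z: 2.0 * z[0] + 3.0 * z[1] - 1.0 * z[2]
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

For this linear toy model the attributions are exactly 2.0, 6.0, and -3.0, and they sum to the model output minus the baseline output, which is the additivity guarantee SHAP-style explanations rely on.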
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations