This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance
Citations: 41
Authors: 3
Year: 2020
Abstract
This paper examines the value and necessity of XAI (Explainable Artificial Intelligence) when using DNNs (Deep Neural Networks) for PM (Predictive Maintenance), in the context of aerospace IVHM (Integrated Vehicle Health Management). An XAI system is necessary so that the result of an AI (Artificial Intelligence) solution is clearly explained and understood by a human expert. This would allow an IVHM system to use XAI-based PM to improve the effectiveness of its predictive models, and to utilize that information to assess the health of subsystems and their effect on the aircraft. Even when the underlying mathematical principles of DNNs are understood, the models offer no intelligible insight into how a decision is reached and cannot generate the underlying explanatory structures (i.e. they are black boxes). This calls for a process, or system, that makes decisions explainable, transparent, and understandable. It is argued that research in XAI would generally help accelerate the implementation of AI/ML (Machine Learning) in the aerospace domain, and specifically help facilitate compliance, transparency, and trust. This paper covers the following areas:
- Challenges and benefits of AI-based PM in aerospace
- Why XAI is required for DNNs in aerospace PM
- Evolution of XAI models and industry adoption
- A framework for XAI using XPA (Explainability Parameters)
- Discussion of future research on adopting XAI and DNNs to improve IVHM
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations