This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Role of Multi-modal Machine Learning, Explainable AI and Human-AI Teaming in Trusted Intelligent Systems for Remote Digital Towers
Citations: 2
Authors: 8
Year: 2024
Abstract
Remote digital towers (RDTs) represent a transformative advancement in air traffic management (ATM), leveraging cutting-edge technology to enable remote operation by air traffic controllers (ATCOs) while improving efficiency and safety. In the context of RDTs, artificial intelligence (AI), multimodal machine learning (MML), and explainable AI (XAI) are playing an increasingly pivotal role in enhancing operational efficiency and safety. However, several challenges need to be addressed, including the development of AI, MML, and XAI; research into functional requirements; and the identification of inputs for user and machine interfaces, as well as customization options. This study explores the use of XAI in addressing specific air traffic control challenges: by offering transparent, comprehensible, and actionable insights, XAI fosters resilient, efficient, and closer collaboration between human operators and AI systems. The study defines the specifications for taxiway and runway monitoring and decision support within the RDT domain. It outlines the functional requirements for customized solutions, including XAI, human-centred XAI, human-machine interfaces (HMI), and human-AI teaming (HAIT). A systematic literature review is conducted to assess transparency in AI, with a focus on explainability, HMI, and graphical user interfaces (GUI) within human-centred XAI for RDTs. Additionally, the research identifies state-of-the-art techniques for interactive data visualization, human-centric AI model development, human-AI interaction interfaces, and HAIT, providing a multi-modal agent framework for future development in the RDT domain.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations