This is an overview page with metadata for this scientific work. The full article is available from the publisher.
XAI in Medicine: Analysis and Evaluation of XAI Tools and Legal Liability for Neural Networks; A Case Study on Tumor Image Classification
Citations: 2
Authors: 1
Year: 2024
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) have immense potential to revolutionize various fields, especially medicine. Deep learning models are increasingly used in the healthcare sector for image classification and disease diagnosis. However, explainability remains a major concern, and it is still unclear how much explanatory power current XAI tools have. This analysis therefore evaluates explainable AI (XAI) tools against the current Ethics Guidelines for Trustworthy AI from the European Commission, with the aim of determining the extent to which current explanation algorithms provide trustworthy, transparent, and explanatory support for black-box models. The question of liability arises from the lack of traceability and the increased use of black-box models: it is unclear which organization, individuals, or groups of people are liable in the event of a claim. This thesis analyzes the current draft legislation on the AI Act and the Legal Liability Directive with regard to the question of liability, and additionally examines the role of XAI tools in this context. XAI tools currently provide extensive capabilities for visualising model decisions and for explaining, in an understandable manner, the factors most likely to have contributed to an outcome. However, the analysis and evaluation of XAI tools revealed several opportunities for improvement. Moreover, the availability of XAI tools strongly influences the issue of liability, as traceability and transparency are crucial elements for the legal implementation of new technologies.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 citations