This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
IntelliLung AI-DSS Trustworthiness Evaluation Framework
Citations: 0
Authors: 11
Year: 2024
Abstract
The Artificial Intelligence Act was adopted by the European Parliament in March 2024 to establish a uniform legal framework for the development and uptake of human-centric and trustworthy artificial intelligence (AI) in Europe. Considering that AI may generate risks and cause harm, the approach for evaluating newly developed AI and decision support systems (DSS) will vary from domain to domain. This paper presents the approach that the European Union (EU) funded project IntelliLung is currently implementing in the healthcare domain. The IEC 62559-2:2015 methodology was used to structure the IntelliLung AI-DSS requirements and to define key performance indicators relevant to the functional parts of the system (data integration, pre-processing, AI modelling). "Ethics-by-design" and "Transparent AI design" methodologies have been used for the IntelliLung AI-DSS implementation. The compliance requirements analysis has shown that the trustworthiness of the AI-DSS relies on a wide set of measures, including information and communication security, explainable AI algorithms, data governance and privacy considerations, as well as risk management throughout the entire lifecycle of the AI-DSS as a high-risk AI system.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,469 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,358 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,803 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,542 citations