This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI trustworthiness in prostate cancer imaging: a look at algorithmic and system transparency
Citations: 2
Authors: 17
Year: 2023
Abstract
A responsible approach to artificial intelligence and machine learning technologies — grounded in sound scientific foundations, technical robustness, rigorous testing and validation, risk-based continuous monitoring, and alignment with human values — is imperative to guarantee their favorable impact and prevent any adverse effects on individuals and communities. An essential aspect of responsible development is transparency, a fundamental principle of the European approach to artificial intelligence. Transparency can be achieved at different levels, such as data origin and use, system development, operation, and usage. In this paper, we present the techniques implemented and delivered in the EU H2020 ProCAncer-I project to meet the transparency requirements at these different levels.

Clinical Relevance: This paper examines the primary transparency hurdles in artificial intelligence for medical imaging diagnostics, and presents the approaches that the EU H2020 project ProCAncer-I is taking to address them.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations