This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Documenting high-risk AI: a European regulatory perspective
Citations: 3
Authors: 5
Year: 2022
Abstract
<p>The increasing adoption of Artificial Intelligence (AI) systems in high-stakes applications brings new opportunities for innovation, economic growth and the digital transformation of society. However, this often comes with associated risks to the safety, health or fundamental rights of people, highlighting an urgent need for the systematic adoption of trustworthy AI practices. Transparency is key for building trust in AI systems, as it facilitates their understanding and scrutiny. This article discusses transparency obligations introduced in the AI Act, the recently proposed European regulatory framework for Artificial Intelligence. Specifically, we look at requirements for providers of high-risk AI systems in terms of provision of information to users and technical documentation. An analysis of the extent to which current approaches for AI documentation satisfy these requirements is presented, assessing their suitability as a basis for future technical standards and making recommendations for their potential development in this direction.</p>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations