This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Toward Global Validation Standards for Health AI
Citations: 12
Authors: 2
Year: 2020
Abstract
Machine learning (ML) and artificial intelligence (AI) methods hold great potential for healthcare, for example, for purposes of diagnosis or prognosis that include a wide range of pattern recognition tasks. Ensuring that health ML/AI models are trustworthy will consequently become increasingly important in the near future. The ITU/WHO focus group on "AI for Health" is working on validation standards for health AI that can help to assess the quality of these powerful but complex technologies in a comparable and transparent manner. In particular, standardized benchmarking can serve as a valuable tool to determine the merits and limits of different health ML/AI models. In this article, ongoing work of the ITU/WHO initiative is introduced and set into perspective with related digital health and AI standardization efforts.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations