This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Metrics reloaded: Recommendations for image analysis validation
62 citations · 74 authors · 2022
Abstract
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. Particularly in automatic biomedical image analysis, chosen performance metrics often do not reflect the domain interest, thus failing to adequately measure scientific progress and hindering translation of ML techniques into practice. To overcome this, our large international expert consortium created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. The framework was developed in a multi-stage Delphi process and is based on the novel concept of a problem fingerprint - a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), data set and algorithm output. Based on the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as a classification task at image, object or pixel level, namely image-level classification, object detection, semantic segmentation, and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool, which also provides a point of access to explore weaknesses, strengths and specific recommendations for the most common validation metrics. The broad applicability of our framework across domains is demonstrated by an instantiation for various biological and medical image analysis use cases.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
- Lena Maier‐Hein
- Annika Reinke
- Patrick Godau
- Minu D. Tizabi
- Florian Buettner
- Evangelia Christodoulou
- Ben Glocker
- Fabian Isensee
- Jens Kleesiek
- Michal Kozubek
- Mauricio Reyes
- Michael A. Riegler
- Manuel Wiesenfarth
- Ali Emre Kavur
- Carole H. Sudre
- Michael Baumgartner
- Matthias Eisenmann
- Doreen Heckmann-Nötzel
- Tim Rädsch
- Laura Ación
- Michela Antonelli
- Tal Arbel
- Spyridon Bakas
- Arriel Benis
- Matthew B. Blaschko
- M. Jorge Cardoso
- Veronika Cheplygina
- Beth A. Cimini
- Gary S. Collins
- Keyvan Farahani
- Luciana Ferrer
- Adrián Galdrán
- Bram van Ginneken
- Robert Haase
- Daniel A. Hashimoto
- Michael M. Hoffman
- Merel Huisman
- Pierre Jannin
- Charles E. Kahn
- Dagmar Kainmueller
- Bernhard Kainz
- Alexandros Karargyris
- Alan Karthikesalingam
- Hannes Kenngott
- Florian Kofler
- Annette Kopp‐Schneider
- Anna Kreshuk
- Tahsin Kurç
- Bennett A. Landman
- Geert Litjens
- Amin Madani
- Klaus Maier‐Hein
- Anne L. Martel
- Peter Mattson
- Erik Meijering
- Bjoern Menze
- Karel G. M. Moons
- Henning Müller
- Brennan Nichyporuk
- Felix Nickel
- Jens Petersen
- Nasir Rajpoot
- Nicola Rieke
- Julio Sáez-Rodríguez
- Clara I. Sánchez
- Shravya Shetty
- Maarten van Smeden
- Ronald M. Summers
- Abdel Aziz Taha
- Aleksei Tiulpin
- Sotirios A. Tsaftaris
- Ben Van Calster
- Gaël Varoquaux
- Paul F. Jäger