OpenAlex · Updated hourly · Last updated: 13 Mar 2026, 00:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluation metrics in medical imaging AI: fundamentals, pitfalls, misapplications, and recommendations

2025 · 19 citations · European Journal of Radiology Artificial Intelligence · Open Access

Citations: 19 · Authors: 11 · Year: 2025

Abstract

Robust assessment of artificial intelligence (AI) models in medical imaging is paramount for reliable clinical integration. This international collaborative review paper provides an overview of key evaluation metrics across diverse tasks, including classification, regression, survival analysis, detection, and segmentation, as well as specialized metrics for calibration, foundation models, large language models, and synthetic images. Challenges of comparing models statistically and translating metric scores to clinical practice are also discussed. For each section, the paper outlines fundamental metrics, identifies common pitfalls and misapplications, and offers recommendations for more robust evaluations. Key recommendations often involve utilizing multiple, complementary metrics tailored to the specific task and dataset properties, transparent reporting of methodology, and, critically, considering the clinical utility and real-world implications of model performance. Ultimately, effective evaluation requires a comprehensive, context-aware approach that goes beyond statistical metrics to ensure model trust and clinical relevance. The authors hope this review will serve as a practical reference for researchers aiming to implement robust and clinically meaningful AI evaluations in medical imaging.

• This review outlines the key metrics for evaluating medical imaging AI.
• Common pitfalls and misapplications are critically examined, with corresponding recommendations provided for each.
• Appropriate metric selection depends on the specific AI task.
• Foundation and generative models require broader evaluation methods, beyond traditional evaluation metrics.
• A multi-metric, context-aware evaluation is essential for reliability.
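To illustrate the multi-metric recommendation above, here is a minimal sketch (our own, not code from the paper) that reports three complementary classification metrics side by side: accuracy captures discrimination at one fixed threshold, ROC AUC captures threshold-free ranking quality, and the Brier score is additionally sensitive to calibration. Function names and the toy predictions are hypothetical.

```python
# Illustrative sketch (not from the reviewed paper): reporting several
# complementary metrics instead of relying on a single score.

def accuracy(y_true, y_prob, threshold=0.5):
    """Fraction of correct predictions at a fixed decision threshold."""
    preds = [1 if p >= threshold else 0 for p in y_prob]
    return sum(int(p == t) for p, t in zip(preds, y_true)) / len(y_true)

def roc_auc(y_true, y_prob):
    """Probability that a random positive scores above a random negative
    (ties count half): the Mann-Whitney formulation of ROC AUC."""
    pos = [p for p, t in zip(y_prob, y_true) if t == 1]
    neg = [p for p, t in zip(y_prob, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier_score(y_true, y_prob):
    """Mean squared error of predicted probabilities; unlike AUC,
    this penalizes poorly calibrated (over/under-confident) outputs."""
    return sum((p - t) ** 2 for p, t in zip(y_prob, y_true)) / len(y_true)

# Hypothetical toy labels and predicted probabilities.
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.6, 0.2]

print(accuracy(y_true, y_prob))     # discrimination at one threshold
print(roc_auc(y_true, y_prob))      # threshold-free ranking quality
print(brier_score(y_true, y_prob))  # calibration + discrimination
```

Note that on this toy data the model ranks every positive above every negative (AUC = 1.0) yet still incurs a nonzero Brier score, which is exactly why the review advises pairing discrimination metrics with calibration-sensitive ones.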

Related works