This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability
Citations: 10
Authors: 4
Year: 2021
Abstract
What is the state of the art in continual machine learning? Although a natural question for predominant static benchmarks, the notion to train systems in a lifelong manner entails a plethora of additional challenges with respect to set-up and evaluation. The latter have recently sparked a growing amount of critiques on prominent algorithm-centric perspectives and evaluation protocols being too narrow, resulting in several attempts at constructing guidelines in favor of specific desiderata or arguing against the validity of prevalent assumptions. In this work, we depart from this mindset and argue that the goal of a precise formulation of desiderata is an ill-posed one, as diverse applications may always warrant distinct scenarios. Instead, we introduce the Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The compass provides the visual means to both identify how approaches are practically reported and how works can simultaneously be contextualized in the broader literature landscape. In addition to promoting compact specification in the spirit of recent replication trends, it thus provides an intuitive chart to understand the priorities of individual systems, where they resemble each other, and what elements are missing towards a fair comparison.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,463 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14,314 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations