OpenAlex · Updated hourly · Last updated: 30.03.2026, 15:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability

2021 · 10 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 10 · Authors: 4 · Year: 2021

Abstract

What is the state of the art in continual machine learning? Although a natural question for predominant static benchmarks, the notion to train systems in a lifelong manner entails a plethora of additional challenges with respect to set-up and evaluation. The latter have recently sparked a growing amount of critiques on prominent algorithm-centric perspectives and evaluation protocols being too narrow, resulting in several attempts at constructing guidelines in favor of specific desiderata or arguing against the validity of prevalent assumptions. In this work, we depart from this mindset and argue that the goal of a precise formulation of desiderata is an ill-posed one, as diverse applications may always warrant distinct scenarios. Instead, we introduce the Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The compass provides the visual means to both identify how approaches are practically reported and how works can simultaneously be contextualized in the broader literature landscape. In addition to promoting compact specification in the spirit of recent replication trends, it thus provides an intuitive chart to understand the priorities of individual systems, where they resemble each other, and what elements are missing towards a fair comparison.

Similar works

Authors

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning and Data Classification