This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
25 citations · 5 authors · published 2021
Abstract
Counterfactual explanations provide means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favorable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect for meaningful counterfactual explanations. As documented in recent reviews, there exists a quickly growing literature with available methods. Yet, in the absence of widely available open-source implementations, the decision in favor of certain models is primarily based on what is readily available. Going forward, to guarantee meaningful comparisons across explanation methods, we present CARLA (Counterfactual And Recourse LibrAry), a Python library for benchmarking counterfactual explanation methods across both different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We have open-sourced CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from other research groups and practitioners.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,962 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,358 citations
"Why Should I Trust You?"
2016 · 14,704 citations
Generative adversarial networks
2020 · 13,328 citations