OpenAlex · Updated hourly · Last updated: 17.03.2026, 23:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Which LIME Should I Trust? Concepts, Challenges, and Solutions

2025 · 5 citations · 4 authors · Communications in Computer and Information Science · Open Access

Abstract

As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models. LIME (Local Interpretable Model-agnostic Explanations) is among the most prominent model-agnostic approaches, generating explanations by approximating the behavior of black-box models around specific instances. Despite its popularity, LIME faces challenges related to fidelity, stability, and applicability to domain-specific problems. Numerous adaptations and enhancements have been proposed to address these issues, but the growing number of developments can be overwhelming, complicating efforts to navigate LIME-related research. To the best of our knowledge, this is the first survey to comprehensively explore and collect LIME's foundational concepts and known limitations. We categorize and compare its various enhancements, offering a structured taxonomy based on intermediate steps and key issues. Our analysis provides a holistic overview of advancements in LIME, guiding future research and helping practitioners identify suitable approaches. Additionally, we provide a continuously updated interactive website, *Which LIME Should I Trust?*, offering a concise and accessible overview of the survey.
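The local-surrogate idea the abstract refers to can be sketched in a few lines: sample perturbations around an instance, query the black box, weight the samples by proximity, and fit a weighted linear model whose coefficients serve as local feature importances. This is a minimal illustration of the general technique only, not the authors' or the `lime` library's implementation; the toy model, kernel width, and sample count are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in opaque model (hypothetical): nonlinear in both features.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(x0, n_samples=500, width=0.5):
    # 1) Sample perturbations around the instance x0.
    Z = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    # 2) Query the black-box model on the perturbed samples.
    y = black_box(Z)
    # 3) Weight samples by proximity to x0 (exponential kernel).
    d2 = ((Z - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    # 4) Fit a weighted linear surrogate via weighted least squares
    #    (features plus an intercept column).
    A = np.hstack([Z, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef = np.linalg.solve(Aw.T @ A, Aw.T @ y)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x0 = np.array([0.0, 1.0])
weights = local_surrogate(x0)
```

At `x0 = (0, 1)` the toy model's local gradient is roughly `(cos 0, 2·1) = (1, 2)`, so the surrogate's coefficients should land near those values; the survey's fidelity and stability concerns correspond to how much such coefficients drift with the sampling and kernel choices.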
