OpenAlex · Updated hourly · Last updated: 22 Mar 2026, 22:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Disentangling LLM Predictions: A Framework for Transparent Decision-Making in NLP

2025 · 0 citations
Open full text at publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

Large Language Models (LLMs) excel in Natural Language Processing (NLP) but face challenges in trust and transparency, limiting their use in critical domains like healthcare and law. We present a new framework, Disentangled LLM Analysis (DLA), that increases the interpretability of LLMs without compromising accuracy. DLA combines attention-mechanism analysis with SHAP-based feature attribution to break model predictions down into clear, token-level decision paths that show how input features influence outputs. Across benchmark datasets covering sentiment analysis, question answering, text classification, and natural language inference, DLA achieves interpretability scores 20–25% higher than baseline methods (0.78 on average) and an 81% attribution overlap with human judgments, while matching the predictive performance of state-of-the-art LLMs (e.g., 88.3% accuracy on IMDb). Compared with baselines such as LIME and SHAP, DLA balances granularity and efficiency and provides clear, actionable insights to practitioners. By clarifying model reasoning, DLA fosters trust and accountability, enabling responsible deployment in high-stakes applications such as medical diagnostics and legal document analysis.
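The abstract's core technical claim is that DLA fuses attention-mechanism analysis with SHAP-based feature attribution into token-level decision paths. Below is a minimal sketch of those two ingredients only, not the authors' implementation: the model checkpoint, label choice, and averaging scheme are illustrative assumptions, and the final fusion step that DLA would define is deliberately left out.

```python
# Sketch, NOT the paper's DLA implementation: computes the two raw signals
# the abstract names (SHAP token attributions and attention salience) for
# one sentence, using an assumed off-the-shelf sentiment checkpoint.
import torch
import shap
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
text = "The plot was thin, but the performances were outstanding."

# View 1: SHAP token-level attributions via shap's standard text explainer.
clf = pipeline("text-classification", model=MODEL, top_k=None)
explainer = shap.Explainer(clf)          # shap infers a Text masker from the pipeline
sv = explainer([text])
tokens = sv.data[0]                      # tokens as segmented by shap's masker
phi = sv[0, :, "POSITIVE"].values        # attribution toward the POSITIVE class

# View 2: attention salience from the same model.
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, output_attentions=True
)
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)
# out.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer.
# Average over layers and heads, then over query positions, giving the mean
# attention each token *receives* -- one crude salience score per token.
attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]  # (seq, seq)
salience = attn.mean(dim=0)                             # (seq,)
words = tok.convert_ids_to_tokens(enc["input_ids"][0])

print("SHAP attribution toward POSITIVE:")
for t, p in zip(tokens, phi):
    print(f"  {t!r:>18} {p:+.4f}")

print("Attention salience (mean attention received):")
for w, s in zip(words, salience):
    print(f"  {w!r:>18} {s.item():.4f}")

# Note: the two views use different token segmentations; aligning them and
# fusing the scores into a single token-level decision path is exactly the
# step a DLA-style framework would have to specify.
```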

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Topic Modeling