OpenAlex · Updated hourly · Last updated: 12.03.2026, 05:41

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Enhancing Governance and Explainability in Large Language Models: A Framework for Interpretability-Driven Decision-Making

2025 · 0 citations
Open full text at publisher

Citations: 0
Authors: 6
Year: 2025

Abstract

Large Language Models (LLMs) have demonstrated high performance in text classification, particularly in specialized domains such as healthcare. However, their opacity raises concerns regarding interpretability, reliability, and governance. This paper explores integrating explainable artificial intelligence techniques with structured review to improve transparency and decision-making in LLM-based classification systems. We propose a framework that combines saliency-based methods, such as LIME and SHAP, with expert-in-the-loop validation to refine predictions and enhance interpretability. Through experiments on medical text classification, we study the effectiveness of integrating explainability with governance mechanisms. Results indicate that explainability-guided refinement improves classification accuracy while ensuring more interpretable and accountable outputs. This study provides insights into balancing performance and interpretability in high-stakes applications, supporting the adoption of LLMs in environments where transparency is critical.
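The saliency-based methods the abstract mentions (LIME, SHAP) attribute a prediction to input features by perturbing them and observing the change in model output. The following is a minimal sketch of that leave-one-out intuition on a toy text classifier; the keyword weights, function names, and the "pneumonia risk" framing are illustrative assumptions, not the paper's actual model or method.

```python
# Minimal sketch of leave-one-out token saliency, the intuition behind
# perturbation-based explainers such as LIME. The classifier below is a
# hypothetical stand-in: it scores text by summing fixed keyword weights.

KEYWORDS = {"fever": 0.6, "cough": 0.5, "fatigue": 0.3}  # assumed weights

def classify(tokens):
    """Toy risk score: sum of the weights of recognized keywords."""
    return sum(KEYWORDS.get(t, 0.0) for t in tokens)

def token_saliency(text):
    """Score each token by how much removing it lowers the prediction."""
    tokens = text.lower().split()
    base = classify(tokens)
    saliency = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]  # drop one token
        saliency[tok] = base - classify(perturbed)
    return saliency

scores = token_saliency("Patient reports fever and cough")
# Tokens with nonzero saliency are the ones driving the prediction;
# an expert reviewer can then check whether those drivers are clinically
# plausible before accepting or refining the classification.
```

Real explainers like LIME fit a local surrogate model over many random perturbations rather than a single leave-one-out pass, and SHAP averages contributions over feature coalitions; this sketch only illustrates the shared perturb-and-attribute principle that the proposed framework pairs with expert-in-the-loop validation.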

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Computational and Text Analysis Methods