OpenAlex · Updated hourly · Last updated: 20.03.2026, 20:57

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainability of Encoder-Based LLMs in Medical Text Classification

2025 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

Recent advancements in Natural Language Processing (NLP) and large language models (LLMs) have revolutionized clinical and healthcare applications. Given the domain's need for precision, transparency, and trust, interpreting black-box models is essential. This study systematically evaluates the interpretability of two state-of-the-art transformer models, BERT and RoBERTa, for medical text classification. The paper applies model-agnostic explainable AI techniques, LIME and SHAP, to examine global and local decision behaviors on real-world medical transcripts. While both models achieve strong performance, their decision rationales often diverge from clinically relevant features, especially in misclassifications. Our findings show how LIME and SHAP can uncover model biases, revealing that the models underuse domain-specific terms and focus instead on procedural ones. The results emphasize the need for explainability in medical AI and offer practical insights for building interpretable clinical decision support systems.
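The abstract does not include code, but the LIME technique it applies can be sketched from scratch: perturb the input text by dropping words, query the classifier on each perturbation, and fit a weighted linear surrogate whose coefficients serve as per-word importances. A minimal sketch, assuming a hypothetical stand-in classifier (`toy_classifier` and `lime_text_explain` are illustrative names, not from the paper, which uses BERT/RoBERTa pipelines):

```python
import numpy as np
from sklearn.linear_model import Ridge

def toy_classifier(texts):
    # Hypothetical stand-in for a BERT/RoBERTa pipeline: the predicted
    # probability of class "Surgery" rises when a procedural term appears.
    return np.array([0.9 if "biopsy" in t.split() else 0.1 for t in texts])

def lime_text_explain(text, predict_fn, n_samples=500, seed=0):
    """Local surrogate explanation in the spirit of LIME for text."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks over words: 1 keeps the word, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # include the unperturbed text itself
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    preds = predict_fn(perturbed)
    # Weight samples by proximity to the original (fraction of words kept).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    # Surrogate coefficients act as local word importances.
    return dict(zip(words, surrogate.coef_))

expl = lime_text_explain("patient underwent liver biopsy today", toy_classifier)
top = max(expl, key=expl.get)  # word with the largest positive importance
```

Here `top` recovers the single word the toy classifier actually depends on; on a real transformer pipeline one would pass its probability function as `predict_fn`, which is the kind of analysis the paper performs with the LIME and SHAP libraries.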

Similar works

Authors

Institutions

Topics

Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)