This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainability of Encoder-Based LLMs in Medical Text Classification
Citations: 0
Authors: 2
Year: 2025
Abstract
Recent advancements in Natural Language Processing (NLP) and large language models (LLMs) have revolutionized clinical and healthcare applications. Given the domain's need for precision, transparency, and trust, interpreting black-box models is essential. This study systematically evaluates the interpretability of two state-of-the-art transformer models, BERT and RoBERTa, for medical text classification. The paper applies the model-agnostic explainable AI techniques LIME and SHAP to examine global and local decision behaviors on real-world medical transcripts. While both models achieve strong performance, their decision rationales often diverge from clinically relevant features, especially in misclassifications. Our findings show how LIME and SHAP can uncover model biases, highlight underused domain-specific terms, and reveal an overreliance on procedural terms. The results emphasize the need for explainability in medical AI and offer practical insights for building interpretable clinical decision support systems.
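The core idea behind LIME, as applied in the abstract, can be illustrated with a minimal from-scratch sketch: perturb a clinical sentence by dropping words, query the classifier on the perturbed variants, and fit a locally weighted linear surrogate whose coefficients approximate each word's contribution. The `toy_classifier` below is a hypothetical stand-in for a fine-tuned BERT or RoBERTa model (the paper's actual pipeline is not reproduced here), and all names and weights are illustrative assumptions.

```python
import numpy as np

def toy_classifier(texts):
    # Hypothetical stand-in for a fine-tuned BERT/RoBERTa classifier:
    # predicted probability of a clinical label rises with keyword presence.
    keywords = {"chest": 0.4, "pain": 0.3, "ecg": 0.3}
    probs = []
    for t in texts:
        score = sum(w for k, w in keywords.items() if k in t.lower().split())
        probs.append(min(score, 1.0))
    return np.array(probs)

def lime_style_explanation(text, predict_fn, n_samples=500, seed=0):
    """Perturb the input by dropping words, then fit a weighted linear
    surrogate model; its coefficients approximate each word's local
    contribution to the prediction (the core idea behind LIME)."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Random binary masks: 1 keeps a word, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # always include the unperturbed instance
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    preds = predict_fn(perturbed)
    # Exponential kernel: weight samples by similarity to the original text.
    similarity = masks.mean(axis=1)
    weights = np.exp(-((1 - similarity) ** 2) / 0.25)
    # Weighted least squares on the masks; coefficients = word importances.
    X = np.hstack([masks, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(weights)
    coef = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ preds, rcond=None)[0]
    return dict(zip(words, coef[:-1]))

explanation = lime_style_explanation("patient reports chest pain", toy_classifier)
top_word = max(explanation, key=explanation.get)
```

Because the toy classifier is linear in word presence, the surrogate recovers the keyword weights exactly, with `chest` ranked as the most influential word; with a real transformer, the coefficients are only a local approximation, which is precisely why the paper inspects them against clinically relevant features.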
Related Works
"Why Should I Trust You?"
2016 · 14,227 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,601 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,387 citations