This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Towards Transparent Deep Learning in Medicine: Feature Contribution and Attention Mechanism-Based Explainability
2 citations · 6 authors · 2025
Abstract
Artificial intelligence (AI) techniques are increasingly employed in mental health for remote patient monitoring, enabling the prediction of vital signs and classification of physical activities, which are essential for proactive patient care. However, the black-box nature of deep learning models limits their explainability, a critical factor in clinical applications where clinicians require transparent, reliable decision-making tools to support clinical interventions. In non-invasive monitoring, sensor data and clinical attributes serve as input features for predicting patient health outcomes. Understanding how these features contribute to model predictions is crucial for informed clinical decisions in a mental health context. This study proposes a novel quantitative explainability framework (QEF) that provides both post-hoc and intrinsic explainability for regression and classification tasks within deep learning models. The framework combines Shapley values to elucidate feature contributions and attention mechanisms to enhance interpretability. Two deep learning models—artificial neural networks (ANN) and attention-based bidirectional long short-term memory (BiLSTM)—were applied to predict heart rate and classify physical activities using sensor data, achieving state-of-the-art performance. Attention weights and Shapley values were computed for each input feature to provide global and local explanations, offering insights into the models’ behavior and feature importance. The QEF framework was evaluated using the PPG-DaLiA dataset for heart rate prediction and the MHEALTH dataset for physical activity classification. To address the computational complexity of Shapley value calculations, a Monte Carlo approximation method was implemented, reducing time and resource demands. This study introduces the QEF framework as a practical solution to balance model performance with explainability, providing clinicians with interpretable insights from deep learning models in the field of psychiatry and mental health.
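The abstract notes that exact Shapley value computation is exponential in the number of features and that the authors use a Monte Carlo approximation to cut time and resource demands. The paper's implementation is not shown on this page; the snippet below is a minimal sketch of the general permutation-sampling estimator in Python/NumPy, where `model_fn`, `baseline`, and `n_samples` are illustrative names, not taken from the paper.

```python
import numpy as np

def monte_carlo_shapley(model_fn, x, baseline, n_samples=200, rng=None):
    """Estimate Shapley values for one instance by permutation sampling.

    model_fn : callable taking a (d,) feature vector, returning a scalar
    x        : (d,) instance to explain
    baseline : (d,) reference vector standing in for "absent" features
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)          # random feature ordering
        z = baseline.copy()
        prev = model_fn(z)
        for j in perm:
            z[j] = x[j]                    # reveal feature j
            cur = model_fn(z)
            phi[j] += cur - prev           # marginal contribution of j
            prev = cur
    return phi / n_samples                 # average over sampled orderings

# Sanity check with a toy linear model, where the exact Shapley values
# are w_j * (x_j - baseline_j):
w = np.array([1.0, -2.0, 0.5])
x = np.array([3.0, 1.0, 2.0])
print(monte_carlo_shapley(lambda v: float(w @ v), x, np.zeros(3)))
# ~ [3.0, -2.0, 1.0]
```

For a linear model each estimate equals the exact value on every sampled ordering, which makes the sketch easy to sanity-check; for a deep network the average converges as `n_samples` grows.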
Related Works
"Why Should I Trust You?"
2016 · 14.156 Zit.
A Comprehensive Survey on Graph Neural Networks
2020 · 8.543 Zit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8.051 Zit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7.416 Zit.
Analysis of Survival Data.
1985 · 4.379 Zit.