This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Provenance-Aware Explainable Digital Twin for Personalized Health Management
Citations: 0
Authors: 2
Year: 2025
Abstract
Advances in data-driven health analytics now enable AI to support personalized prediction, tracking, and decision-making. However, many models remain difficult to interpret in clinical settings. To address these limitations, this work presents the Provenance-Aware and Explainable Digital Twin (PA-XDT) framework, which integrates a digital twin, explainable AI techniques, and transparent provenance tracking for patient-centered health management. PA-XDT uses a compact LSTM-based twin trained on short temporal sequences to model near-term physiological dynamics and quantify uncertainty. In the implemented system, this twin works alongside a Gradient Boosting classifier that provides stable risk predictions, supported by global and local SHAP analyses and twin-validated counterfactual checks. A lightweight provenance layer records hashed inputs, outputs, and explanation metadata, enabling verifiable audit trails. Experiments on a gallstone risk dataset show that the combined pipeline improves physiological coherence and maintains strong predictive performance.
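The provenance layer described in the abstract records hashed inputs, outputs, and explanation metadata to form an audit trail. A minimal sketch of such a record, assuming canonical JSON serialization and SHA-256 hashing (the function and field names here are illustrative, not taken from the paper):

```python
import hashlib
import json
import time


def record_provenance(inputs: dict, outputs: dict, explanation: dict) -> dict:
    """Hypothetical sketch of a lightweight provenance record.

    Hashes the model inputs and outputs and attaches explanation
    metadata, so a stored record can later be checked against the
    original payloads without storing the raw patient data itself.
    """

    def digest(obj: dict) -> str:
        # Canonical JSON (sorted keys) so identical payloads always
        # produce identical hashes, enabling verification.
        payload = json.dumps(obj, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    return {
        "input_hash": digest(inputs),
        "output_hash": digest(outputs),
        "explanation_meta": explanation,  # e.g. top SHAP features
        "timestamp": time.time(),
    }


# Example: verify a stored record by re-hashing the original inputs
record = record_provenance({"hr": 72, "bmi": 24.1}, {"risk": 0.18},
                           {"top_feature": "bmi"})
assert record["input_hash"] == record_provenance(
    {"hr": 72, "bmi": 24.1}, {"risk": 0.18}, {})["input_hash"]
```

Because only hashes are stored, the trail is verifiable (re-hashing the same payload reproduces the digest) without exposing the underlying clinical values.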
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,303 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,155 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,555 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,453 citations