This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI-XAI-LLM: Interpretable Insights into Stroke Risk Prediction
Citations: 0
Authors: 5
Year: 2026
Abstract
Early identification of individuals at risk of stroke is critical for implementing timely preventive measures. Although machine learning models have demonstrated potential in predicting stroke risk, their clinical adoption is limited by a lack of transparency. Explainable AI (XAI) techniques provide insights into predictions, but their technical metrics are often complex and difficult for clinicians to interpret. This research proposes an integrated AI-XAI-LLM pipeline that generates accurate, patient-specific stroke predictions and provides post-hoc explainability to highlight the key factors influencing each prediction, which are then translated into clear, clinician-friendly narratives using a prompt-engineered large language model. Evaluations show that this approach enhances prediction transparency and interpretability, fostering trust and encouraging the use of predictions to support decision-making.
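The three-stage pipeline in the abstract (prediction, post-hoc attribution, LLM narration) can be sketched in miniature. Everything below is an illustrative assumption, not the authors' actual model: the feature names, weights, the linear attribution rule, and the prompt wording are all hypothetical stand-ins for the paper's trained classifier, XAI method, and prompt template.

```python
import math

# Hypothetical stand-in for the paper's pipeline; all weights and feature
# names are illustrative assumptions, not the authors' trained model.

# Stage 1 (AI): a toy logistic-regression stroke-risk model with fixed weights.
WEIGHTS = {"age": 0.04, "hypertension": 0.9, "avg_glucose_level": 0.01, "bmi": 0.02}
BIAS = -5.0
MEANS = {"age": 45.0, "hypertension": 0.1, "avg_glucose_level": 100.0, "bmi": 26.0}

def predict_risk(patient):
    """Return the model's predicted stroke probability for one patient."""
    z = BIAS + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Stage 2 (XAI): crude per-feature attributions for a linear model,
# weight * (value - population mean), ranked by absolute contribution.
# A real pipeline would use a post-hoc explainer such as SHAP or LIME here.
def explain(patient):
    contribs = {f: WEIGHTS[f] * (patient[f] - MEANS[f]) for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Stage 3 (LLM): assemble a prompt asking a language model to translate the
# technical attributions into a clinician-friendly narrative.
def build_prompt(risk, factors, top_k=3):
    lines = [f"- {name}: contribution {value:+.2f}" for name, value in factors[:top_k]]
    return (
        f"Predicted stroke risk: {risk:.1%}.\n"
        "Key contributing factors (model attributions):\n"
        + "\n".join(lines)
        + "\nExplain these findings to a clinician in plain language."
    )

patient = {"age": 72, "hypertension": 1, "avg_glucose_level": 180, "bmi": 31}
risk = predict_risk(patient)
prompt = build_prompt(risk, explain(patient))
```

The resulting `prompt` string would then be sent to the large language model, whose response is the clinician-facing narrative; the prediction and the attributions themselves come from the upstream stages, so the LLM only rephrases rather than diagnoses.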
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,682 citations
Generative Adversarial Nets
2014 · 19,895 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,318 citations
"Why Should I Trust You?"
2016 · 14,528 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,191 citations