This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Developing an Explainable AI Model for Predicting Patient Readmissions in Hospitals
Citations: 2 · Authors: 6 · Year: 2023
Abstract
The objective of this study is to develop an AI model that can correctly identify which patients are most likely to require hospital readmission within a predetermined window of time after discharge. Because readmissions are linked to higher healthcare costs and poorer patient outcomes, this is a crucial problem in healthcare. The model must nonetheless also be explainable, meaning that healthcare professionals can understand the rationale behind its predictions. This is essential for establishing the model's credibility and ensuring it is used properly. To this end, the study may employ machine learning methods known for their interpretability, such as decision trees or random forests. Additionally, the study could generate feature importance plots or partial dependence plots to visualize the model's decision-making process. Overall, by improving patient outcomes and fostering transparency and trust in the use of AI, this research topic has the potential to have a significant impact on healthcare.
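The abstract's suggested approach, a random forest whose predictions are explained via feature importances, can be sketched as follows. This is an illustrative example only, not the authors' code: the feature names, the synthetic data, and the label-generating rule are all assumptions made for demonstration, using scikit-learn.

```python
# Illustrative sketch (not the study's actual model): train a random forest
# on synthetic patient data and inspect per-feature importances.
# All features and the label rule below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical patient features (assumed for illustration)
feature_names = ["age", "length_of_stay", "prior_admissions", "num_medications"]
X = np.column_stack([
    rng.integers(18, 95, n),   # age in years
    rng.integers(1, 30, n),    # length of stay in days
    rng.poisson(1.0, n),       # prior admissions in the past year
    rng.integers(0, 20, n),    # number of discharge medications
]).astype(float)

# Synthetic label: readmission risk rises with prior admissions and stay length
logit = 0.8 * X[:, 2] + 0.05 * X[:, 1] - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Feature importances give a first, global view of what drives predictions;
# partial dependence plots (sklearn.inspection.PartialDependenceDisplay)
# could then show how risk varies with each individual feature.
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In a real deployment the synthetic data would be replaced by discharge records, and impurity-based importances would typically be cross-checked with permutation importance, since the former can be biased toward high-cardinality features.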
Similar Works
"Why Should I Trust You?"
2016 · 14,210 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,586 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,382 citations