This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable artificial intelligence (XAI) in medical decision systems (MDSSs): healthcare systems perspective
2
Citations
7
Authors
2022
Year
Abstract
The healthcare sector has shown strong interest in machine learning (ML) and artificial intelligence (AI). Nevertheless, deploying AI applications in clinical and scientific contexts is difficult due to explainability issues. Explainable AI (XAI) has been studied as a potential remedy for the shortcomings of current AI methods. In contrast to opaque AI techniques such as deep learning, ML combined with XAI may be capable of both explaining its models and supporting judgments. Medical decision support systems (MDSS) are computer applications that influence the decisions doctors make about a specific patient at a specific moment. MDSS have played a crucial role in health systems' efforts to improve patient safety and the standard of care, particularly for non-communicable diseases. They have, moreover, been a crucial prerequisite for effectively utilizing electronic health record (EHR) data. This chapter offers a broad overview of the application of XAI in MDSS for various infectious diseases, summarizes recent research on the use and effects of MDSS in healthcare with regard to non-communicable diseases, and offers suggestions for users to keep in mind as these systems are incorporated into healthcare systems and utilized outside of research and development contexts.
Related Works
"Why Should I Trust You?"
2016 · 14,204 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,582 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,382 citations