This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
A Human-Centered Approach to Interpretable Machine Learning in Clinical Decision Support Systems
0
Citations
6
Authors
2025
Year
Abstract
Interpretable machine learning (ML) paired with clinical decision support systems (CDSS) is transforming healthcare by enhancing transparency, personalization, and trust. This survey of 25 recent articles shows that explainable artificial intelligence (XAI) is essential for improving clinician adoption and patient outcomes across clinical settings such as oncology, emergency medicine, and mental health. The reviewed studies argue that researchers need to design AI in a more human-centered manner, one that promotes cooperation between clinicians and algorithms and makes data use transparent, so that ML tools align with ethical, fair, and realistic principles. Despite this progress, challenges remain, including balancing model accuracy against interpretability, addressing bias, and integrating these tools into existing clinical workflows. New approaches are emerging, such as leveraging electronic health records, engaging end users in the design process, and adopting novel forms of explanation, including data-centric and proximity-informed explanations.
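To make the idea of a proximity-informed explanation concrete, the sketch below illustrates one common pattern: justifying a prediction by retrieving the most similar historical cases and showing their outcomes. This is a generic illustration, not the method of any paper in the survey; the patient records and feature values are entirely hypothetical.

```python
import math

def proximity_explanation(query, cases, k=2):
    """Return the k historical cases nearest to the query patient.

    A proximity-informed explanation of the form: "the model flags this
    patient because these similar past patients had this outcome."
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    ranked = sorted(cases, key=lambda c: dist(query, c["features"]))
    return ranked[:k]

# Hypothetical patient records: features are (normalized age, normalized lab value)
cases = [
    {"id": "A", "features": (0.20, 0.90), "outcome": "high risk"},
    {"id": "B", "features": (0.80, 0.10), "outcome": "low risk"},
    {"id": "C", "features": (0.25, 0.85), "outcome": "high risk"},
]

neighbors = proximity_explanation((0.22, 0.88), cases, k=2)
print([c["id"] for c in neighbors])  # → ['A', 'C']
```

A clinician-facing CDSS would present the retrieved cases (and their outcomes) alongside the prediction, letting the user judge whether the analogy to past patients is clinically plausible.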
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,373 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,259 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,125 citations