This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
Review of Human-Centered Explainable AI in Healthcare
Citations: 1 · Authors: 5 · Year: 2024
Abstract
With the development of Artificial Intelligence (AI), “black box” models have demonstrated significant capabilities that now approach, or even surpass, human performance. However, ensuring the explainability of AI is crucial for users to trust and understand its applications in their daily lives, particularly in high-risk scenarios such as healthcare. Although previous research has introduced numerous direct and post-hoc explainable AI methods, many of them adhere to a “one-size-fits-all” approach, disregarding the multidimensional understanding and trust requirements of diverse users in different contexts. In recent years, researchers worldwide have paid growing attention to human-centered explainable AI, which aims to provide explainable analyses of AI models based on the specific needs of users. This article reviews literature published over the last five years at top-tier global conferences in the field of human-computer interaction, with a specific emphasis on healthcare. It surveys existing human-centered explainable AI methods and systems used for computer-aided diagnosis, computer-aided treatment, and preventive disease warning. Based on this review, it identifies explainability needs from three perspectives: decision time constraints, user expertise levels, and diagnosis workflow processes. Additionally, the article presents four classic user persona types with respective examples and offers suggestions for designing explainable medical diagnostic systems that account for resource constraints, varying needs across different stakeholders, and integration with existing clinical workflows.
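As background for the post-hoc explanation methods the abstract refers to, the following minimal Python sketch illustrates one such technique, Grad-CAM (Selvaraju et al., 2017, listed under Related Work below). The model (torchvision ResNet-18), the choice of target layer, and the random input tensor are illustrative assumptions only and are not taken from the reviewed article.

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ResNet-18 stands in for any CNN-based diagnostic model (assumption).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

store = {}

def hook(_module, _inputs, output):
    # Keep the feature maps and attach a tensor hook to capture their gradients.
    store["act"] = output
    output.register_hook(lambda grad: store.update(grad=grad))

# Last convolutional stage of ResNet-18 (an assumption; any conv layer works).
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
scores = model(x)
scores[0, scores.argmax()].backward()    # gradient of the top-scoring class

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum the weighted maps, and keep only positive evidence (ReLU).
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]

The resulting heatmap highlights the image regions that most increased the predicted class score, which is the kind of saliency-based, post-hoc explanation whose fit to different users and clinical contexts the reviewed article examines.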
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations