This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Enhancing AI Clinical Decision Support Trust: Design Workshop Insights from General Practitioners
Citations: 3
Authors: 2
Year: 2025
Abstract
Artificial Intelligence (AI) predictive models are increasingly integrated into Clinical Decision Support Systems (CDSS). However, real-world implementation is lagging due to a lack of trust and acceptance by healthcare providers. To investigate factors related to trust and acceptance of AI-based CDSS, we conducted a workshop with General Practitioners to explore user, model, and organizational factors that could affect trust and recommendation acceptance. The workshop discussions revealed that while explainability is crucial for understanding AI decisions, current explainable AI (XAI) visualizations appeared to provide limited value to clinicians without technical backgrounds. Participants suggested that transparency in model training information, including data sources and elements, combined with clear alignment between model decisions and established clinical research findings and guidelines, would enhance their trust and acceptance of AI-based recommendations. These insights provide valuable direction for developing user interfaces that enhance clinician trust and acceptance of AI-based CDSS in clinical practice.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,929 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,356 citations
"Why Should I Trust You?"
2016 · 14,688 citations
Generative adversarial networks
2020 · 13,316 citations