This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Modeling the Dynamics of Trust in Digital Pathology Using Explainable AI
Citations: 0
Authors: 5
Year: 2025
Abstract
The adoption of artificial intelligence (AI) in medical diagnostics demands rigorous examination and a robust understanding of human trust dynamics in explainable artificial intelligence (XAI) frameworks. This research presents a quantitative model for evaluating the evolution of trust among pathologists based on their sequential experiences with AI-driven diagnostic recommendations and explanations. By monitoring and analysing interactions characterised by AI's false positives and false negatives, this study captures nuanced shifts in trust. An empirical evaluation using digital pathology scenarios reveals that initial trust levels remain stable, independent of AI accuracy, but subsequent interactions significantly adjust trust based on accumulated experience. Moreover, diagnostic performance improves notably when pathologists collaborate with XAI systems, underscoring the utility of such integrations. The study identifies limitations, including restricted sample sizes and homogeneous case diversity, advocating for future research involving broader participation and personalised trust modelling to encapsulate variability among pathologists.
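The abstract describes trust evolving sequentially with each correct or erroneous AI recommendation. The paper's actual model is not reproduced on this page; the following is a minimal illustrative sketch of one common formulation of such dynamics (asymmetric exponential smoothing, where errors erode trust faster than successes build it), with all function names and parameter values being hypothetical:

```python
def update_trust(trust, outcome_correct, lr_gain=0.1, lr_loss=0.2):
    """One sequential trust update after an AI recommendation.

    trust           -- current trust level in [0, 1]
    outcome_correct -- True if the AI diagnosis was correct
    lr_gain/lr_loss -- hypothetical learning rates; errors (false
                       positives/negatives) are weighted more heavily,
    a common assumption in trust-dynamics models.
    """
    target = 1.0 if outcome_correct else 0.0
    lr = lr_gain if outcome_correct else lr_loss
    return trust + lr * (target - trust)

# Simulate a pathologist's trust over a sequence of AI outcomes,
# starting from a neutral initial level (per the abstract, initial
# trust is independent of AI accuracy).
trust = 0.5
for correct in [True, True, False, True, False]:
    trust = update_trust(trust, correct)
```

This sketch only illustrates the idea of experience-dependent trust adjustment; the paper's quantitative model may differ in form and parameters.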
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations