This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Trust in AI Is a “Fluid Process”: Building Trust of AI Through Clinicians’ Needs in the BreastScreen Victoria Program—A Qualitative Study
Citations: 2
Authors: 6
Year: 2025
Abstract
Research on trust in healthcare AI has grown significantly over the last five years, underscoring its vital role in AI adoption within healthcare services. While the multi-dimensional nature of trust in AI is well documented, the literature lacks an integrative framework for fully understanding its dynamics. This study explores clinicians' perceptions of using AI in breast screening, focusing on the evolving nature of trust in AI within a complex clinical environment. Through thematic analysis of focus groups and interviews with 27 clinicians from the population-based BreastScreen program in Victoria, Australia, we highlight that trust in healthcare AI is fluid and multi-layered. Clinicians considered the broader care context when evaluating the potential of AI in their clinical practice. Their conflicting views coexisted: they saw "AI as an opportunity" to improve service delivery and client experiences while also recognizing "uncertainties" surrounding its use. Optimism about AI, framed as opportunity, was tempered by skepticism stemming from factors such as distrust of AI's performance, uncertainty regarding its role in clinical practice, personal experiences with AI, and organizational barriers. Ethical, legal, and regulatory considerations also significantly influenced trust. We draw on the Trust and Acceptance of Artificial Intelligence Technology framework developed by Stevens and Stetson (2023) to interpret the paradoxical combination of optimism and skepticism observed among our participants. We argue that trust in AI is not a fixed attribute but a dynamic process, shaped by the interplay of technology-related, human-related, and context-related factors. Our findings have practical implications for AI adoption in healthcare.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations