This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis
63 citations · 4 authors · 2024
Abstract
This study conducts an in-depth review and Bowtie analysis of automation bias in AI-driven Clinical Decision Support Systems (CDSSs) within healthcare settings. Automation bias, the tendency of human operators to over-rely on automated systems, poses a critical challenge to implementing AI-driven technologies. To address this challenge, Bowtie analysis is employed to examine the causes and consequences of automation bias arising from over-reliance on AI-driven systems in healthcare. Furthermore, this study proposes preventive measures to address automation bias during the design phase of AI model development for CDSSs, along with effective mitigation strategies post-deployment. The findings highlight the imperative role of a systems approach, integrating technological advancements, regulatory frameworks, and collaboration between AI developers and healthcare practitioners to diminish automation bias in AI-driven CDSSs. We further identify future research directions, proposing quantitative evaluations of the mitigation and preventive measures.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations