This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Navigating uncertainties of introducing artificial intelligence (AI) in healthcare: The role of a Norwegian network of professionals
Citations: 21
Authors: 1
Year: 2023
Abstract
Artificial Intelligence (AI) technologies are expected to solve pressing challenges in healthcare services worldwide. However, the current state of introducing AI is characterised by several issues complicating and delaying their deployments. These issues concern topics such as ethics, regulations, data access, human trust, and limited evidence of AI technologies in real-world clinical settings. They further encompass uncertainties, for instance, whether AI technologies will ensure equal and safe patient treatment or whether the AI results will be accurate and transparent enough to establish user trust. Collective efforts by actors from different backgrounds and affiliations are required to navigate this complex landscape. This article explores the role of such collective efforts by investigating how an informally established network of professionals works to enable AI in the Norwegian public healthcare services. The study takes a qualitative longitudinal case study approach and is based on data from non-participant observations of digital meetings and interviews. The data are analysed by drawing on perspectives and concepts from Science and Technology Studies (STS) dealing with innovation and sociotechnical change, where collective efforts are conceptualised as actor mobilisation. The study finds that in the case of the ambiguous sociotechnical phenomenon of AI, some of the uncertainties related to the introduction of AI in healthcare may be reduced as more and more deployments occur, while others will prevail or emerge. Mobilising spokespersons representing actors not yet a part of the discussions, such as AI users or researchers studying AI technologies in use, can enable a ‘stronger’ hybrid knowledge production. This hybrid knowledge is essential to identify, mitigate and monitor existing and emerging uncertainties, thereby ensuring sustainable AI deployments.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations