This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Understanding the integration of artificial intelligence in health systems through the NASSS framework: A qualitative study in a leading Canadian academic centre
Citations: 3
Authors: 6
Year: 2023
Abstract
<bold>Background</bold> Artificial intelligence (AI) technologies are expected to “revolutionise” healthcare. However, despite their promises, their integration within healthcare organisations and systems remains limited. The objective of this study is to explore and understand the systemic challenges and implications of their integration in a leading Canadian academic hospital.
<bold>Methods</bold> Semi-structured interviews were conducted with 29 stakeholders concerned by the integration of a large set of AI technologies within the organisation (e.g., managers, clinicians, researchers, patients, technology providers). Data were collected and analysed using the Non-Adoption, Abandonment, Scale-up, Spread, Sustainability (NASSS) framework.
<bold>Results</bold> Among enabling factors and conditions, our findings highlight: the reforms aiming to improve the effectiveness and efficiency of healthcare in Quebec; a supportive organisational culture and leadership leading to a coherent organisational innovation narrative; mutual trust and transparent communication between senior management and frontline teams; the presence of champions, translators and boundary spanners for AI able to build bridges and trust; and the capacity to attract technical and clinical talents and expertise.
Constraints and barriers include: contrasting definitions of the value of AI technologies and ways to measure such value; lack of real-life and context-based evidence; varying patients’ digital and health literacy capacities; misalignments between organisational dynamics, clinical and administrative processes, infrastructures, and AI technologies; lack of funding mechanisms covering the implementation, adaptation, and expertise required; challenges arising from practice change, new expertise development, and professional identities; lack of official professional, reimbursement, and insurance guidelines; lack of pre- and post-market approval legal and governance frameworks; diversity of the business and financing models for AI technologies; and misalignments between investors’ priorities and the needs and expectations of healthcare organisations and systems.
<bold>Conclusion</bold> Thanks to the multidimensional NASSS framework, this study provides original insights and a detailed learning base for analysing AI technologies in healthcare from a thorough socio-technical perspective. Our findings highlight the importance of considering the complexity characterising healthcare organisations and systems in current efforts to introduce AI technologies within clinical routines. This study adds to the existing literature and can inform decision-making towards a judicious, responsible, and sustainable integration of these technologies in healthcare organisations and systems.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations