This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision
Citations: 20
Authors: 15
Year: 2023
Abstract
Despite the renewed interest in Artificial Intelligence-based clinical decision support systems (AI-CDS), there is still a lack of empirical evidence supporting their effectiveness. This underscores the need for rigorous and continuous evaluation and monitoring of processes and outcomes associated with the introduction of health information technology. We illustrate how the emergence of AI-CDS has helped to bring to the fore the critical importance of evaluation principles and action regarding all health information technology applications, as these hitherto have received limited attention. Key aspects include assessment of design, implementation and adoption contexts; ensuring systems support and optimise human performance (which in turn requires understanding clinical and system logics); and ensuring that design of systems prioritises ethics, equity, effectiveness, and outcomes. Going forward, information technology strategy, implementation and assessment need to actively incorporate these dimensions. International policy makers, regulators and strategic decision makers in implementing organisations therefore need to be cognisant of these aspects and incorporate them in decision-making and in prioritising investment. In particular, the emphasis needs to be on stronger and more evidence-based evaluation surrounding system limitations and risks as well as optimisation of outcomes, whilst ensuring learning and contextual review. Otherwise, there is a risk that applications will be sub-optimally embodied in health systems with unintended consequences and without yielding intended benefits.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- University of Edinburgh (GB)
- Keele University (GB)
- Macquarie University (AU)
- University of Wales Trinity Saint David (GB)
- Aalborg University (DK)
- The University of Texas Health Science Center at San Antonio (US)
- St. Luke's International University (JP)
- University of Utah (US)
- UMIT - Private Universität für Gesundheitswissenschaften, Medizinische Informatik und Technik (AT)
- Amsterdam University Medical Centers (NL)
- University of Amsterdam (NL)
- Tampere University (FI)