This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Safety and acceptability of a natural-language AI assistant to deliver clinical follow-up to cataract surgery patients: Proposal for a pragmatic evaluation
Citations: 0 · Authors: 7 · Year: 2021
Abstract
Background: Due to an ageing population, the demand for many services is exceeding the capacity of the clinical workforce. As a result, staff are facing a crisis of burnout from being pressured to deliver high-volume workloads, which also drives increasing costs for providers. Artificial intelligence, in the form of conversational agents, presents a possible opportunity to enable efficiencies in the delivery of care.

Aims and Objectives: This study aims to evaluate the effectiveness, usability, and acceptability of Dora, an AI-enabled autonomous telemedicine call, for the detection of post-operative cataract surgery patients who require further assessment. The study's objectives are to: 1) establish Dora's efficacy in comparison to an expert clinician, 2) determine baseline sensitivity and specificity for the detection of true complications, 3) evaluate patient acceptability, 4) collect evidence for cost-effectiveness, and 5) capture data to support further development and evaluation.

Methods: Grounded in implementation science, this interdisciplinary study will be a mixed-methods phase one pilot establishing the inter-observer reliability, usability, and acceptability of the system. This will be done using the following scales and frameworks: the System Usability Scale; the Health Information Technology Interventions in Evidence-Based Medicine Evaluation Framework; the Telehealth Usability Questionnaire (TUQ); and the Non-adoption, Abandonment, and Challenges to the Scale-up, Spread, and Sustainability (NASSS) framework.

Results: The results will be included in the final evaluation paper, which we aim to publish in 2022. The study will last eighteen months: seven months of evaluation and intervention refinement, nine months of implementation and follow-up, and two months of post-evaluation analysis and write-up.

Conclusions: The project's key contributions will be evidence on the effectiveness of an artificial intelligence voice conversational agent, and on its associated usability and acceptability.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations