This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Feasibility and safety of an AI-driven postoperative telephone system for cataract surgery follow-up in Canada
Citations: 0
Authors: 7
Year: 2026
Abstract
To evaluate the safety and effectiveness of an artificial intelligence (AI) telephone agent, Dora-CA1, in managing routine postoperative follow-up after cataract surgery. Prospective, single-centre study of patients undergoing routine cataract surgery at a high-volume surgical centre in Ontario, Canada. All participants received a Dora-CA1 call one day before their scheduled postoperative week 1 (POW1) visit. The AI system assessed symptoms, addressed patient questions, and provided care recommendations. At POW1, patients were invited to complete a survey on demographics, usability, acceptability, and travel burden. The primary outcome was agreement between Dora-CA1’s recommendations and optometric clinical evaluations. Secondary outcomes included patient satisfaction and usability. Of 326 patients, 272 answered the call, and 161 (59% of those who answered) completed the full interaction; the overall non-completion rate was 51%, mainly due to missed or unreceived calls. Sensitivity and specificity for detecting clinically relevant concerns (elevated IOP, iritis, or other findings requiring review) were 96% and 76.6%, respectively. Patients reported high acceptability of the system, giving a mean rating of 8.22 (SD 1.96) when asked, “between 1 and 10, with 10 being the highest, how likely would you be to recommend this automated system to a friend or colleague?” Despite generally positive usability scores, only 29.57% felt the system matched in-person care. Dora-CA1 demonstrated strong alignment with clinician assessments and was well received by patients who completed the call. However, real-world implementation challenges, including early hang-ups and onboarding barriers, limit its current reach. Further refinement and integration could allow Dora-CA1 to play a valuable role in cataract surgery follow-up, easing the burden on clinical teams. A key limitation is the exclusion of non-English-speaking participants, which limits the applicability of the system in multicultural populations.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations