This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Utility of an LLM-powered experts-in-the-loop chatbot for pre- and post-operative care of cataract surgery patients
Citations: 0
Authors: 8
Year: 2025
Abstract
Purpose
To evaluate the utility of <i>CataractBot</i>, an LLM (Large Language Model)-powered chatbot that provides doctor-verified answers to patient questions about cataract surgery. We examine its use by both end-users (patients and attendants) and medical experts.

Methods
A 24-week study was conducted to evaluate <i>CataractBot</i> among patients, their attendants, doctors, and patient coordinators. The bot responded instantly to questions by querying a knowledge base curated by medical professionals. Each response was asynchronously verified by an ophthalmologist (for medical questions) or a patient coordinator (for logistical questions), and their edits contributed to updating the knowledge base, thereby minimizing future expert intervention. A mixed-methods analysis was conducted on interaction logs, including patient and attendant questions, chatbot answers, and expert verifications.

Results
A total of 318 patients and attendants sent 1,992 messages, and LLM-generated answers were verified by five doctors and two coordinators. Significantly more questions were asked pre-surgery than post-surgery (p < 0.001). Participants asked significantly more medical than logistical questions (t₃₀₉ = 7.3, p < 0.001). Doctors rated 84.5% of <i>CataractBot</i>'s answers to medical questions as accurate and complete. Their edits, which mainly involved adding information, increased the acceptance of the bot's answers by 19.0% over time.

Conclusion
<i>CataractBot</i> was predominantly used to address medical questions. It incorporated expert corrections to improve its answers and reduce the experts' bot-related workload over time. This study highlights the potential of LLM-powered chatbots to support patient-provider communication in ophthalmology.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations