This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A prospective clinical feasibility study of a conversational diagnostic AI in an ambulatory primary care clinic
Citations: 0
Authors: 47
Year: 2026
Abstract
Large language model (LLM)-based AI systems have shown promise for patient-facing diagnostic and management conversations in simulated settings. Translating these systems into clinical practice requires assessment in real-world workflows with rigorous safety oversight. We report a prospective, single-arm feasibility study of an LLM-based conversational AI, the Articulate Medical Intelligence Explorer (AMIE), conducting clinical history taking and presenting potential diagnoses for patients to discuss with their provider at urgent care appointments at a leading academic medical center. One hundred adult patients completed an AMIE text-chat interaction up to 5 days before their appointment. We sought to assess conversational safety and quality, patient and clinician experience, and clinical reasoning capabilities compared to primary care providers (PCPs). Human safety supervisors monitored all patient-AMIE interactions in real time and did not need to intervene to stop any consultations based on pre-defined criteria. Patients reported high satisfaction, and their attitudes towards AI improved after interacting with AMIE (p < 0.001). PCPs found AMIE's output useful, with a positive impact on preparedness. AMIE's differential diagnosis (DDx) included the final diagnosis, per chart review 8 weeks post-encounter, in 90% of cases, with 75% top-3 accuracy. Blinded assessment of AMIE and PCP DDx and management (Mx) plans suggested similar overall DDx and Mx plan quality, without significant differences for DDx (p = 0.6) or for appropriateness and safety of Mx (p = 0.1 and 1.0, respectively). PCPs outperformed AMIE in the practicality (p = 0.003) and cost effectiveness (p = 0.004) of Mx. While further research is needed, this study demonstrates the initial feasibility, safety, and user acceptance of conversational AI in a real-world setting, representing crucial steps towards clinical translation.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations
Authors
- Peter Brodeur
- Jacob M. Koshy
- Anil Palepu
- Khaled Saab
- Ava Homiar
- Roma Ruparel
- Charles Wu
- Ryutaro Tanno
- Joseph Xu
- Amy Wang
- David Stutz
- Hannah Ferrera
- David Barrett
- Lindsey Crowley
- Jihyeon Lee
- Spencer Rittner
- Ellery Wulczyn
- Selena K. Zhang
- Elahe Vedadi
- Christine G. Kohn
- K. Kulkarni
- Vinay B. Kadiyala
- Sara Mahdavi
- Wendy Du
- Jessica Williams
- David Feinbloom
- Renee Wong
- Tao Tu
- Petar Sirkovic
- Alessio Orlandi
- Christopher Semturs
- Yun Liu
- Juraj Gottweis
- Dale R. Webster
- Joëlle Barral
- Katherine Chou
- Pushmeet Kohli
- Avinatan Hassidim
- Yossi Matias
- James Manyika
- Rob Fields
- Jonathan Li
- Marc L. Cohen
- Vivek Natarajan
- Mike Schaekermann
- Alan Karthikesalingam
- Adam Rodman