This is an overview page with metadata for this scientific work. The full article is available from the publisher.
An LLM chatbot to facilitate primary-to-specialist care transitions: a randomized controlled trial
Citations: 0
Authors: 17
Year: 2026
Abstract
Patient-facing large language models (LLMs) hold potential to streamline inefficient transitions from primary to specialist care. We developed the preassessment (PreA), an LLM chatbot co-designed with local stakeholders, to perform the general medical consultations normally conducted by primary care providers, including history-taking, preliminary diagnosis, and test ordering, and to generate referral reports for specialists. PreA was tested in a randomized controlled trial involving 111 specialists from 24 medical disciplines across two health centers, in which 2,069 patients (1,141 women; 928 men) were randomly assigned to use PreA independently (PreA-only), use it with staff support (PreA-human), or not use it (No-PreA) before specialist consultation. The trial met its primary end points: the PreA-only group showed significantly reduced physician consultation duration (28.7% reduction; 3.14 ± 2.25 min) compared to the No-PreA group (4.41 ± 2.77 min; P < 0.001), alongside significant improvements in physician-perceived care coordination (113.1% increase in mean score; 3.69 ± 0.90 versus 1.73 ± 0.95; P < 0.001) and patient-reported communication ease (16.0% increase in mean score; 3.99 ± 0.62 versus 3.44 ± 0.97; P < 0.001). Equivalent outcomes between the PreA-only and PreA-human groups confirmed PreA's capability for autonomous operation. Co-designed PreA outperformed the same model with additional fine-tuning on local dialogues across clinical decision-making domains. Co-design with local stakeholders, compared to passive local data collection, represents a more effective strategy for deploying LLMs to strengthen health systems and enhance patient-centered care in resource-limited settings. Chinese Clinical Trial Registry identifier: ChiCTR2400094159.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations
Authors
Institutions
- Chinese Academy of Medical Sciences & Peking Union Medical College (CN)
- Guilin Medical University (CN)
- Guangxi Academy of Special Crops (CN)
- Guangxi Zhuang Autonomous Region Department of Education (CN)
- Guangxi Zhuang Autonomous Region Health and Family Planning (CN)
- Peking University (CN)
- Beijing Normal University (CN)
- Harbin Institute of Technology (CN)
- East China Normal University (CN)
- Pingliang People's Hospital (CN)
- Tencent (China) (CN)
- Peking Union Medical College Hospital (CN)
- Kermanshah University of Medical Sciences (IR)
- Ministry of Education (RO)