This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
SRS121 - Analysing the efficacy of large language models in simulating a virtual sarcoma patient
Citations: 0
Authors: 4
Year: 2026
Abstract
Introduction
Large language models (LLMs), a form of artificial intelligence used to process and generate human-like text, have shown promise in simulating virtual patients (VPs), enhancing medical education through realistic dialogue. While their application has expanded across various specialties, their utility in sarcoma care has not been explored. This study aims to evaluate the efficacy of LLMs in creating sarcoma VPs that mirror the lived experiences of real patients.

Methods
Three LLMs (ChatGPT Plus, Copilot Pro, and Mistral Pro) were prompted to simulate VPs based on three distinct sarcoma cases. Data were drawn from verified patient stories and forums. Each LLM was asked a standard set of questions about patient experience, and their responses were analysed qualitatively. Additionally, validated outcome measures (MSTS and TESS) were used to compare LLM-generated responses to published patient-reported outcomes.

Results
The LLMs generated emotionally rich and varied VP dialogues. ChatGPT excelled in emotional depth, Copilot in interactivity, and Mistral in clarity and conciseness. All models produced MSTS and TESS scores that closely aligned with published data, indicating good validity. However, TESS scores tended to be slightly lower than real-world values.

Discussion
LLMs can realistically replicate sarcoma patient experiences, offering a valuable tool for teaching empathy and communication in medical training. Given the rarity and psychological burden of sarcomas, VPs can help bridge educational gaps and prepare trainees for complex patient interactions. This model is scalable for broader educational use and could support the development of LLM-powered training platforms. Further validation in educational settings is recommended.
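The Methods section describes a simple protocol: build a virtual-patient persona from a verified case, put a fixed question set to each model, and collect the answers for qualitative analysis. The sketch below illustrates that workflow in Python under stated assumptions: the function and question names (`build_vp_prompt`, `QUESTIONS`, `transcript`) are illustrative, not taken from the paper, and a stub stands in for the actual ChatGPT, Copilot, or Mistral calls.

```python
# Hypothetical sketch of the study's prompting protocol. A case summary is
# turned into a role-play system prompt, then a fixed question set is run
# through a model callable. All names here are illustrative assumptions.

QUESTIONS = [
    "How did you first notice your symptoms?",
    "How has treatment affected your daily function?",
    "What worries you most about your prognosis?",
]

def build_vp_prompt(case_summary: str) -> str:
    """Compose a system prompt instructing an LLM to role-play a virtual patient."""
    return (
        "You are a virtual patient with sarcoma. Stay in character and answer "
        "in the first person, drawing only on the case background below.\n"
        f"Case background: {case_summary}"
    )

def transcript(case_summary: str, ask) -> list[tuple[str, str]]:
    """Ask every question via `ask(system_prompt, question)` and pair Q with A."""
    system = build_vp_prompt(case_summary)
    return [(q, ask(system, q)) for q in QUESTIONS]

# Stub model for demonstration; the study would instead call each of the
# three commercial LLMs here and compare their transcripts.
demo = transcript(
    "58-year-old with femoral osteosarcoma, post-resection.",
    lambda system, question: f"[simulated answer to: {question}]",
)
for question, answer in demo:
    print(question, "->", answer)
```

The same transcripts could then be scored against published MSTS/TESS values, as the study does, by mapping each answer onto the instruments' domain scales.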
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,553 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,444 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,943 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations