This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessing Linguistic and Structural Reliability in Text-Based LLM Simulations for Medical ESP
Citations: 0
Authors: 1
Year: 2026
Abstract
This exploratory study investigates the use of large language models in English for Medical Purposes (EMP) and broader ESP instruction by examining the linguistic and structural reliability of text-based medical consultation simulations. Using ChatGPT (GPT-5) with a fixed prompt, the study generated 26 simulated outpatient dialogues in which the model acted as the patient. The analysis shows that the dialogues are highly regular, structurally coherent, and strongly patterned, with a clear dominance of closed questions and a limited range of recurring identities and scenarios. These findings are important for medical communication, as they suggest that LLM-generated consultations can provide stable, repeatable practice for routine interactional tasks such as history-taking, symptom elicitation, and basic diagnostic discussion. From an educational perspective, the study highlights the potential of LLMs for AI in education and autonomous learning, especially as a scaffolded resource for learners who need repeated exposure to medical discourse. At the same time, the limited variability of the interactions indicates that such simulations are best used as a supplementary tool rather than a replacement for human-mediated communication practice. The study contributes to the current discussion of the pedagogical value and limitations of large language models in English for Medical Purposes, ESP, and digital language learning.
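For illustration only, the sketch below shows how fixed-prompt patient simulations of this kind could be scripted programmatically. The prompt wording, the use of the OpenAI Python SDK, and the model identifier are assumptions introduced here; the study itself used ChatGPT (GPT-5) with a fixed prompt that is not reproduced on this page.

    # Illustrative sketch only (not the authors' setup): assumes the OpenAI Python SDK
    # and a chat-completions model id "gpt-5"; the fixed prompt below is invented here.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical fixed prompt casting the model as an outpatient.
    PATIENT_PROMPT = (
        "You are a patient attending an outpatient clinic. Answer the doctor's "
        "questions about your symptoms and medical history in natural English, "
        "and stay in character as the patient throughout the consultation."
    )

    def patient_reply(history):
        """Return the simulated patient's next turn, given the dialogue so far."""
        response = client.chat.completions.create(
            model="gpt-5",  # assumed model identifier
            messages=[{"role": "system", "content": PATIENT_PROMPT}] + history,
        )
        return response.choices[0].message.content

    # Example opening turn, with the learner in the role of the clinician.
    history = [{"role": "user", "content": "Good morning, what brings you in today?"}]
    print(patient_reply(history))

Repeating such an exchange under one fixed prompt is what would make the resulting dialogues structurally comparable across runs, which is the kind of regularity the study examines.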
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,635 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,543 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,051 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,844 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations