OpenAlex · Updated hourly · Last updated: 18 Mar 2026, 10:15

This is an overview page with metadata for this scientific paper. The full article is available from the publisher.

Safety and user perception of general-purpose large language models in pediatric healthcare: Evaluations of ChatGPT by doctors and parents

2026 · 0 citations · Digital Health · Open Access

0 citations · 14 authors · Year: 2026

Abstract

Background/Objective: Although the internet has broadened access to medical resources, concerns persist regarding the quality and accuracy of available content. ChatGPT, a general-purpose large language model, may help bridge this gap. This study evaluates its safety and user perception in addressing pediatric healthcare queries.

Methods: Nine experts independently evaluated 41 questions, with three experts assigned to each of the following topics: vitamin D (15 questions), food allergies (16 questions), and sleep problems (10 questions). Each question was answered separately by ChatGPT3.5 and ChatGPT4. Ratings were determined by expert consensus or, in cases of disagreement, the lowest rating. Additionally, 27 parents evaluated ChatGPT's responses.

Results: Experts rated 73.2% of responses from ChatGPT3.5 as "completely correct" or "correct but not comprehensive," while 26.8% were rated as "partially incorrect" or "completely incorrect." For ChatGPT4, these figures were 68.3% and 31.7%, respectively. The difference in accuracy ratings between the two versions was not statistically significant (chi-square test, p = .819). Over 80% of parents rated the responses as "completely clear with no further doubts" or "very clear with few doubts," with no significant difference found between versions (generalized mixed-effects model, p = .617). A total of 73.1% of parents expressed trust in ChatGPT's medical information, and 88.0% indicated a likelihood of continued use. Rating trends between parents and clinicians were consistent for both ChatGPT3.5 and ChatGPT4 responses (McNemar's test, ChatGPT3.5: p = .481; ChatGPT4: p = .143).

Conclusion: As over a quarter of responses contained expert-identified inaccuracies, the current performance of ChatGPT is insufficient for safe and reliable use in clinical decision-making. Nevertheless, it has potential to expand health information access for parents. However, these findings should be interpreted with caution given the small sample size and potential selection bias regarding parents' educational backgrounds. Future improvements should enhance accuracy, clarity, and integration between parents and healthcare professionals.
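The chi-square comparison of accuracy ratings between the two model versions can be sketched in a few lines. This is not the authors' code: the 2×2 counts below are inferred from the reported percentages under the assumption of 41 responses per version (ChatGPT3.5: 30 acceptable / 11 flawed; ChatGPT4: 28 acceptable / 13 flawed), and the test is a pure-Python implementation with Yates continuity correction, so the p-value only approximately reproduces the reported p = .819.

```python
# Illustrative re-computation of a 2x2 chi-square test, as used in the
# abstract to compare ChatGPT3.5 vs ChatGPT4 accuracy ratings.
# Counts are inferred from the reported percentages (assumption, not
# taken from the paper's data): 30/11 vs 28/13 out of 41 responses each.
from math import erfc, sqrt

def chi2_2x2(a, b, c, d, yates=True):
    """Chi-square test for the 2x2 table [[a, b], [c, d]].
    Applies the Yates continuity correction by default and returns
    (statistic, p_value), where the p-value uses the chi-square
    survival function for df = 1, i.e. erfc(sqrt(stat / 2))."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n                  # expected count under independence
        diff = abs(obs - exp)
        if yates:
            diff = max(diff - 0.5, 0.0)      # continuity correction
        stat += diff * diff / exp
    p = erfc(sqrt(stat / 2))                 # df = 1 survival function
    return stat, p

stat, p = chi2_2x2(30, 11, 28, 13)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")     # p well above .05: not significant
```

With these assumed counts the p-value lands near .81; the small gap to the reported .819 would come from the exact counts or software the authors used. The substantive conclusion is the same: no significant accuracy difference between the two versions.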
