This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Validation of AI-Generated Toxicology Vignettes in Singapore: A Cross-Sectional Expert Review
Citations: 0
Authors: 3
Year: 2025
Abstract
Background: Generative artificial intelligence (AI) holds promise for medical education, yet the realism and contextual relevance of AI-generated toxicology vignettes in Southeast Asia are not well established. This study evaluated the face and content validity of vignettes produced by ChatGPT-4.0, to assess their plausibility and relevance for use in Singaporean emergency medicine education, training, and clinical decision support.

Methods: Ten vignettes were generated using ChatGPT-4.0 in March 2025 and independently evaluated by five Singapore-based clinical toxicologists from four public hospitals. A six-domain rubric, adapted from established validity frameworks, scored presentation realism, typicality of exposure, toxidrome representation, clinical progression, appropriateness for toxicology consultation, and alignment with local practice. Inter-rater reliability was calculated using a two-way random-effects intraclass correlation coefficient [ICC (2, k)].

Results: The mean total score was 20.1/24 (SD = 1.8). Inter-rater agreement was excellent (ICC = 0.87; 95% CI: 0.80–0.94). Face validity averaged 4.4/5 (SD = 0.5) and content validity averaged 4.2/5 (SD = 0.6). Most vignettes reflected common regional poisoning patterns, with some depicting rare but plausible exposures relevant to local practice.

Conclusion: ChatGPT-4.0 can generate toxicology vignettes with high expert-rated realism and contextual relevance when tailored to Singaporean practice. These findings support its potential role in medical education, simulation, and decision-support tools. Further research should compare AI-generated and clinician-authored materials to determine educational impact and applicability in real-world clinical settings.
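The reliability statistic reported in the Methods, ICC(2,k) (two-way random effects, average of k raters), can be computed from a targets-by-raters score matrix using two-way ANOVA mean squares. A minimal sketch; the 10×5 score matrix below is synthetic and purely illustrative, not the study's data:

```python
import numpy as np

def icc2k(x: np.ndarray) -> float:
    """ICC(2,k): two-way random-effects model, average measure of k raters.
    x has shape (n_targets, k_raters), e.g. (vignettes, toxicologists)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-vignette means
    col_means = x.mean(axis=0)  # per-rater means
    # ANOVA mean squares: between targets, between raters, residual
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,k)
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical total scores (10 vignettes x 5 raters), rubric maximum 24.
rng = np.random.default_rng(0)
scores = np.clip(np.round(20 + rng.normal(0, 1.5, (10, 5))), 0, 24)
print(f"ICC(2,k) = {icc2k(scores):.2f}")
```

In practice a vetted implementation such as `pingouin.intraclass_corr` would also report the confidence interval quoted in the abstract (95% CI: 0.80–0.94).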
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations