This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluation of AI-Generated Responses to Pediatric Vaccination Information Requests: Cross-Sectional Content Analysis (Preprint)
Citations: 0
Authors: 4
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) language models are demonstrating increasing capability in responding to queries about pediatric vaccinations. However, it is unclear whether newer AI models, such as ChatGPT and BARD (Gemini), can provide accurate recommendations on this topic. </sec> <sec> <title>OBJECTIVE</title> To qualitatively evaluate the appropriateness of AI models' responses to fundamental pediatric vaccination questions. </sec> <sec> <title>METHODS</title> A questionnaire of 15 questions, based on guideline-based prevention topics, addressed recommendations on the pediatric vaccination schedule, COVID-19 vaccination, vaccines needed for travel, and vaccines for specific diseases. Each question was posed three times to two AI models, ChatGPT and BARD, and the responses were recorded. Three board-certified pediatricians graded each set of responses as appropriate or inappropriate and judged which model provided the better answers. A set of responses was graded as inappropriate if any of the three responses contained inaccurate, misleading, or harmful information. </sec> <sec> <title>RESULTS</title> ChatGPT's responses were graded as appropriate for 14 of 15 questions (93%), while BARD's responses were graded as appropriate for 11 of 15 questions (73%). ChatGPT was rated as superior to BARD for 80% of the questions, while BARD was rated as superior to ChatGPT for 20% of the questions. </sec> <sec> <title>CONCLUSIONS</title> Both AI models provided largely appropriate answers to simple pediatric vaccination questions as evaluated by pediatricians. However, responses from ChatGPT were found to be superior to those from BARD. </sec>
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,521 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,412 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,891 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,575 citations