This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Analyzing the Capacity of ChatGPT and Google to Provide Medical Information: Insights from Umbilical Cord Clamping
Citations: 0
Authors: 2
Year: 2025
Abstract
Objective: The optimal timing of umbilical cord clamping in neonatal care has been a subject of debate for decades. Recently, artificial intelligence (AI) has emerged as a significant tool for providing medical information. This study aimed to compare the accuracy, reliability, and comprehensiveness of information provided by ChatGPT and Google regarding the effects of cord clamping timing in neonatal care.

Methods: A comparative analysis was conducted using ChatGPT-4 and Google Search. The search terms included “cord clamping time,” “early clamping,” “delayed clamping,” and “cord milking.” The first 20 frequently asked questions (FAQs) and their responses from both platforms were recorded and categorized according to the Rothwell classification system. The accuracy and reliability of the answers were assessed using content analysis and statistical comparison.

Results: ChatGPT outperformed Google in terms of scientific accuracy, objectivity, and source reliability. ChatGPT provided a higher proportion of responses based on academic and medical sources, particularly in the categories of technical details (40%) and delayed cord clamping benefits (30%). In contrast, Google yielded more information in early cord clamping effects (25%) and cord milking (20%). ChatGPT achieved 80% accuracy in medical information, whereas Google reached only 40%.

Conclusion: While both platforms offer valuable information, ChatGPT demonstrated superior accuracy and reliability in neonatal care topics, making it a more suitable tool for healthcare professionals. However, Google remains useful for general information searches. Future studies should explore AI’s potential in clinical decision-support systems.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations