OpenAlex · Updated hourly · Last updated: 30 Apr 2026, 22:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparison of AI Platforms in Answering Patient Questions About Fibroids

2025 · 0 citations · Obstetrics and Gynecology

0 citations · 6 authors · 2025

Abstract

INTRODUCTION: The use of artificial intelligence (AI) platforms has become increasingly popular and widespread. Their utility in the basic sciences, medical education, research, and clinical practice is currently being explored. These public platforms are also available to patients, who may turn to AI chatbots to learn more about their health. This study investigates the accuracy and reliability of various AI platforms in answering frequently asked patient questions about fibroids.

OBJECTIVE: To determine the accuracy and reliability of various AI platforms in answering frequently asked patient questions about uterine fibroids.

METHODS: Frequently asked questions about uterine fibroids were entered into AI platforms, and the compiled answers were assessed by three expert gynecologists. Answers to the same questions from reliable, non-AI sources compiled from academic medical centers, D and E, were also assessed. The gynecologists were blinded to the source of each answer. Accuracy was rated on a scale of 1 (least accurate) to 5 (most accurate). Answers were also assessed for language, word count, and understandability.

RESULTS: The median scores for ChatGPT, DoximityGPT, and Gemini were 5.00, 5.00, and 5.00, respectively. In contrast, the reliable, non-AI sources D and E were less accurate, with median scores of 3.00 and 4.00, respectively. There was a statistically significant difference in median scores when comparing all three chatbots' answers with source D, as well as for the question "Do fibroids need treatment?" when comparing all three chatbots' answers with source D.

CONCLUSIONS: AI-generated answers were overall accurate and reliable in answering patient questions about uterine fibroids, and reviewers found them easy to read and understand. In contrast, reliable sources from academic medical centers were less accurate, with greater variation in reviewer accuracy ratings. Chatbot-generated answers can provide patients with general information, but further evaluation by an expert gynecologist is needed to provide comprehensive, individualized patient care (Table 1).
