OpenAlex · Updated hourly · Last updated: 27.03.2026, 23:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Aligning AI and clinical expertise: A collaborative path for patient education

2026 · 0 citations · Acta Obstetricia et Gynecologica Scandinavica · Open Access

0 citations · 2 authors · 2026

Abstract

We are grateful for Prof. Matsubara's insightful remarks regarding our recent publication assessing patient perceptions of ChatGPT in urogynecology¹ and for his engagement in the broader discussion on AI-generated educational materials.² We welcome the opportunity to clarify our interpretation and expand on the implications of our findings.

Our study showed that patients consistently rated ChatGPT's answers as more understandable, helpful, and reassuring than consultant-generated responses. These results demonstrate meaningful potential for improving how information is communicated to women with pelvic floor disorders, an area where stigma, embarrassment, and health-literacy challenges often persist.

Prof. Matsubara noted that these positive results could support a more direct endorsement of ChatGPT for patient education. Our prior work,³ in which expert urogynecologists systematically evaluated AI responses to urinary incontinence inquiries, similarly showed generally favorable ratings for accuracy, comprehensiveness, and safety, while still highlighting areas for improvement. We agree that these findings are encouraging, yet we emphasize the difference between patient preference and validated clinical safety. The absence of inaccuracies in this dataset does not eliminate the possibility of erroneous or overly confident responses elsewhere. Vigilance is not a limitation of AI; it is an ethical obligation in all patient communication.

Prof. Matsubara also questioned our emphasis on the risk of inaccuracy when none was observed. This reflects the scope of our study: we evaluated perception, not comprehension, retention, or behavior. Whether enhanced reassurance leads to more accurate understanding or, conversely, to misplaced confidence must still be formally tested.

In addition, we appreciate the call to explore why patients preferred ChatGPT's style. We are currently pursuing linguistic and methodological analyses to identify the elements that drive clarity and patient engagement, and to determine how these principles can be incorporated into clinician-written materials so that educational communication, regardless of author, becomes more accessible.

We believe the best way to encourage the medical community to embrace large language models is not to assert that AI outperforms humans, but to demonstrate the specific practical advantages that complement clinical practice and directly benefit patients. When framed as a supportive tool rather than a substitute, LLMs become less intimidating to colleagues and easier to integrate into care pathways.

More broadly, healthcare stands at a crossroads. The rapid evolution of AI challenges conventional roles in knowledge generation and communication. While this may cause apprehension, our findings highlight that what patients truly value is clarity and reassurance, qualities AI can amplify but may not replace. By engaging with LLMs thoughtfully, clinicians can ensure that these technologies develop with us and for our patients, empowering women with urogynecologic concerns to better understand their conditions and to participate confidently in shared decision-making.

We once again thank Prof. Matsubara for stimulating this important dialogue. Our findings support a future in which artificial intelligence becomes a valuable complementary resource in patient education, combining technological innovation with the irreplaceable strengths of human clinical expertise.

Author contributions: R.R. drafted the responses to reviewers and coordinated the revision. O.E.O.S. reviewed the responses and approved the final version.

Data availability: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Electronic Health Records Systems