This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Aligning AI and clinical expertise: A collaborative path for patient education
Citations: 0
Authors: 2
Year: 2026
Abstract
We are grateful for Prof. Matsubara's insightful remarks regarding our recent publication assessing patient perceptions of ChatGPT in urogynecology [1] and for his engagement in the broader discussion on AI-generated educational materials [2]. We welcome the opportunity to clarify our interpretation and expand on the implications of our findings.

Our study showed that patients consistently rated ChatGPT's answers as more understandable, helpful, and reassuring than consultant-generated responses. These results demonstrate meaningful potential for improving how information is communicated to women with pelvic floor disorders, an area where stigma, embarrassment, and health-literacy challenges often persist.

Prof. Matsubara noted that these positive results could support a more direct endorsement of ChatGPT for patient education. Our prior work [3], in which expert urogynecologists systematically evaluated AI's responses to urinary incontinence inquiries, similarly showed generally favorable ratings for accuracy, comprehensiveness, and safety, while still highlighting areas for improvement. We agree that these findings are encouraging, yet we emphasize the difference between patient preference and validated clinical safety. The absence of inaccuracies in this dataset does not eliminate the possibility of erroneous or overly confident responses elsewhere. Vigilance is not a limitation of AI; it is an ethical obligation in all patient communication.

He also questioned our emphasis on inaccuracy risk when none was observed. This concern reflects the scope of our study: we evaluated perception, not comprehension, retention, or behavior. Whether enhanced reassurance results in more accurate understanding or, conversely, in misplaced confidence must still be formally tested.

In addition, we appreciate the call to explore why patients preferred ChatGPT's style. We are currently pursuing linguistic and methodological analyses to identify the elements that drive clarity and patient engagement, and to determine how these principles may be incorporated into clinician-written materials so that educational communication, regardless of the author, becomes more accessible.

We believe the best way to encourage the medical community to embrace large language models is not by asserting that AI outperforms humans, but by demonstrating the specific practical advantages that complement clinical practice and directly benefit patients. When framed as a supportive tool rather than a substitute, LLMs become less intimidating to colleagues and more easily integrated into care pathways.

More broadly, healthcare stands at a crossroads. The rapid evolution of AI challenges conventional roles in knowledge generation and communication. While this may cause apprehension, our findings highlight that what patients truly value is clarity and reassurance, qualities AI can amplify but may not replace. By engaging with LLMs thoughtfully, clinicians can ensure that these technologies develop with us and for our patients, empowering women with urogynecologic concerns to better understand their conditions and to participate confidently in shared decision-making.

We once again thank Prof. Matsubara for stimulating this important dialogue. Our findings support a future in which artificial intelligence becomes a valuable complementary resource in patient education, marrying technological innovation with the irreplaceable strengths of human clinical expertise.

Author contributions: R.R. drafted the responses to reviewers and coordinated the revision. O.E.O.S. reviewed the responses and approved the final version.

Data availability: Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations