OpenAlex · Updated hourly · Last updated: 2026-04-30, 02:36

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Assessing Artificial Intelligence–Generated Responses to Urology Patient In-Basket Messages

2024 · 20 citations · Urology Practice

20 citations · 10 authors · Year: 2024

Abstract

INTRODUCTION: Use of electronic patient messaging has increased in recent years and has been associated with physician burnout. ChatGPT is a language model that has shown the ability to generate near-human-level text responses. This study evaluated the quality of ChatGPT responses to real-world urology patient messages.

METHODS: One hundred electronic patient messages were collected from a practicing urologist's inbox and categorized by question content. Individual responses were generated by entering each message into ChatGPT. The questions and responses were independently evaluated by 5 urologists and graded on a 5-point Likert scale. Questions were graded on difficulty, and responses were graded on accuracy, completeness, harmfulness, helpfulness, and intelligibility. Whether the response could be sent to a patient as written was also assessed.

RESULTS: Responses to easy questions were more accurate, complete, helpful, and intelligible than responses to difficult questions. There was no difference in response quality based on question content.

CONCLUSIONS: ChatGPT generated acceptable responses to nearly 50% of patient messages, with better performance on easy questions than on difficult ones. Using ChatGPT to help respond to patient messages can reduce the time burden on the care team and improve wellness. Artificial intelligence performance will likely continue to improve with advances in generative AI technology.
