Editorial Comment
Citations: 0 · Authors: 3 · Year: 2023
Abstract
Urology Practice, Health Policy, 1 Jan 2024. This article comments on: Comparison of ChatGPT and Traditional Patient Education Materials for Men's Health.
Prachi Khanna (Dell Medical School, The University of Texas at Austin, Austin, Texas), Brandon Nguyen (Dell Medical School, The University of Texas at Austin, Austin, Texas), and Aaron A. Laviana (Department of Surgery and Perioperative Care, Dell Medical School, The University of Texas at Austin, Austin, Texas; Editorial Committee, Urology Practice; https://orcid.org/0000-0002-8301-6344)
https://doi.org/10.1097/UPJ.0000000000000490.02
In the age of artificial intelligence (AI), platforms like ChatGPT have emerged as versatile tools with the potential to bridge the gap between patients and providers. This study by Shah et al evaluated the readability and quality of patient educational materials in men's health, comparing Urology Care Foundation™ (UCF) resources with ChatGPT-generated responses.1 The authors found that both UCF and ChatGPT fell short of the recommended national readability standards, with ChatGPT performing worse. However, when ChatGPT responses were adjusted using the prompt, "Explain it to me like I am in sixth grade," the adjusted responses showed improved readability, outperforming UCF in 4 of 6 sexual health topics, with low testosterone and sperm retrieval at a sixth to eighth grade level, and erectile dysfunction and male infertility at an eighth grade level. While accuracy was comparable between the two, ChatGPT's responses were more comprehensive but less understandable.
A notable limitation of this study is the use of proxies to evaluate each resource. Future research should address this limitation by involving a diverse panel of patients to obtain perspectives from the intended audience of these resources. The findings of Shah et al further expose the lack of accessible information for urology patients,2 which hinders their ability to make well-informed health care decisions. To address this deficiency, the authors propose a promising avenue for revolutionizing patient education through AI models, so that patients of all backgrounds have access to comprehensible health care information. Although still early in development, AI has demonstrated utility in the medical setting, including in medical note-taking and consultations.3 Limitations remain, and we must continue to evaluate and verify the output before we can fully trust this technology. Nevertheless, there is a strong sense of hope that this technology, when used judiciously, has the potential to enhance patient outcomes.
References
1. Comparison of ChatGPT and traditional patient education materials for men's health. Urol Pract. 2024;11(1):86-95.
2. Readability assessment of online patient education materials provided by the European Association of Urology. Int Urol Nephrol. 2017;49(12):2111-2117.
3. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med. 2023;388(13):1233-1239.
© 2023 by American Urological Association Education and Research, Inc.
Related article: Urology Practice, 1 Nov 2023, Comparison of ChatGPT and Traditional Patient Education Materials for Men's Health.
Volume 11, Issue 1, January 2024, Pages 94-95.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations