This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bridging the gap: Evaluating ChatGPT-generated, personalized, patient-centered prostate biopsy reports
Citations: 4 · Authors: 17 · Year: 2025
Abstract
OBJECTIVE: The highly specialized language used in prostate biopsy pathology reports coupled with low rates of health literacy leave some patients unable to comprehend their medical information. Patients' use of online search engines can lead to misinterpretation of results and emotional distress. Artificial intelligence (AI) tools such as ChatGPT (OpenAI) could simplify complex texts and help patients. This study evaluates patient-centered prostate biopsy reports generated by ChatGPT.

METHODS: Thirty-five self-generated prostate biopsy reports were synthesized using National Comprehensive Cancer Network guidelines. Each report was entered into ChatGPT, version 4, with the same instructions, and the explanations were evaluated by 5 urologists and 5 pathologists.

RESULTS: Respondents rated the AI-generated reports as mostly accurate and complete. All but 1 report was rated complete and grammatically correct by the majority of physicians. Pathologists did not rate any reports as having severe potential for harm, but 1 or more urologists rated severe concern in 20% of the reports. For 80% of the reports, all 5 pathologists felt comfortable sharing them with a patient or another clinician, but all 5 urologists reached the same consensus for only 40% of reports. Although every report required edits, all physicians agreed that they could modify the ChatGPT report faster than they could write an original report.

CONCLUSIONS: ChatGPT can save physicians substantial time by generating patient-centered reports appropriate for patient and physician audiences with low potential to cause harm. Surveyed physicians have confidence in the overall utility of ChatGPT, supporting further investigation of how AI could be integrated into physicians' workflows.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,545 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,436 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,935 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,589 citations