This is an overview page with metadata for this scientific work. The full article is available from the publisher.
PD60-03 PATIENTS’ TRUST IN AI BASED DECISION MAKING IN DIAGNOSTICS AND THERAPY OF LOCALIZED PROSTATE CANCER—RESULTS FROM A PROSPECTIVE TRIAL
Citations: 0
Authors: 11
Year: 2024
Abstract
Journal of Urology, Education Research III (PD60), 1 May 2024

Severin Rodler, Rega Kopliku Kopliku, Daniel Ulrich, Annika Kaltenhauser, Jozefina Casuscelli, Lennert Eismann, Raphaela Waidelich, Alexander Buchner, Andreas Butz, Giovanni E. Cacciamani, and Thilo Westhofen

https://doi.org/10.1097/01.JU.0001009460.27205.df.03

INTRODUCTION AND OBJECTIVE: Artificial intelligence (AI) has the potential to enhance diagnostic accuracy and improve treatment outcomes. However, the integration of AI into clinical workflows and patients' perspectives on it remain unclear. We therefore aimed to determine patients' trust in AI, their perception of urologists relying on AI, and future diagnostic and therapeutic applications of AI for patients.

METHODS: A prospective trial was conducted involving patients who received diagnostic or therapeutic interventions for prostate cancer (PC). A survey based on a validated questionnaire was administered prior to MRI, prostate biopsy, or radical prostatectomy. The primary outcome was trust in AI; secondary outcomes were the choice of AI in treatment settings and the traits attributed to AI and to urologists.

RESULTS: 466 patients were analyzed.
Cumulative affinity to technology correlated positively with trust in AI (correlation coefficient 0.094, p=0.04), whereas patient age, level of education, and subjective perception of illness did not (p>0.05). Trust in physicians' capabilities was substantially higher than trust in AI, both for responding in an individualized way when communicating the diagnosis (4.51 [SD 0.76] vs 3.38 [SD 1.07]; mean difference 1.130; 95% CI 1.010 to 1.250; t924=18.52; p<0.001; Cohen d=1.040) and for explaining information in an understandable way (4.57 [SD 0.69] vs 3.18 [SD 1.09]; mean difference 1.392; 95% CI 1.275 to 1.509; t921=27.27; p<0.001; Cohen d=1.216). Patients stated higher trust in a diagnosis made by an AI that was controlled by a physician than in a diagnosis made by an AI without physician control (with control 4.31 [SD 0.88] vs without control 1.75 [SD 0.93]; mean difference 2.561; 95% CI 2.444 to 2.678; t925=42.89; p<0.001; Cohen d=2.818). For treatment in the current clinical scenario, AI-assisted physicians (66.74%) were preferred over physicians alone (29.61%), physicians controlled by AI (2.36%), and AI alone (0.64%) (Figure 1).

CONCLUSIONS: Trust in future diagnostic and therapeutic AI-based treatment relies on optimal integration with urologists as the human-machine interface, leveraging both human and AI capabilities.
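As a side note on the reported statistics: effect sizes like the Cohen d values above can be approximately recovered from the group means and standard deviations. The sketch below uses the conventional pooled-standard-deviation formula with an equal-group-size assumption; the abstract does not state which variant of Cohen's d was used, so results may differ slightly from the published figures. Illustrated with the physician-controlled vs. uncontrolled AI diagnosis comparison (means and SDs from the abstract).

```python
import math

def cohens_d(mean1: float, sd1: float, mean2: float, sd2: float) -> float:
    """Cohen's d from summary statistics, using the pooled standard
    deviation of two groups (equal-n approximation)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# AI diagnosis with physician control (4.31, SD 0.88) vs.
# AI diagnosis without physician control (1.75, SD 0.93).
d = cohens_d(4.31, 0.88, 1.75, 0.93)
print(f"Cohen's d = {d:.2f}")  # ~2.83, close to the reported 2.818
```

The small discrepancy from the reported 2.818 is consistent with rounding of the published means and SDs.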
Source of Funding: None

© 2024 by American Urological Association Education and Research, Inc. Volume 211, Issue 5S, May 2024, Page e1273.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations