This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring Patient Perspectives, Engagement, and Output Quality in Doctor-Supervised Use of Artificial Intelligence During Informed Consent Consultation With ChatGPT and Retrieval Augmented Generation (RAG): Quantitative Exploratory Study (Preprint)
Citations: 0
Authors: 7
Year: 2025
Abstract
BACKGROUND: Comprehensive preoperative education is essential for optimizing outcomes and ensuring informed consent in patients undergoing total hip arthroplasty (THA). Emerging artificial intelligence (AI) tools, such as ChatGPT, offer scalable support for patient education, but their clinical application requires rigorous evaluation to ensure accuracy, safety, and trust.

OBJECTIVE: This study assessed patients' preferences and satisfaction with AI-assisted informed consent in THA, comparing traditional physician consultations to those supported by native ChatGPT and by a customized version enhanced with retrieval-augmented generation (RAG). It also examined how state anxiety and general attitudes toward AI affect preferences for AI-supported consent, and whether RAG integration improves the quality of ChatGPT responses.

METHODS: A total of 36 patients scheduled for elective THA were assigned to one of three groups (12 each): (1) standard physician-only consultations (control), (2) physician-assisted consultations supported by native ChatGPT, and (3) physician-assisted consultations supported by ChatGPT enhanced through RAG. Data collection involved standardized Likert-scale questionnaires assessing patient satisfaction with the consent process, perceived informedness, anxiety levels, and attitudes toward AI. ChatGPT responses were independently evaluated by physicians for relevance, accuracy, clarity, completeness, adherence to evidence-based guidelines, and appropriate length. Instances of hallucination (factually incorrect or misleading outputs) were identified and rated by severity. Statistical analyses compared outcomes across groups and explored associations.

RESULTS: Patients interacting with the ChatGPT+RAG model reported significantly higher satisfaction with information delivery (P=.01) and perceived informedness (P=.01) than those using the native ChatGPT model. The mean number of patient questions was 20 in the control group, compared with 39 in the native ChatGPT group (P=.06) and 52 in the ChatGPT+RAG group (P=.002). The majority of participants across all groups preferred a human clinician providing less accurate information over a more accurate AI-only assistant. These preferences were not influenced by sociodemographic variables (age, gender, and education), health literacy, state anxiety, or general attitudes toward AI. The ChatGPT+RAG model outperformed the native ChatGPT model across all evaluated response quality dimensions (all P<.01) and showed a significantly lower hallucination rate (5/52, 10% versus 15/39, 38%; P=.002).

CONCLUSIONS: Integrating RAG with ChatGPT significantly improves the quality, clarity, and reliability of preoperative information, enhancing patient satisfaction and engagement beyond native ChatGPT. However, patients maintain a strong preference for physician-led informed consent, underscoring the role of AI chatbots as complementary tools rather than replacements. These findings support the cautious adoption of customized AI assistants to augment, not substitute, human interaction in surgical consent processes.
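The retrieval-augmented generation approach evaluated in the abstract can be sketched minimally as follows: before the language model answers, relevant passages from a vetted knowledge base are retrieved and prepended to the prompt so the response stays grounded in curated material. The keyword-overlap retriever, the stopword list, and the sample corpus below are illustrative stand-ins, not the study's actual pipeline.

```python
import re

# Words too common to carry retrieval signal (toy stopword list).
STOPWORDS = {"what", "are", "the", "of", "and", "with", "an", "a", "on", "does", "to"}

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped, stopwords removed."""
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by keyword overlap with the question; return the top k."""
    q = tokens(question)
    return sorted(corpus, key=lambda p: -len(q & tokens(p)))[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Illustrative guideline snippets standing in for a curated THA knowledge base.
corpus = [
    "Total hip arthroplasty replaces the damaged hip joint with an implant.",
    "Common THA risks include infection, dislocation, and blood clots.",
    "Patients usually begin walking with assistance on the day of surgery.",
]
prompt = build_prompt("What infection risks does hip replacement carry?", corpus)
```

In a real deployment the overlap score would be replaced by embedding similarity against a document index, and `prompt` would be sent to the chat model; grounding the prompt this way is what the study credits for the lower hallucination rate.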
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations