This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence Physician Avatars for Patient Education: A Pilot Study
Citations: 0
Authors: 10
Year: 2025
Abstract
<b>Background:</b> Generative AI and synthetic media have enabled realistic human Embodied Conversational Agents (ECAs), or avatars. A subset of this technology replicates faces and voices to create realistic likenesses. When combined with avatars, these methods enable the creation of "digital twins" of physicians, offering patients scalable, 24/7 clinical communication outside the immediate clinical environment. This study evaluated surgical patient perceptions of an AI-generated surgeon avatar for postoperative education.

<b>Methods:</b> We conducted a pilot feasibility study with 30 plastic surgery patients at Mayo Clinic, USA (July-August 2025). A bespoke interactive surgeon avatar was developed in Python using the HeyGen IV model to reproduce the surgeon's likeness. Patients interacted with the avatar through natural voice queries, which were mapped to predetermined, pre-recorded video responses covering ten common postoperative topics. Patient perceptions were assessed using validated scales of usability, engagement, trust, eeriness, and realism, supplemented by qualitative feedback.

<b>Results:</b> The avatar system reliably answered 297 of 300 patient queries (99%). Usability was excellent (mean System Usability Scale score = 87.7 ± 11.5) and engagement was high (mean = 4.27 ± 0.23). Trust was the highest-rated domain, with all participants (100%) finding the avatar trustworthy and its information believable. Eeriness was minimal (mean = 1.57 ± 0.48), and 96.7% found the avatar visually pleasing. Most participants (86.6%) recognized the avatar as their surgeon, although many still identified it as artificial; voice resemblance was less convincing (70%). Interestingly, participants with prior exposure to deepfakes demonstrated consistently higher acceptance, rating usability, trust, and engagement 5-10% higher than those without prior exposure. Qualitative feedback highlighted clarity, efficiency, and convenience, while noting limitations in realism and conversational scope.

<b>Conclusions:</b> The AI-generated physician avatar achieved high patient acceptance without triggering uncanny valley effects. Transparency about the synthetic nature of the technology enhanced, rather than diminished, trust. Familiarity with the physician and institutional credibility likely played a key role in the high trust scores observed. When implemented transparently and with appropriate safeguards, synthetic physician avatars may offer a scalable solution for postoperative education while preserving trust in clinical relationships.
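The Methods section describes mapping a patient's natural voice query to one of ten predetermined, pre-recorded video responses. The abstract does not specify how this routing was implemented; the sketch below is a minimal, hypothetical illustration of one common approach (keyword-overlap intent matching on the transcribed query). All topic names, keywords, and file paths are illustrative assumptions, not the study's actual implementation.

```python
# Hypothetical query-to-response routing: match a transcribed patient query
# to a postoperative topic by keyword overlap, then return the path of the
# pre-recorded avatar video for that topic. Topics, keywords, and paths are
# invented for illustration (the study used ten topics; four are shown here).

TOPIC_KEYWORDS = {
    "wound_care": {"wound", "incision", "dressing", "bandage"},
    "pain_management": {"pain", "medication", "hurt"},
    "activity": {"exercise", "lifting", "walk", "activity"},
    "swelling": {"swelling", "swollen", "bruising", "ice"},
}

RESPONSE_VIDEOS = {topic: f"videos/{topic}.mp4" for topic in TOPIC_KEYWORDS}
FALLBACK_VIDEO = "videos/contact_clinic.mp4"  # played when no topic matches


def route_query(transcript: str) -> str:
    """Return the video path whose topic keywords best match the transcript."""
    words = set(transcript.lower().split())
    best_topic, best_overlap = None, 0
    for topic, keywords in TOPIC_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_topic, best_overlap = topic, overlap
    return RESPONSE_VIDEOS[best_topic] if best_topic else FALLBACK_VIDEO
```

A design like this keeps the system fully deterministic: every possible answer is a clinician-approved recording, and unmatched queries fall back to a "contact the clinic" response rather than a generated answer, which is consistent with the safeguards the Conclusions emphasize.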
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations