This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Balancing Anthropomorphic Design in Healthcare AI Agents
0
Citations
3
Authors
2025
Year
Abstract
Balancing human-likeness in healthcare AI agents is critical to ensuring trust, engagement, and safe adoption. Guided by social response theory and uncanny valley research, this study explores how varying degrees of anthropomorphic design influence user trust and comfort. We conducted a literature review on generative AI systems to identify key anthropomorphic features. Seven domain experts independently evaluated these features and clustered selected ones into three progressive bundles, ranging from zero to medium risk of uncanny valley effects, to build three prototype designs for an AI patient-intake agent. The prototypes will be assessed in a vignette study using an adapted Primary Care Assessment Survey to measure trust and patient-centeredness. Early findings show that some technically advanced features (e.g., adaptive context responses) are less likely to trigger uncanny valley effects, whereas others (e.g., human-like avatars, humour) heighten discomfort. Results provide design guidance for healthcare AI agents that optimize trust while avoiding over-humanization.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 cit.
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.