This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Trustworthy enough? Examining trustworthiness assessments of large language model-based medical agents.
Citations: 0
Authors: 6
Year: 2025
Abstract
This research advances trust theory by examining factors shaping the development of a trustor’s perceived trustworthiness in the context of real-world interactions with a large language model-driven virtual doctor (VD). Employing a qualitative approach to elaborate the trustworthiness assessment model, we conducted 51 interviews with 65 participants. Our findings reveal a heterogeneity in the trustworthiness perceptions of and reported trust in VDs, ranging from a complete absence to a complete presence of trust, with many participants expressing conditional trust. The key factors contributing to this heterogeneity were participants’ benchmarks for trustworthiness, naïve theories, risk–benefit assessments, individual standards, and strategies for cue detection and utilization in assessing the trustworthiness of the VD. Our findings also highlight the crucial influence of third-party involvement in artificial intelligence system development and testing on trustworthiness assessments. These insights underscore the trustworthiness assessment model’s utility in understanding trust development processes.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,303 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,155 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,555 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,453 citations