This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Towards a Model for Building Trust and Acceptance of Artificial Intelligence Aided Medical Assessment Systems
Citations: 2
Authors: 5
Year: 2020
Abstract
This study aims to identify determinants for the emergence of trust in AI-based medical assessment systems consisting of chatbots and telemedicine. Existing studies have failed to create a holistic understanding because they focus on single trust antecedents. Our study closes this research gap by conducting semi-structured interviews and standardized questionnaires to identify relevant variables and their relationships to each other. Participants (n = 40) take part in a laboratory experiment, interacting with a chatbot (vs. chatbot + human agent) for an initial medical assessment. The first results indicate the importance of the chatbot’s purpose and the transparency of the underlying database. Furthermore, communication patterns conveying uncertainty reduction are found to be more important than the chatbot’s social skills. The additional human expert complements the chatbot by enabling more specific and detailed questioning and by satisfying patients’ wish to have a responsible person.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations