This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Trust in the system and human autonomy in customer service chatbots
Citations: 2
Authors: 2
Year: 2023
Abstract
When Artificial Intelligence systems are not explained clearly to users, their interactions can suffer and their perceptions of a brand can be compromised. When designing and developing conversational agents that deal with clients, it is crucial to treat them as a service and follow human-centered Artificial Intelligence (HCAI) approaches. This study discusses two HCAI frameworks, relates them to trust in the system and human autonomy, and defines how these guidelines could be met in customer service chatbots. A survey was conducted to determine whether users' views of their interactions with chatbots aligned with the recommended guidelines and how this affected their trust and autonomy. The analysis of the responses indicates that these human-centered Artificial Intelligence approaches have yet to be prioritized, or even met, in customer service chatbot development. Users reported unpleasant experiences with such services, leading to a decrease in their trust and autonomy.
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,552 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,317 citations