This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Response Quality in Human-Chatbot Collaborative Systems
Citations: 44
Authors: 2
Year: 2020
Abstract
We report the results of a crowdsourcing user study evaluating the effectiveness of human-chatbot collaborative conversation systems, which aim to extend a human user's ability to answer another person's requests in a conversation by using a chatbot. We examine the quality of responses from two collaborative systems and compare them with human-only and chatbot-only settings. Both systems allow users to formulate responses based on a chatbot's top-ranked results as suggestions, but they encourage the synthesis of human and AI outputs to different extents. Experimental results show that both systems significantly improved the informativeness of messages and reduced user effort compared with a human-only baseline, while sacrificing the fluency and human-likeness of the responses. Compared with a chatbot-only baseline, the collaborative systems provided comparably informative but more fluent and human-like messages.
Related Work
Internet of Things (IoT): A vision, architectural elements, and future directions
2013 · 11,882 citations
Fog computing and its role in the internet of things
2012 · 5,938 citations
From Louvain to Leiden: guaranteeing well-connected communities
2019 · 5,023 citations
Advances and Open Problems in Federated Learning
2020 · 4,533 citations
Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk
2012 · 4,066 citations