This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Achieving Trustworthy Artificial Intelligence: Multi-Source Trust Transfer in Artificial Intelligence-capable Technology
Citations: 0
Authors: 5
Year: 2021
Abstract
Contemporary research focuses on examining trustworthy AI but neglects trust transfer processes, which propose that users’ established trust in a familiar source (e.g., a technology or person) may transfer to a novel target. We argue that such trust transfer processes also occur with novel AI-capable technologies, as these result from the convergence of AI with one or more base technologies. We develop a model focused on multi-source trust transfer that incorporates the theoretical framework of trust duality (i.e., trust in providers and trust in technologies) to advance our understanding of trust transfer. A survey among 432 participants confirms that users transfer their trust from known technologies and providers (i.e., vehicle and AI technology) to AI-capable technologies and their providers. The study contributes a novel theoretical perspective on establishing trustworthy AI by validating the importance of the duality of trust.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 cit.