This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Context-Dependency of Trust in AI-based Systems
Citations: 0
Authors: 3
Year: 2025
Abstract
With the advent of Large Language Models (LLMs), it is possible to receive advice from Artificial Intelligence (AI) that is rated as higher in quality and more authentic than human advice. [1] However, trust in AI-based advice appears to be highly context-dependent. Recent studies have found that trust in AI-based advice in epistemic contexts is low due to egocentric discounting—a behavioral phenomenon whereby people place too much weight on their own judgment and too little on the advice they receive. [2] In moral contexts, by contrast, people rely too heavily on AI-based advice, even when the trustworthiness of the AI-based system has been experimentally reduced. [3], [4] The trustworthiness of AI-based systems is commonly regarded as a necessary condition for relying on their decision support. However, psychological factors appear to play an important role as well, and their influence seems to vary with context. This is pressing because AI-based systems can have a corrupting effect on humans [5] or influence moral judgment [6]. Yet the distinction between epistemic and moral contexts, and the varying impact of psychological factors, have so far been overlooked in assessing trust in AI-based systems and must therefore be systematically developed and empirically tested. This paper makes three contributions. First, studies from epistemic and moral contexts are briefly reviewed to identify conditioning factors for each context. Second, possible explanations based on the role of various psychological factors (such as biases) are discussed to shed light on the different potential behavioral influences of AI-based advice that should be tested. Third, specific future lines of research are outlined.
Related Work
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations