This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
How do we assess the trustworthiness of AI? Introducing the trustworthiness assessment model (TrAM)
27 citations · 6 authors · 2025
Abstract
Designing trustworthy AI-based systems and enabling external parties to accurately assess the trustworthiness of these systems are crucial objectives. Only if trustors assess system trustworthiness accurately can they base their trust on adequate expectations about the system and reasonably rely on or reject its outputs. However, the process by which trustors assess a system's actual trustworthiness to arrive at their perceived trustworthiness remains underexplored. In this paper, we conceptually distinguish between actual and perceived trustworthiness, trust propensity, trust, and trusting behavior. Drawing on psychological models of how humans assess other people's characteristics, we present the two-level Trustworthiness Assessment Model (TrAM). At the micro level, we propose that trustors assess system trustworthiness based on cues associated with the system. The accuracy of this assessment depends on cue relevance and availability on the system's side, and on cue detection and utilization on the human's side. At the macro level, we propose that individual micro-level trustworthiness assessments propagate across different trustors: one stakeholder's trustworthiness assessment of a system affects other stakeholders' trustworthiness assessments of the same system. The TrAM advances existing models of trust and sheds light on factors influencing the (accuracy of) trustworthiness assessments. It contributes to theoretical clarity in trust research, has implications for the measurement of trust-related variables, and offers practical implications for system design, stakeholder training, AI alignment, and AI regulation related to trustworthiness assessments.
Related work
The global landscape of AI ethics guidelines
2019 · 4,566 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,865 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,411 citations
Fairness through awareness
2012 · 3,276 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations