This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Building Trust in Artificial Intelligence: A Systematic Review through the Lens of Trust Theory
Citations: 1
Authors: 4
Year: 2026
Abstract
Artificial intelligence (AI) is reshaping industries by enhancing efficiency and accuracy, yet its adoption remains contingent on user trust, which is frequently undermined by concerns over privacy, algorithmic bias, and security vulnerabilities. Trust in AI depends on principles such as transparency, accountability, safety, privacy, robustness, and reliability, all of which are central to user confidence. However, existing studies often overlook the interdependencies among these factors and their collective influence on user engagement. Guided by Trust Theory and a systematic literature review employing the PRISMA protocol, this study examines the trust indicators most relevant to high-stakes applications. The review reveals that transparency and communication are consistently prioritised, while adaptability and affordability remain underexplored, highlighting gaps in current scholarship. Trust in AI evolves as users gain experience with these systems, with reliability, predictability, and ethical alignment emerging as critical determinants. Addressing persistent challenges such as bias, data protection, and fairness is essential for reinforcing trust and enabling broader adoption of AI across industries.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,663 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,488 citations
Fairness through awareness
2012 · 3,297 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations