This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
“Trust, but Verify”: A Reflexive Thematic Analysis of Human–AI Interaction
Citations: 0
Authors: 3
Year: 2025
Abstract
Artificial Intelligence (AI) has become deeply integrated into professional workflows, offering efficiency, scalability, and decision-support across sectors. Yet, questions remain about how users calibrate trust in AI and how reliance on these systems shapes human cognition. This study explores the psychological dimensions of trust, transparency, and cognitive load in human–AI interaction. Semi-structured interviews were conducted with twelve professionals across psychology, technology, and leadership domains. Data were analysed using Braun and Clarke’s reflexive thematic analysis, revealing two superordinate themes: (1) trust as conditional, shaped by verification practices and expectations of source transparency, and (2) AI’s dual role in reducing cognitive load while raising concerns about diminishing creativity and imagination. Findings highlight that professionals value AI as a supportive assistant that saves time and streamlines tasks but remain cautious about accuracy, hallucinations, and overreliance. The study contributes to qualitative research on human–AI interaction by emphasising the need for explainability, verifiable outputs, and safeguards against cognitive complacency. It recommends psychologically informed design strategies that balance efficiency with transparency and preserve users’ epistemic agency.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 cit.
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 cit.
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 cit.
Fairness through awareness
2012 · 3,265 cit.
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 cit.