OpenAlex · Updated hourly · Last updated: Mar 13, 2026, 23:53

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Perceiving AI as an Epistemic Authority or Algority: A User Study on the Human Attribution of Authority to AI

2026 · 0 citations · Machine Learning and Knowledge Extraction · Open Access
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2026

Abstract

The increasing integration of artificial intelligence (AI) in decision-making processes has amplified discussions surrounding algorithmic authority: the perceived epistemic legitimacy of AI systems over human judgment. This study investigates how individuals attribute epistemic authority to AI, focusing on psychological, contextual, and sociotechnical factors. Existing research highlights the importance of trust in automation, perceived performance, and moral frameworks in shaping such attributions. Unlike prior conceptual or philosophical accounts of algorithmic authority, our study adopts a relational and empirically grounded perspective by operationalizing algority through psychometric measures and contextual assessments. To address knowledge gaps in the micro-level dynamics of this phenomenon, we conducted an empirical study using psychometric tools and scenario-based assessments. Here, we report key findings from a survey of 610 participants, revealing significant correlations between trust in automation (TiA), perceptions of automated performance (PAS), and the propensity to defer to AI, particularly in high-stakes scenarios such as criminal justice and job matching. Trust in automation emerged as a primary factor, while moral attitudes moderated deference in ethically sensitive contexts. Our findings highlight the practical relevance of transparency and explainability for supporting critical engagement with AI outputs and for informing the design of contextually appropriate decision support. This study contributes to understanding algorithmic authority as a multidimensional construct, offering empirically grounded insights for designing AI systems that are trustworthy and context-sensitive.

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)