OpenAlex · Updated hourly · Last updated: 11.03.2026, 06:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Epistemic Injustice in Generative AI: Probabilistic Generation, Trust Erosion, and the Structural Conditions of Algorithmic Knowledge Harm

2026 · 0 citations · Knowledge Commons (Lakehead University) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

Generative artificial intelligence, and large language models (LLMs) in particular, are increasingly deployed as epistemic intermediaries across domains ranging from legal research to clinical decision support. This paper argues that LLMs produce a distinctive form of epistemic injustice (understood, following Fricker, as a wrong done to agents specifically in their capacity as knowers) through three interrelated structural mechanisms: (1) automation-induced testimonial injustice (AITI), whereby the confident and fluent outputs of LLMs systematically deflate the credibility accorded to competing human testimony; (2) interpretive erasure, whereby the probabilistic compression of training corpora marginalises minority epistemic frameworks through a dynamic process that simultaneously reflects and actively widens existing hermeneutical gaps; and (3) epistemic debt accumulation, whereby sustained reliance on AI-mediated knowledge progressively erodes individual and collective epistemic competence. Drawing on Fricker's canonical framework, Dotson's account of epistemic violence, and Bender et al.'s critique of stochastic parrots, the paper further introduces the concept of Algorithmic Gettier Cases (AGCs), instances in which LLM outputs are accidentally true yet epistemically defective, as an analytical device for exposing the structural failure modes of probabilistic generation. The paper concludes with a four-component governance framework oriented towards testimonial transparency, hermeneutical pluralism, competence preservation, and structural accountability in high-stakes deployment contexts, with each principle linked explicitly to the mechanism it is designed to address.

Related works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Law