This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Epistemic Injustice in Generative AI: Probabilistic Generation, Trust Erosion, and the Structural Conditions of Algorithmic Knowledge Harm
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Generative artificial intelligence, and large language models (LLMs) in particular, are increasingly deployed as epistemic intermediaries across domains ranging from legal research to clinical decision support. This paper argues that LLMs produce a distinctive form of epistemic injustice (understood, following Fricker, as a wrong done to agents specifically in their capacity as knowers) through three interrelated structural mechanisms: (1) automation-induced testimonial injustice (AITI), whereby the confident and fluent outputs of LLMs systematically deflate the credibility accorded to competing human testimony; (2) interpretive erasure, whereby the probabilistic compression of training corpora marginalises minority epistemic frameworks through a dynamic process that simultaneously reflects and actively widens existing hermeneutical gaps; and (3) epistemic debt accumulation, whereby sustained reliance on AI-mediated knowledge progressively erodes individual and collective epistemic competence. Drawing on Fricker's canonical framework, Dotson's account of epistemic violence, and Bender et al.'s critique of stochastic parrots, the paper further introduces the concept of Algorithmic Gettier Cases (AGCs), instances in which LLM outputs are accidentally true yet epistemically defective, as an analytical device for exposing the structural failure modes of probabilistic generation. The paper concludes with a four-component governance framework oriented towards testimonial transparency, hermeneutical pluralism, competence preservation, and structural accountability in high-stakes deployment contexts, with each principle linked explicitly to the mechanism it is designed to address.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,480 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,361 citations
Fairness through awareness
2012 · 3,258 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations