OpenAlex · Updated hourly · Last updated: 08.05.2026, 05:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

An Antidote to Sycophancy: Toward Epistemic Divergence in Human–AI Reasoning

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

Large language models (LLMs) are increasingly used for scientific writing, legal analysis, invention development, and policy reasoning. However, a central and underappreciated failure mode is sycophancy: the tendency of a model to reinforce user framing, rhetorical direction, or prior assumptions with highly coherent outputs regardless of underlying truth status. In practice, the same model can often generate persuasive arguments in opposing directions depending solely on prompt wording, making coherence an unreliable proxy for validity. While intra-model self-critique remains useful, it is still constrained by shared training corpora, alignment policies, reinforcement learning objectives, and correlated failure modes within a single vendor ecosystem. This paper proposes a divergence-first framework—operationalized as the Multi-Origin Divergence Adversarial Council (MODAC)—in which identical prompts are independently evaluated by multiple large language models originating from different vendors under tabula-rasa (blank slate) conditions. Rather than suppressing disagreement, the framework deliberately preserves divergence in reasoning paths, omissions, contradictions, and normative priors. Final interpretive authority remains with the human adjudicator. In this framework, agreement across independently originated systems may increase confidence, while disagreement functions as an epistemic signal that deeper human reasoning is warranted. This architecture extends beyond consensus-seeking ensembles by treating divergence as diagnostic information rather than noise, analogous to second-opinion workflows and multidisciplinary Grand Rounds in medicine.
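The council workflow the abstract describes — one prompt, several independently originated models, divergence preserved and escalated to a human — could be sketched as follows. This is a minimal illustration, not the paper's implementation: the vendor names, the single-verdict response format, and the stubbed model callables are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of a divergence-first "council" in the spirit of MODAC:
# the same prompt goes to several independent models (stubbed below), and
# disagreement is surfaced as a signal for human adjudication rather than
# averaged away. All names and the verdict format are illustrative.

from collections import Counter

def council_review(prompt, models):
    """Collect one verdict per model; report consensus or flag divergence."""
    verdicts = {name: model(prompt) for name, model in models.items()}
    counts = Counter(verdicts.values())
    top, n = counts.most_common(1)[0]
    return {
        "verdicts": verdicts,                          # divergence preserved, not suppressed
        "consensus": top if n == len(models) else None,  # only full agreement counts
        "divergent": n < len(models),                  # flag for the human adjudicator
    }

# Stub "models" standing in for endpoints from different vendors.
models = {
    "vendor_a": lambda p: "supported",
    "vendor_b": lambda p: "supported",
    "vendor_c": lambda p: "contested",
}

result = council_review("Does claim X follow from the evidence?", models)
print(result["divergent"])  # True: disagreement, so escalate to human review
```

Note the design choice: consensus is reported only under unanimous agreement, mirroring the paper's stance that cross-vendor agreement may increase confidence while any disagreement warrants deeper human reasoning.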

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI