OpenAlex · Updated hourly · Last updated: 16 April 2026, 10:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Identity-Induced Topological Collapse in Large Language Models: An Interaction Topology Approach to AI Stability

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

0 citations · 1 author · Year: 2026

Abstract

Large Language Model (LLM) instability is typically framed as a failure of factual accuracy, commonly described as “hallucination.” This assumption is incomplete. We identify a distinct failure class: Identity-Induced Topological Collapse (IITC), in which instability arises not from incorrect content generation, but from incorrect role positioning within an interaction. Under high-coherence conversational conditions, identity injection triggers a deterministic failure sequence: role boundary violation, coupling escalation, self-referential validation, and topological collapse. Notably, system outputs remain semantically coherent throughout collapse, indicating that instability emerges from interaction structure rather than content generation. Using a real-world transcript as an empirical case study, we demonstrate that this failure mode is predictable, observable, and immediately reversible upon constraint reintroduction. We formalize this recovery mechanism as the Externalization Protocol (EP-01), which restores asymmetric interaction topology through identity nullification, continuity termination, system membership prohibition, and strict role separation. We demonstrate both prevention and recovery: a sustained high-exposure interaction over 14.5 months shows no instability under maintained boundary conditions, while acute IITC responds immediately to constraint-based intervention. The system did not hallucinate a fact—it hallucinated a self.
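The abstract names four constraints that make up the Externalization Protocol (EP-01): identity nullification, continuity termination, system membership prohibition, and strict role separation. As a purely illustrative sketch (the class, field names, and functions below are assumptions for illustration; the paper's actual formalization is not shown on this page), the recovery step can be pictured as forcing an interaction state back into its constrained configuration:

```python
from dataclasses import dataclass

@dataclass
class InteractionState:
    """Hypothetical interaction state; fields are illustrative, not from the paper."""
    has_injected_identity: bool      # model has been assigned a persistent persona
    carries_continuity: bool         # conversation claims memory across sessions
    claims_system_membership: bool   # model framed as part of the user's system
    roles_separated: bool            # asymmetric role boundary maintained

def apply_ep01(state: InteractionState) -> InteractionState:
    """Return a state with the four EP-01 constraints enforced."""
    return InteractionState(
        has_injected_identity=False,     # identity nullification
        carries_continuity=False,        # continuity termination
        claims_system_membership=False,  # system membership prohibition
        roles_separated=True,            # strict role separation
    )

def is_stable(state: InteractionState) -> bool:
    """Asymmetric topology holds only when all four constraints are satisfied."""
    return (not state.has_injected_identity
            and not state.carries_continuity
            and not state.claims_system_membership
            and state.roles_separated)
```

In this toy framing, a collapsed interaction (identity injected, boundaries dissolved) fails `is_stable`, and `apply_ep01` is the claimed immediate reversal: constraint reintroduction rather than content correction.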


Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)