OpenAlex · Updated hourly · Last updated: 2026-05-02, 09:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Curve and Rankings: A Decomposition of Neural Network Representations

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

We present a decomposition of neural network hidden states into two components: a curve (the sorted value profile, shared across all inputs at a given layer) and a ranking (which dimensions occupy which positions on that curve, unique per input). We show empirically that this decomposition is functionally meaningful: the ranking carries essentially all discriminative information while the curve is shared infrastructure. Across four architectures (MLP, ViT-Base, GPT-2, Qwen-7B) and four benchmarks (MNIST, CIFAR-10, ImageNet, LAMBADA), we find that (1) over 90% of consecutive deltas in the sorted value profile fall below 0.01 for language models, confirming the curve is tightly shared; (2) rankings alone preserve or exceed float-based classification and retrieval accuracy; and (3) constraining hidden states to be explicit rankings of a learned curve matches or exceeds standard unconstrained training. On MNIST, learned curve training achieves 98.21% vs. 98.16% for a standard MLP (+0.05%). On CIFAR-10, 55.25% vs. 54.25% (+1.0%). On LAMBADA, a learned curve of 768 parameters achieves 52.5% next-word accuracy vs. GPT-2’s native 48.3% (+4.2%). These findings suggest that the continuous geometry of neural representations may be secondary to their ordinal structure—that what matters is not the values, but the arrangement.
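The decomposition described in the abstract can be sketched concretely: sorting a hidden-state vector yields the "curve" (the sorted value profile), and the inverse of the sorting permutation yields the "ranking" (which dimension occupies which position on the curve). The following is a minimal illustrative sketch, not the authors' code; the function names and shapes are assumptions.

```python
import numpy as np

def decompose(h):
    """Split a hidden-state vector into (curve, ranking).

    curve   -- the sorted value profile (the paper argues this is
               effectively shared across inputs at a given layer)
    ranking -- for each dimension, its position on the curve
               (the paper argues this carries the discriminative info)
    """
    order = np.argsort(h)                # indices that sort h ascending
    curve = h[order]                     # sorted value profile
    ranking = np.empty_like(order)
    ranking[order] = np.arange(len(h))   # invert the permutation
    return curve, ranking

def reconstruct(curve, ranking):
    """Place curve values back into their original dimensions."""
    return curve[ranking]

h = np.array([0.3, -1.2, 0.9, 0.0])
curve, ranking = decompose(h)
assert np.allclose(reconstruct(curve, ranking), h)
```

Under this reading, the "learned curve" experiments amount to replacing each input's per-example `curve` with a single trained profile per layer while keeping the per-input `ranking`, so that only the ordinal arrangement varies across inputs.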


Topics

Multimodal Machine Learning Applications · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education