OpenAlex · Updated hourly · Last updated: 14.03.2026, 10:53

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Scaling Laws, Foundation Models, and the AI Singularity: A Critical Appraisal of 2023–2025 Evidence

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2026

Abstract

This paper critically reviews 2023–2025 evidence on scaling laws and foundation models, and examines claims about an AI Singularity, understood here as recursive self-improvement producing sudden capability jumps rather than merely broad automation. It asks what scaling results genuinely support, what they do not, and how technical findings are translated into institutional strategies and long-term commitments. The method is a narrative synthesis of peer-reviewed studies, technical reports, and governance frameworks, proceeding from concepts and history to technical limits, then to evaluation and agents, to narratives and counter-narratives, and finally to governance, productivity, and future research. The analysis finds that scaling laws can still predict training loss in stable settings, but real-world capability often improves in jumps rather than smooth gains, and these gains correlate only weakly with perplexity. Public benchmarks now behave like short-lived public goods: they are easily contaminated and shaped by Goodhart pressures. Inference-time reasoning can raise accuracy on some tasks, yet it does not reliably reduce hallucinations and can even make wrong answers sound more convincing, which weakens the idea that more compute per answer yields trustworthy autonomy. Singularity forecasts also face bottlenecks: software engineering, because architecture, verification, and maintenance are complex; trust, as synthetic content floods the web and degrades confidence in text; and physical limits, especially grid capacity and the slow pace of infrastructure build-out. The paper argues that peak hype may arrive before peak impact: even if scaling slows, adoption will still take years. Governance should therefore focus on measurable precaution, auditability, competition, procurement tools, and plural infrastructures for global equity. Future research should prioritise process supervision, human-AI epistemics, and an energy–intelligence exchange rate.
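The abstract's claim that "scaling laws can still predict training loss in stable settings" refers to power-law fits of the form L(N) ≈ a·N^(−b), where N is model size. A minimal sketch of such a fit and extrapolation, with all numbers chosen purely for illustration (they are assumptions, not values from the paper):

```python
import math

# Hypothetical loss measurements at several model sizes N (parameters),
# generated here from an assumed power law L(N) = a * N**(-b).
a_true, b_true = 25.0, 0.07
sizes = [1e8, 3e8, 1e9, 3e9, 1e10]
losses = [a_true * n ** (-b_true) for n in sizes]

# Fit the exponent by ordinary least squares in log-log space:
# log L = log a - b * log N is linear in log N.
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_fit = -slope
a_fit = math.exp(my - slope * mx)

# Extrapolate the fitted law to a larger, unseen model size.
pred = a_fit * (1e11) ** (-b_fit)
print(b_fit, pred)
```

The paper's point survives this sketch: the fit recovers the exponent exactly because the data are noiseless and the regime is "stable"; the reviewed evidence concerns where such extrapolations of loss stop tracking downstream capability.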



Topics

Ethics and Social Impacts of AI · Innovation, Sustainability, Human-Machine Systems · Artificial Intelligence in Healthcare and Education