This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Scaling Laws, Foundation Models, and the AI Singularity: A Critical Appraisal of 2023–2025 Evidence
Citations: 0
Authors: 2
Year: 2026
Abstract
This paper critically reviews 2023–2025 evidence on scaling laws and foundation models, and examines claims about an AI Singularity, understood here as recursive self-improvement producing sudden capability jumps rather than mere broad automation. It asks what scaling results genuinely support, what they do not, and how technical findings harden into institutional strategies and long-term commitments. The method is a narrative synthesis of peer-reviewed studies, technical reports, and governance frameworks, proceeding from concepts and history to technical limits, then to evaluation and agents, to narratives and counter-narratives, and finally to governance, productivity, and future research. The analysis finds that scaling laws can still predict training loss in stable settings, but real-world capability often improves in jumps rather than smooth gains, and these gains correlate only weakly with perplexity. Public benchmarks now behave like short-lived public goods: they are easily contaminated and shaped by Goodhart pressures. Inference-time reasoning can raise accuracy on some tasks, yet it does not reliably reduce hallucinations and can even make wrong answers sound more convincing, which weakens the idea that more compute per answer yields trustworthy autonomy. Singularity forecasts also face bottlenecks: software engineering, because architecture, verification, and maintenance are complex; trust, as synthetic content floods the web and degrades confidence in text; and physical limits, especially grid capacity and the slow pace of infrastructure build-out. The paper argues that peak hype may arrive before peak impact: even if scaling slows, adoption will still take years. Governance should focus on measurable precaution, auditability, competition, procurement tools, and plural infrastructures for global equity. Future research should prioritise process supervision, human-AI epistemics, and an energy–intelligence exchange rate.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations