OpenAlex · Updated hourly · Last updated: 22.04.2026, 23:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Reasoning in Large Language Models: From Chain-of-Thought to Massively Decomposed Agentic Processes

2025 · 0 citations · Preprints.org · Open Access
Open full text at the publisher

Citations: 0 · Authors: 8 · Year: 2025

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in reasoning tasks, yet their ability to execute long-horizon processes with sustained accuracy remains a fundamental challenge. This survey provides a comprehensive examination of reasoning in LLMs, spanning from foundational prompting techniques to emerging massively decomposed agentic processes. We first establish a taxonomy that categorizes reasoning approaches into three primary paradigms: prompting-based methods including Chain-of-Thought, Tree of Thoughts, and Graph of Thoughts; training-based methods encompassing reinforcement learning from human feedback, process reward models, and self-taught reasoning; and multi-agent systems that leverage decomposition and collaborative error correction. We analyze the persistent error rate problem that prevents LLMs from scaling to extended sequential tasks, where recent experiments demonstrate performance collapse beyond a few hundred dependent steps. We then examine MAKER, a breakthrough framework that achieves over one million LLM steps with zero errors through extreme task decomposition combined with multi-agent voting schemes. Our analysis reveals that massively decomposed agentic processes represent a promising paradigm shift from relying solely on improving individual model capabilities toward orchestrating ensembles of focused microagents. We synthesize empirical findings across major benchmarks including GSM8K, MATH, MMLU, and PlanBench, and identify critical open challenges including compositional generalization, error propagation mitigation, and the computational costs of inference-time scaling. This survey aims to provide researchers and practitioners with a unified perspective on the landscape of LLM reasoning and illuminate pathways toward solving problems at organizational and societal scales.
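The abstract's two central quantitative claims, that error rates compound over dependent steps and that per-step voting can suppress them, can be illustrated with a toy calculation. The sketch below assumes independent per-step errors and a generic k-agent majority vote; this is an illustrative model, not the actual voting scheme or error statistics reported for MAKER.

```python
import math

def chain_success(p: float, n: int) -> float:
    """Probability that n dependent steps all succeed, given per-step
    accuracy p and independent errors: any single failure sinks the chain."""
    return p ** n

def majority_vote_accuracy(p: float, k: int) -> float:
    """Effective per-step accuracy when k independent agents (each accurate
    with probability p) vote and the majority answer is taken. k must be odd
    so ties cannot occur."""
    assert k % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(math.comb(k, i) * p**i * (1 - p) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

# A single 99%-accurate step collapses over a 1,000-step horizon:
solo = chain_success(0.99, 1000)            # ~4.3e-5
# Majority voting among 5 such agents per microstep lifts per-step accuracy:
boosted = majority_vote_accuracy(0.99, 5)   # ~0.99999
ensemble = chain_success(boosted, 1000)     # ~0.99
```

Under these toy assumptions, the same base model that almost surely fails a 1,000-step chain completes it about 99% of the time once each microstep is checked by a small voting ensemble, which is the intuition behind shifting effort from stronger individual models to orchestrated microagents.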

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)