OpenAlex · Updated hourly · Last updated: 20.04.2026, 23:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Ballad of LLM Agents: Philosophical Reasoning for Chemistry

2026 · 0 citations · ChemRxiv · Open Access

0 Citations · 7 Authors · Year: 2026

Abstract

Large language models (LLMs) show remarkable potential for scientific reasoning but often produce unreliable or scientifically unactionable outputs when faced with multi-step logic, domain grounding, and interpretability challenges, especially in complex fields like chemistry and materials science. Here, we introduce a framework of philosophical reasoning agents, inspired by canonical thinkers such as Socrates, Descartes, Kant, and Hume, to guide LLM behavior via structured prompt engineering. These agents embody distinct reasoning paradigms (dialectical inquiry, deductive logic, rule-based judgment, and empirical validation) and are evaluated across multiple chemistry subdomains (such as physical, organic, and general chemistry) using the ChemBench benchmark. Our agentic prompting approach yields substantial accuracy gains on open-ended numerical chemistry questions, with up to +18.4% for GPT-4o using the Hume agent, +14.1% for GPT-5 with Kant, and +11.7% for GPT-5.1 with Descartes, relative to zero-shot baselines. Beyond accuracy, we reveal distinct "reasoning signatures" across models, reflecting latent epistemic biases in how LLMs approach scientific problem solving. These findings demonstrate that embedding philosophy-of-science principles into multi-agent frameworks can improve accuracy and produce interpretable, adaptive, and domain-aligned scientific LLMs.
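The abstract describes wrapping chemistry questions in persona-specific prompts that encode each philosopher's reasoning paradigm. A minimal sketch of what such structured prompt construction could look like is shown below; the agent names and paradigms follow the abstract, but the prompt wording and the `build_agent_prompt` helper are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of persona-style agentic prompting as described in the
# abstract. The paradigm descriptions mirror the four paradigms the paper
# names; the exact prompt text here is an assumption for illustration.

PHILOSOPHER_AGENTS = {
    "Socrates": "Use dialectical inquiry: question each assumption before answering.",
    "Descartes": "Use deductive logic: derive the answer step by step from first principles.",
    "Kant": "Use rule-based judgment: state the governing laws, then apply them.",
    "Hume": "Use empirical validation: check each intermediate value against known data.",
}

def build_agent_prompt(agent: str, question: str) -> str:
    """Wrap a chemistry question in the chosen agent's reasoning paradigm."""
    paradigm = PHILOSOPHER_AGENTS[agent]
    return (
        f"You are a reasoning agent in the style of {agent}.\n"
        f"Reasoning paradigm: {paradigm}\n\n"
        f"Question: {question}\n"
        "Show your reasoning, then give the final numerical answer."
    )

# Example: build the Hume-agent prompt for an open-ended numerical question.
prompt = build_agent_prompt("Hume", "What is the pH of 0.01 M HCl at 25 °C?")
print(prompt)
```

The resulting string would then be sent as the system/user prompt to the evaluated model (e.g. GPT-4o) in place of a plain zero-shot question.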
