OpenAlex · Updated hourly · Last updated: 24.03.2026, 20:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Synthetic Epistemological Framework for Evaluating and Advancing Large Language Models: The Case for Arabic and Chinese as Architectures for Efficient and Faithful AI

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The rapid advancement of Large Language Models (LLMs) has exposed fundamental limitations in current artificial intelligence architectures, including persistent hallucinations, high energy consumption, and the opacity of the "black box" problem. This paper argues that these challenges are not merely technical shortcomings but symptoms of a deeper epistemological error: the reduction of mind to a single processing level. Drawing on Synthetic Epistemology and the Hierarchical Composite Mind Model (HCIM)—which posits four functionally distinct and empirically justified levels (Abstract Mind AM, Composite Mind CM, Conscious Agent CA, and Supreme Mind SM)—we present a comprehensive analytical framework for evaluating contemporary AI systems. We extend the Structural Transition Test (STT) with a fifth criterion, Recursive Self-Improvement Capacity, to better align with digital economy scalability requirements. We conduct a systematic comparative analysis of GPT-4, Claude, Gemini, and DeepSeek against eight synthetic criteria. Our analysis reveals that all current models operate exclusively at the AM/CM levels, lacking the CA/SM integration necessary for genuine understanding. We further provide a rigorous justification for the four-level architecture grounded in functional necessity and neural evidence. Furthermore, we demonstrate that Arabic and Chinese, due to their morphological depth and systemic logic, provide structural scaffolding that reduces hallucination rates and improves computational efficiency, including a quantitative analysis of Arabic's information density and its effect on token-processing complexity at the AM level. Empirical evidence from AraHalluEval shows that Arabic-specific models achieve statistically significantly fewer hallucinations (p < 0.01), and MorphBPE tokenization improves morphological consistency F1 from 0.00 to 0.66.
These findings have direct implications for building reliable AI systems for the digital economy, blockchain-based applications, and sustainable AI infrastructure.

CCS Concepts: • Computing methodologies → Natural language processing; Artificial intelligence; Machine learning. • Theory of computation → Models of computation; Semantics and reasoning.

Additional Keywords and Phrases: Synthetic Epistemology, Large Language Models, Arabic NLP, Chinese NLP, Hallucination Reduction, Hierarchical Composite Mind, AI Alignment, Digital Economy, MorphBPE, AraHalluEval, Recursive Self-Improvement, Token Complexity.

ACM Reference Format: El Khalil Baroudi. 2026. A Synthetic Epistemological Framework for Evaluating and Advancing Large Language Models: The Case for Arabic and Chinese as Architectures for Efficient and Faithful AI. In 2026 International Conference on Artificial Intelligence Systems, Blockchain and Digital Economy (DEAI 2026). ACM, New York, NY, USA.
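The abstract reports MorphBPE lifting morphological consistency F1 from 0.00 to 0.66. As a toy illustration only (not the paper's method, data, or the actual MorphBPE algorithm), the sketch below computes a boundary-level F1 between a gold morphological segmentation and a tokenizer's splits; the example word, morpheme boundaries, and resulting scores are invented for illustration.

```python
def boundary_f1(gold, pred):
    """F1 over segmentation boundaries (character offsets where a morpheme ends)."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)          # boundaries both segmentations agree on
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical segmentations of one word into morphemes.
gold_boundaries  = [2, 7, 8]   # gold morphological analysis
bpe_boundaries   = [3, 6]      # frequency-driven split ignoring morphology
morph_boundaries = [2, 7, 8]   # morphology-aware split matching the gold

print(boundary_f1(gold_boundaries, bpe_boundaries))    # 0.0 (no shared boundaries)
print(boundary_f1(gold_boundaries, morph_boundaries))  # 1.0 (perfect agreement)
```

A purely frequency-driven subword split can score 0.0 on this metric even while producing plausible-looking tokens, which is the kind of gap the reported 0.00 → 0.66 improvement addresses.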


Topics

Big Data and Digital Economy · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI