OpenAlex · Updated hourly · Last updated: 15.03.2026, 02:28

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Breaking the Turing Test: Testing the relevance of the Turing Test against modern LLMs

2026 · 0 citations · International Journal for Research in Engineering Application & Management · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

The Turing Test has long served as a benchmark for evaluating whether machines can exhibit human-like intelligence through conversation. However, the rapid advancement of large language models (LLMs) with billions of parameters, trained on vast textual datasets, raises fundamental questions about the continued relevance of this test. In this study, we examine whether the Turing Test remains a meaningful measure of intelligence in the era of generative AI. Using exclusively existing datasets and peer-reviewed experimental results, this paper analyzes documented Turing Test evaluations comparing humans with modern LLMs under varying conditions. The analysis focuses on the effects of model scale, prompt engineering, sampling temperature, and modified test structures on human–AI indistinguishability. Results indicate that state-of-the-art LLMs can pass classical Turing Tests when optimized through persona conditioning and controlled randomness, in some cases being judged human more frequently than actual human participants. However, this success is fragile: extended conversations, expert evaluators, and adversarial testing conditions significantly reduce AI pass rates.
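The "sampling temperature" and "controlled randomness" mentioned in the abstract refer to temperature scaling of a model's output distribution. A minimal sketch in plain Python (illustrative only, not from the paper): lower temperatures concentrate probability on the most likely token, producing more deterministic output, while higher temperatures flatten the distribution and yield more varied, less predictable text.

```python
import math

def temperature_softmax(logits, temperature=1.0):
    """Convert raw logits to token probabilities at a given temperature."""
    # Scale logits by 1/T: T < 1 sharpens the distribution (more deterministic),
    # T > 1 flattens it (more varied output).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
low_t = temperature_softmax(logits, temperature=0.2)
high_t = temperature_softmax(logits, temperature=2.0)
# At T=0.2 the top token dominates; at T=2.0 probability is spread more evenly.
```

In Turing-style evaluations, tuning this single parameter changes how repetitive or "mechanical" a model's replies feel, which is one reason it appears among the factors the study analyzes.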


Topics

Ethics and Social Impacts of AI · AI in Service Interactions · Artificial Intelligence in Healthcare and Education