OpenAlex · Updated hourly · Last updated: 18.03.2026, 07:30

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

ChatGPT: Precision Answer Comparison and Evaluation Model

2026 · 0 citations · Iraqi Journal of Data Science · Open Access

0 citations · 4 authors · Year: 2026

Abstract

Artificial Intelligence (AI) has advanced considerably; among these advances, OpenAI developed the sophisticated conversational model ChatGPT, which supports natural interactions and provides human-like responses to queries across a wide range of topics. It is not infallible, however, and its accuracy depends on the complexity of the queries, the context, and how often the prompts are repeated. This work therefore proposes a new model, the Precision Answer Comparison and Evaluation Model (PACEM), to systematically address such questions and assess ChatGPT's performance. PACEM evaluates the correctness and coherence of ChatGPT's answers across numerous fields, including literature, history, law, ethics, and sports. Through these analyses and comparisons, PACEM provides a detailed understanding of ChatGPT's strengths and weaknesses as a source of reliable information. In addition, it assesses response time, measuring how quickly ChatGPT produces answers relative to real or expected ones. The findings show that ChatGPT's answers are usually substantially accurate and often of higher quality than those written by the user and other alternatives, and that response time generally increases with the complexity or length of the answer. Finally, the study reviews notable takeaways from PACEM's deployment and offers suggestions for future research to address the evolving challenges in AI-driven response assessment.
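The abstract does not specify PACEM's scoring procedure, but the two measurements it describes, comparing a generated answer against a reference answer and recording response time, can be illustrated with a minimal sketch. The token-overlap F1 metric, the `ask` callback, and the function names below are assumptions for illustration, not the paper's actual method.

```python
import time


def token_f1(answer: str, reference: str) -> float:
    """Illustrative accuracy score: token-overlap F1 between a
    generated answer and a reference answer (not PACEM's actual metric)."""
    a = answer.lower().split()
    r = reference.lower().split()
    if not a or not r:
        return 0.0
    # Count how many answer tokens are matched by reference tokens,
    # consuming each reference token at most once.
    ref_counts = {}
    for tok in r:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    common = 0
    for tok in a:
        if ref_counts.get(tok, 0) > 0:
            common += 1
            ref_counts[tok] -= 1
    if common == 0:
        return 0.0
    precision = common / len(a)
    recall = common / len(r)
    return 2 * precision * recall / (precision + recall)


def evaluate(ask, question: str, reference: str) -> dict:
    """Time a model call (`ask` is a hypothetical callable that takes a
    question and returns an answer) and score the answer against the
    reference, mirroring the paper's accuracy-plus-response-time setup."""
    start = time.perf_counter()
    answer = ask(question)
    elapsed = time.perf_counter() - start
    return {"answer": answer, "f1": token_f1(answer, reference), "seconds": elapsed}
```

A usage pass over a question set would call `evaluate` once per question and aggregate the `f1` and `seconds` fields per topic area, roughly matching the per-domain comparison the abstract describes.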

Related works