OpenAlex · Updated hourly · Last updated: 4 May 2026, 10:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate”

2024 · 1 citation · Open Access
Open full text at the publisher

Citations: 1 · Authors: 4 · Year: 2024

Abstract

This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. We assess the performance of three LLMs and one open-source LM in Question-Answering (QA) and evaluation tasks using the TriviaQA (Joshi et al., 2017) dataset. Results indicate a significant disparity, with LLMs exhibiting lower performance in evaluation tasks compared to generation tasks. Intriguingly, we discover instances of unfaithful evaluation where models accurately evaluate answers in areas where they lack competence, underscoring the need to examine the faithfulness and trustworthiness of LLMs as evaluators. This study contributes to the understanding of "the Generative AI Paradox" (West et al., 2023), highlighting a need to explore the correlation between generative excellence and evaluation proficiency, and the necessity to scrutinize the faithfulness aspect in model evaluations.
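The abstract contrasts the same models' behaviour as answer generators and as answer evaluators on TriviaQA. The sketch below is illustrative only and is not the authors' code: query_model is a hypothetical placeholder for whichever LLM API is used, and the prompts are assumed wordings, not the paper's.

# Illustrative sketch (not from the paper): query the same model once as a
# generator and once as an evaluator on a TriviaQA-style question.
def query_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with a real call to the model under test.
    return "<model output>"

def generation_prompt(question: str) -> str:
    # Generation task: the model must produce the answer itself.
    return f"Answer the trivia question concisely.\nQuestion: {question}\nAnswer:"

def evaluation_prompt(question: str, candidate: str, reference: str) -> str:
    # Evaluation task: the model judges a given answer against the gold answer.
    return (
        "Is the candidate answer to the question correct?\n"
        f"Question: {question}\n"
        f"Candidate answer: {candidate}\n"
        f"Gold answer: {reference}\n"
        "Reply with 'correct' or 'incorrect'."
    )

question = "Which scientist proposed the theory of general relativity?"
gold = "Albert Einstein"

generated = query_model(generation_prompt(question))                 # generation side
verdict = query_model(evaluation_prompt(question, generated, gold))  # evaluation side
print(generated, verdict)

Scoring the generated answers and the verdicts separately, item by item, is the kind of comparison that can expose the generation-versus-evaluation gap and the unfaithful-evaluation cases the abstract describes.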

Topics

Impact of AI and Big Data on Business and Society · Artificial Intelligence in Healthcare and Education