This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate”
Citations: 1 · Authors: 4 · Year: 2024
Abstract
This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. We assess the performance of three LLMs and one open-source LM in Question-Answering (QA) and evaluation tasks using the TriviaQA (Joshi et al., 2017) dataset. Results indicate a significant disparity, with LLMs exhibiting lower performance in evaluation tasks compared to generation tasks. Intriguingly, we discover instances of unfaithful evaluation where models accurately evaluate answers in areas where they lack competence, underscoring the need to examine the faithfulness and trustworthiness of LLMs as evaluators. This study contributes to the understanding of "the Generative AI Paradox" (West et al., 2023), highlighting a need to explore the correlation between generative excellence and evaluation proficiency, and the necessity to scrutinize the faithfulness aspect in model evaluations.