OpenAlex · Updated hourly · Last updated: 2026-03-16, 00:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

LLM-Assisted Relevance Assessments: When Should We Ask LLMs for Help?

2025 · 3 citations · 4 authors · Open Access

Abstract

Test collections are information retrieval tools that allow researchers to quickly and easily evaluate ranking algorithms. While test collections have become an integral part of IR research, creating them involves significant manual annotation effort, which often makes the process expensive and time-consuming. Consequently, when the annotation budget is limited, test collections may be too small, which can lead to unstable evaluations. As a cheaper alternative, recent studies have proposed using large language models (LLMs) to replace human assessors entirely. However, while LLM judgments correlate with human judgments to some extent, their predictions are not perfect and often exhibit bias. A complete replacement with LLMs is therefore argued to be too risky and not fully reliable.
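
The degree to which LLM labels agree with human judgments is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. Below is a minimal, self-contained sketch; the labels and the choice of statistic are illustrative assumptions, not data or methodology from the paper.

# Illustrative sketch only: Cohen's kappa between human and LLM relevance
# labels for the same query-document pairs. The labels below are made up
# and are not data from the paper.
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators on the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently
    # according to their own label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

human = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical human relevance labels
llm   = [1, 0, 1, 0, 0, 1, 1, 0]  # hypothetical LLM relevance labels
print(f"kappa = {cohen_kappa(human, llm):.2f}")  # -> kappa = 0.50

A kappa well below 1 on such samples is one concrete way of reading the abstract's observation that LLM predictions only somewhat correlate with human judgments.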


Topics

Artificial Intelligence in Law · Artificial Intelligence in Healthcare and Education · Library Science and Information Systems