OpenAlex · Updated hourly · Last updated: 19.03.2026, 21:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

LLM-Based Evaluation of Utterances with Implicature Understanding

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 5 · Year: 2025

Abstract

Although Large Language Models (LLMs) have recently shown remarkable performance on many language-comprehension tasks, they struggle in communicative contexts involving implicature. In our previous study, we proposed LLM-based agents that integrate LLMs with cognitive models. In three dialogue scenarios, these agents generated appropriate utterances as if inferring the speaker's intentions (i.e., implicature). Investigating the agents' performance further requires examining their utterances across a large number of scenarios, and it is also important to evaluate those utterances consistently. This study therefore proposes a method in which LLMs evaluate the agents' generated utterances in the same way as human evaluators. Using our pilot prompt, we demonstrated that the LLMs' evaluations were similar to those of human evaluators.
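The claim that LLM and human evaluations "were similar" can be checked quantitatively, e.g., by correlating the two sets of ratings. The sketch below is a minimal illustration of that comparison, not the authors' actual procedure: the score lists and the 5-point scale are invented for demonstration, and the correlation is computed with the standard library only.

```python
# Hypothetical sketch: an LLM "judge" rates agent utterances on the same
# scale as human evaluators, and agreement is measured by Pearson correlation.
# All scores below are illustrative, not data from the paper.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented 5-point appropriateness ratings for six agent utterances.
human_scores = [5, 4, 2, 3, 5, 1]
llm_scores = [4, 4, 2, 3, 5, 2]

r = pearson(human_scores, llm_scores)
print(f"Pearson r = {r:.2f}")  # a value near 1 indicates close LLM/human agreement
```

In practice one would also report a rank correlation (e.g., Spearman's rho) or inter-rater agreement statistics, since Likert-style ratings are ordinal.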

Related Works

Authors

Institutions

Topics

Topic Modeling · Multimodal Machine Learning Applications · Artificial Intelligence in Healthcare and Education