OpenAlex · Updated hourly · Last updated: 26 Apr 2026, 21:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Leveraging a large language model for error analysis-based automatic feedback in interpreter training

2026 · 0 citations · Translation Cognition & Behavior · Open Access
Open full text at publisher

0 citations · 2 authors · Year: 2026

Abstract

Feedback enables learners to improve performance and teachers to refine instruction. With advances in large language models (LLMs), automatic feedback has emerged as an efficient and innovative complement to traditional sources such as teacher, peer, and self-feedback. This study explores the integration of error analysis–based feedback generated by ChatGPT-4o into Chinese–Portuguese interpreter training. The model was prompted to detect and explain interpreting errors in aligned sentence pairs and to offer reference translations. We then evaluated the accuracy of these feedback components and the perceived usefulness of the feedback through a questionnaire administered to two groups of stakeholders: interpreting teachers (as feedback providers) and interpreting trainees (as feedback users). Findings indicated that, for the test set of sentences used, the LLM-generated feedback was rated as high quality, and both evaluator cohorts expressed favorable views on its usefulness in interpreter training. These results provide preliminary evidence that LLM-based feedback can serve as a valuable complement to human feedback in pedagogical contexts.
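The workflow the abstract describes (prompting an LLM with aligned sentence pairs to detect and explain errors and to supply a reference translation) could be sketched as follows. This is a hypothetical illustration, not the authors' actual prompt or code: the prompt wording, the error taxonomy, and the `build_feedback_prompt` helper are all assumptions.

```python
def build_feedback_prompt(source_zh: str, interpretation_pt: str) -> str:
    """Assemble an error-analysis prompt for one aligned Chinese-Portuguese
    sentence pair, asking the model for the three feedback components the
    study evaluates: error detection, error explanation, and a reference
    translation. (Illustrative wording only.)"""
    return (
        "You are an assistant for Chinese-Portuguese interpreter training.\n"
        f"Source sentence (Chinese): {source_zh}\n"
        f"Student interpretation (Portuguese): {interpretation_pt}\n"
        "1. Detect any interpreting errors (e.g. omission, addition, "
        "meaning error).\n"
        "2. Briefly explain each detected error.\n"
        "3. Provide a reference translation of the source sentence."
    )

# The assembled prompt would then be sent to the chosen model
# (ChatGPT-4o in the study), e.g. via a chat-completion API call.
prompt = build_feedback_prompt("你好，世界。", "Olá, mundo.")
print(prompt)
```

The point of keeping prompt construction in a separate function is that each aligned pair from a trainee's transcript can be processed independently, and the resulting feedback components can be collected per sentence for later evaluation.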

Topics

Artificial Intelligence in Healthcare and Education · Interpreting and Communication in Healthcare · Text Readability and Simplification