This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Leveraging a large language model for error analysis-based automatic feedback in interpreter training
Citations: 0
Authors: 2
Year: 2026
Abstract
Feedback enables learners to improve performance and teachers to refine instruction. With advances in large language models (LLMs), automatic feedback has emerged as an efficient and innovative complement to traditional sources such as teacher, peer, and self-feedback. This study explores the integration of error analysis–based feedback generated by ChatGPT-4o into Chinese–Portuguese interpreter training. The model was prompted to detect and explain interpreting errors in aligned sentence pairs and to offer reference translations. We then evaluated the accuracy of these feedback components and their perceived usefulness through a questionnaire administered to two groups of stakeholders: interpreting teachers (as feedback providers) and interpreting trainees (as feedback users). Findings indicated that, for the test set of sentences used, the LLM-generated feedback was rated as high quality, and both evaluator groups expressed favorable views on its usefulness in interpreter training. These results provide preliminary evidence that LLM-based feedback can serve as a valuable complement to human feedback in pedagogical contexts.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations