This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Understanding AI interpreting in context: A comprehension-based evaluation of human vs. machine-generated interpretations in a real-world setting.
Citations: 0
Authors: 1
Year: 2026
Abstract
The rise of AI in the interpreting industry poses pressing questions about the sustainability of interpreting as a profession. While commercial platforms promise real-time multilingual communication at scale, their functional effectiveness in high-stakes professional contexts remains underexplored. This study presents a comprehension-based evaluation comparing human and AI interpreting of a climate-related press conference. Following Reithofer’s (2013, 2014) methodology, 56 journalists were divided into two groups: one listening to professional human interpretation and the other to a cutting-edge AI service (KUDO AI Speech Translator). Results showed that the human group achieved higher comprehension scores (mean 4.5/10) than the AI group (mean 3.7/10), with the latter exhibiting a 17.9% “Don’t Know” rate. Qualitative feedback highlighted that AI’s lack of prosodic salience increased cognitive load, hindering deep information synthesis. These findings suggest that human intervention remains essential for ensuring semantic adequacy and effective information transfer in professional journalistic settings.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations