OpenAlex · Updated hourly · Last updated: 17.03.2026, 00:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

SCSF: A Self-Checking and Selection Framework for Machine Translation based on Large Language Models

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

Large language models (LLMs) demonstrate tremendous potential in the field of machine translation, but these models currently exhibit issues such as unstable output, semantic drift, and even omission or repetition of information during translation. How to properly and efficiently utilize and evaluate their translation capabilities has therefore become a key challenge. This paper presents an innovative Self-Checking and Selection Framework (SCSF) for machine translation tasks based on LLMs [1]. By designing dedicated prompting strategies, the framework enables models to assess their own translation quality and select the optimal result from multiple translation attempts. We conducted extensive experiments on five mainstream LLMs (including GPT-4 and Claude-3) [2] to test the framework's performance across multiple language pairs. The experimental results show that: (1) having large language models self-evaluate and select better translations improves translation performance across multiple translation attempts, and the more generations produced, the better the translation performance becomes; (2) introducing more refined and detailed prompts makes the large language model's evaluation more accurate and precise. This research provides a practical solution for enhancing the application of LLMs in machine translation.
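The generate-then-self-check loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt wordings, the 0–100 scoring rubric, the `scsf_select` name, and the `toy_model` stand-in (which replaces a real LLM API call) are all assumptions made for the example.

```python
import re
from itertools import cycle

def scsf_select(model, source, n=5):
    """Generate n candidate translations, ask the same model to score
    each candidate, and return the highest-scoring one.

    `model` is any callable mapping a prompt string to a reply string.
    The scoring prompt is a hypothetical stand-in for the paper's
    refined prompting strategy.
    """
    candidates = [model(f"Translate into English:\n{source}") for _ in range(n)]
    scored = []
    for cand in candidates:
        reply = model(
            "Score the following translation from 0 to 100 for adequacy "
            "and fluency. Reply with a number only.\n"
            f"Source: {source}\nTranslation: {cand}"
        )
        # Extract the first number from the model's reply; score 0 if none.
        match = re.search(r"\d+(?:\.\d+)?", reply)
        score = float(match.group()) if match else 0.0
        scored.append((score, cand))
    return max(scored, key=lambda pair: pair[0])[1]

# Deterministic toy stand-in for an LLM, for demonstration only:
# it cycles through three fixed candidates and scores the punctuated
# one highest when asked to evaluate.
_candidates = cycle(["Hello world", "Hello, world!", "Greetings world"])

def toy_model(prompt):
    if prompt.startswith("Translate"):
        return next(_candidates)
    return "90" if "Hello, world!" in prompt else "50"

best = scsf_select(toy_model, "Hallo Welt", n=5)
```

With the toy model, raising `n` increases the chance that a high-scoring candidate appears in the pool, mirroring the paper's finding that more generations improve final translation quality.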

Topics

Natural Language Processing Techniques · Topic Modeling · Artificial Intelligence in Healthcare and Education