Prompting Neural Machine Translation with Translation Memories

Authors: Abudurexiti Reheman, Tao Zhou, Yingfeng Luo, Di Yang, Tong Xiao, Jingbo Zhu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on several datasets demonstrate that our system significantly outperforms strong baselines. To verify the validity of our proposed method, we conducted several experiments on the TM-specialized translation task and the domain adaptation task. We also put our approach into practice on a commercial NMT system to assess its usability in a practical setting. Finally, we investigated the impact of the NMT model, TM similarity, and input sentence length on translation quality. (A common definition of the TM similarity score is sketched after the table.)
Researcher Affiliation | Collaboration | Abudurexiti Reheman (1), Tao Zhou (1), Yingfeng Luo (1), Di Yang (2), Tong Xiao (1,2), Jingbo Zhu (1,2)*; (1) School of Computer Science and Engineering, Northeastern University, Shenyang, China; (2) NiuTrans Research, Shenyang, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, source code for the described methodology.
Open Datasets | Yes | For TM-specialized translation tasks, we evaluated our method on two datasets: 1) DGT-TM, the entire body of European legislation in 22 European languages, on German-English in both directions (En-De and De-En), and 2) the United Nations Parallel Corpus (UNPC), consisting of United Nations General Assembly resolutions with translations in the six official languages, on English-Chinese (En-Zh), Russian-Chinese (Ru-Zh), and French-Chinese (Fr-Zh). These two datasets make it relatively easy to retrieve TM sentences with a high degree of similarity. For the test set and TM database, we cleaned the above corpora first, then randomly selected 3,000 sentence pairs for the test set, while the remaining corpora were used as the TM database.
Dataset Splits | Yes | For the test set and TM database, we cleaned the above corpora first, then randomly selected 3,000 sentence pairs for the test set, while the remaining corpora were used as the TM database. (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not specify hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions software tools such as the NiuTrans word segmentation tool (Xiao et al. 2012), the Moses toolkit (Koehn et al. 2007), Apache Lucene (Białecki, Muir, and Ingersoll 2012), Mask-Align (Chen, Sun, and Liu 2021), and BPE (Sennrich, Haddow, and Birch 2016), but does not provide version numbers for these or any other libraries.
Experiment Setup | No | The paper describes the general approach and some dataset preparation steps but lacks specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) and detailed training configurations for the NMT models. It references pre-trained models but does not detail their fine-tuning setup.
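
The Research Type row above quotes the paper's study of how TM similarity affects translation quality, but the score itself is not defined in this report. A common choice in TM-augmented NMT, used here as an assumption rather than the paper's confirmed definition, is the fuzzy-match score: one minus the word-level edit distance between the input sentence and a TM source sentence, normalized by the length of the longer of the two. A minimal Python sketch:

    def fuzzy_match_score(src_tokens, tm_tokens):
        # Fuzzy-match similarity: 1 - edit_distance / max(lengths).
        # Word-level Levenshtein distance via standard dynamic programming.
        m, n = len(src_tokens), len(tm_tokens)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i  # delete all i tokens
        for j in range(n + 1):
            dp[0][j] = j  # insert all j tokens
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if src_tokens[i - 1] == tm_tokens[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution
        return 1.0 - dp[m][n] / max(m, n, 1)

    # Identical sentences score 1.0; one substitution in three tokens gives 2/3.
    print(fuzzy_match_score("the court approved".split(),
                            "the court rejected".split()))  # ~0.667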
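
The Dataset Splits row quotes the paper's procedure: clean the corpora, randomly hold out 3,000 sentence pairs as the test set, and use everything else as the TM database. Below is a minimal sketch of that split, assuming the cleaned corpus is already loaded as a list of (source, target) pairs; the function name and random seed are hypothetical, since the paper reports neither:

    import random

    def split_test_and_tm(pairs, test_size=3000, seed=1234):
        # Hold out `test_size` randomly chosen sentence pairs for testing;
        # the remaining pairs form the translation-memory (TM) database.
        rng = random.Random(seed)  # seed is an assumption; the paper gives none
        shuffled = list(pairs)
        rng.shuffle(shuffled)
        return shuffled[:test_size], shuffled[test_size:]

    # Usage: test_set, tm_database = split_test_and_tm(cleaned_pairs)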