Maximum Expected Likelihood Estimation for Zero-resource Neural Machine Translation

Authors: Hao Zheng, Yong Cheng, Yang Liu

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two zero-resource language pairs show that the proposed approach yields substantial gains over baseline methods.
Researcher Affiliation | Academia | Beihang University, Beijing, China; Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China; State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China
Pseudocode | No | The paper describes its methods in narrative text and mathematical formulations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | We use Spanish-English, German-English and English-French parallel corpora from the Europarl dataset. ... All sentences are tokenized by the tokenize.perl script [Koehn et al., 2007].
Dataset Splits | Yes | The shared task 2006 datasets are used as development and test sets.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | We implement all methods on top of the state-of-the-art open-source NMT system GROUNDHOG [Bahdanau et al., 2014].
Experiment Setup | Yes | All neural translation models use the default setting of network hyper-parameters of GROUNDHOG. ... Each time the sentence z^(n) is selected in a mini-batch, we only randomly sample one sentence x in consideration of the limited GPU memory. ... For the single embedding approach, we find the probability weight P(x|z; θ̂_{z→x}) is usually very small, making the training extremely slow. Therefore, in practice we instead take its q-th root, (P(x|z; θ̂_{z→x}))^{1/q}, and set q = 10 for speed-up. (The re-weighting trick is illustrated in the sketch after the table.)
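The q-th-root re-weighting quoted in the Experiment Setup row can be illustrated with a minimal Python sketch. This is an illustration under assumptions, not code from the paper or from GROUNDHOG: the helper names (qth_root_weight, build_weighted_minibatch) and the stand-in sampling and scoring callables are hypothetical.

import math

def qth_root_weight(log_prob_x_given_z, q=10):
    # Turn log P(x|z; theta_hat_{z->x}) into the q-th root of the probability,
    # i.e. exp(log_prob / q); very small probabilities then no longer drive
    # the example weight (and hence the training signal) toward zero.
    return math.exp(log_prob_x_given_z / q)

def build_weighted_minibatch(z_sentences, sample_x_given_z, log_prob_x_given_z, q=10):
    # For each sentence z selected in the mini-batch, draw a single sentence x
    # from the z->x model (one sample per z, to limit GPU memory use) and
    # attach the q-th-root weight used to scale that example's loss.
    batch = []
    for z in z_sentences:
        x = sample_x_given_z(z)                        # x ~ P(x|z; theta_hat)
        weight = qth_root_weight(log_prob_x_given_z(x, z), q)
        batch.append((z, x, weight))
    return batch

if __name__ == "__main__":
    # Toy stand-ins for the sampling and scoring models.
    zs = ["ein beispiel .", "noch ein satz ."]
    fake_sampler = lambda z: "a sampled sentence ."
    fake_log_prob = lambda x, z: -50.0                 # a very small probability
    for z, x, w in build_weighted_minibatch(zs, fake_sampler, fake_log_prob):
        print(f"z={z!r}  x={x!r}  weight={w:.4f}")

With q = 10 and log P = -50, the weight becomes exp(-5) ≈ 0.0067 rather than exp(-50), which is the kind of speed-up effect the quoted setup describes.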