Assessing Translation Ability through Vocabulary Ability Assessment

Authors: Yo Ehara, Yukino Baba, Masao Utiyama, Eiichiro Sumita

IJCAI 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conducted two experiments indicating that the proposed method accurately estimates translation ability. |
| Researcher Affiliation | Academia | Yo Ehara (AIST, y-ehara@aist.go.jp); Yukino Baba (Kyoto University, baba@i.kyoto-u.ac.jp); Masao Utiyama and Eiichiro Sumita (NICT, {mutiyama,eiichiro.sumita}@nict.go.jp) |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper provides no statement about, or link to, open-source code for the described method. |
| Open Datasets | Yes | We used the Japanese-English Bilingual Corpus of Wikipedia's Kyoto Articles. We randomly selected 104 English sentences that have more than ten words and their corresponding translations given in Japanese. |
| Dataset Splits | Yes | We used five-fold nested cross validation throughout the experiments. All the models have hyper-parameters. We conducted grid search over the validation sets for tuning the hyper-parameters. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided. |
| Software Dependencies | No | The Python NLTK library (http://nltk.org/) was used to calculate the kappa coefficient, and an English language model was built using the standard Moses toolkit procedure on the News-Commentary Corpus. No version numbers are given for NLTK or Moses. |
| Experiment Setup | No | The hyper-parameters of all models were chosen from {10^-3.0, 10^-2.4, 10^-1.8, 10^-1.2, 10^-0.6, 10^0.0, 10^0.6, 10^1.2, 10^1.8, 10^2.4, 10^3.0}. This specifies the tuning grid but not the final values used in the best model. |
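The tuning protocol described under Dataset Splits and Experiment Setup (an 11-point log-spaced grid searched inside five-fold nested cross validation) can be sketched as follows. This is a minimal illustration only: the paper's model and features are not public, so a closed-form ridge regression on synthetic data stands in for them, and the helper names are hypothetical.

```python
import numpy as np

# 11-point log-spaced hyper-parameter grid, 10^-3.0 .. 10^3.0 in steps of 0.6.
grid = np.logspace(-3.0, 3.0, num=11)

# Synthetic stand-in data; the paper's actual features are not available.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression (placeholder for the paper's model)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def folds(n, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross validation."""
    idx = np.arange(n)
    for part in np.array_split(idx, k):
        yield np.setdiff1d(idx, part), part

# Nested CV: the inner 5-fold loop selects alpha on validation folds,
# the outer 5-fold loop scores the selected model on held-out folds.
outer_scores = []
for tr, te in folds(len(y)):
    best = min(grid, key=lambda a: np.mean(
        [mse(ridge_fit(X[tr][itr], y[tr][itr], a), X[tr][ite], y[tr][ite])
         for itr, ite in folds(len(tr))]))
    outer_scores.append(mse(ridge_fit(X[tr], y[tr], best), X[te], y[te]))
print(np.mean(outer_scores))
```

The inner loop never sees the outer test fold, which is what makes the reported score an unbiased estimate of the tuned model's performance.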
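The Software Dependencies row notes that NLTK was used to compute the kappa coefficient. The underlying Cohen's kappa calculation can be sketched in plain Python; the annotator labels below are a toy example, since the paper's annotation data is not available.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: probability both annotators pick the same label.
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2
    return (po - pe) / (1 - pe)

# Toy example: two annotators rating ten translations as good/bad.
r1 = ["good", "good", "bad", "good", "bad", "good", "bad", "bad", "good", "good"]
r2 = ["good", "bad", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
print(cohen_kappa(r1, r2))  # ≈ 0.583
```

A kappa near 0 means agreement no better than chance, while values toward 1 indicate strong agreement beyond chance.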