Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
From Feature To Paradigm: Deep Learning In Machine Translation
Authors: Marta R. Costa-jussà
JAIR 2018 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This manuscript focuses on collecting and describing research on introducing deep learning into MT. Unlike previous surveys (Zhang & Zong, 2015), we do not detail deep learning techniques; instead, we provide only a brief description to make this manuscript self-contained. We center our attention on: overviewing the integration of deep learning in MT and reporting the MT aspects that have been improved with the different types of neural networks; detailing the new neural MT architecture, citing its foundational works as well as discussing recent advances that address challenging aspects of the neural MT architecture; depicting an analysis of the strengths and weaknesses of deep learning in MT. |
| Researcher Affiliation | Academia | Marta R. Costa-jussà EMAIL TALP Research Center, Universitat Politècnica de Catalunya, 08034 Barcelona |
| Pseudocode | No | The paper describes various neural network architectures (Feed-Forward, Recurrent, Encoder-Decoder) and their applications in MT but does not include any explicit pseudocode or algorithm blocks. It uses natural language descriptions and diagrams like Figure 1. |
| Open Source Code | No | The paper is a survey of deep learning in Machine Translation. It does not present new methodology for which its own code would be released, nor does it provide any links to source code repositories for the work described within this manuscript. |
| Open Datasets | No | The paper is a survey and does not conduct its own experiments on specific datasets. While it references evaluation campaigns such as WMT 2015 and WMT 2016, it does not provide concrete access information or citations for datasets used in this paper's own research. |
| Dataset Splits | No | The paper is a survey and does not describe any specific dataset splits for its own experimental work. It discusses findings from other research that might have used dataset splits, but this paper does not provide such details. |
| Hardware Specification | No | The paper is a survey and does not report on specific hardware used for its own experimental work. It makes general statements about the computational requirements of deep learning, such as the need for GPUs, but does not provide specific model numbers or configurations. |
| Software Dependencies | No | The paper is a survey and does not present any specific software dependencies or version numbers for its own experimental work. It discusses deep learning techniques and frameworks in general terms. |
| Experiment Setup | No | The paper is a survey of deep learning in Machine Translation and does not present specific experimental setup details, hyperparameter values, or training configurations for its own research. |