Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey
Authors: Danielle Saunders
JAIR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We survey approaches to domain adaptation for NMT, particularly where a system may need to translate across multiple domains. We divide techniques into those revolving around data selection or generation, model architecture, parameter adaptation procedure, and inference procedure. We finally highlight the benefits of domain adaptation and multidomain adaptation techniques to other lines of NMT research. |
| Researcher Affiliation | Academia | Danielle Saunders, EMAIL, Cambridge University Engineering Department, Cambridge, United Kingdom |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. The methods are described textually. |
| Open Source Code | No | The paper is a survey and does not introduce any new methodology for which code would be released. It contains no statement about providing open-source code for the work described. |
| Open Datasets | No | The paper is a survey of existing research and does not conduct its own experiments on datasets. While it references many datasets used by other researchers (e.g., WMT shared tasks, Paracrawl corpus), it does not provide access information for a dataset used in its own methodology. |
| Dataset Splits | No | The paper is a survey and does not conduct its own experiments. Therefore, it does not provide any training/test/validation dataset splits. |
| Hardware Specification | No | The paper is a survey and does not describe any experimental setup that would require specific hardware. Therefore, no hardware specifications are provided. |
| Software Dependencies | No | The paper is a survey and does not describe any experimental setup that would require specific software dependencies with version numbers. While it refers to various software/models developed by others, it does not specify dependencies for its own work. |
| Experiment Setup | No | The paper is a survey of existing research and does not conduct its own experiments. Therefore, no experimental setup details, hyperparameters, or system-level training settings are provided. |