Tree-structured decoding with doubly-recurrent neural networks
Authors: David Alvarez-Melis, Tommi S. Jaakkola
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs. |
| Researcher Affiliation | Academia | Computer Science and Artificial Intelligence Lab, MIT. {davidam,tommi}@csail.mit.edu |
| Pseudocode | No | The paper describes mathematical formulations and procedural steps for the method, but does not provide structured pseudocode or a clearly labeled algorithm block. A hedged sketch of the doubly-recurrent cell appears after this table. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | The IFTTT dataset (Quirk et al., 2015) is a simple testbed for language-to-program mapping. ... For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details) along with dependency parses of the target (English) side. |
| Dataset Splits | Yes | We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with an 80%/10%/10% split). A minimal split sketch appears after this table. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer, the OpenNMT library, and the Stanford CoreNLP toolkit, but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | Full training details are provided in the Appendix. The best parameters for all tasks are chosen by performance on the validation sets. We perform early stopping based on the validation loss (a minimal sketch appears after this table). ... The parameter configurations that yielded the best results and were used for the final models are shown in Table 3. |
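For readers who want a concrete picture of the method the Pseudocode row refers to, here is a minimal sketch of the doubly-recurrent decoder cell: each node combines an ancestral state (propagated from its parent) with a fraternal state (propagated from its previous sibling) into a predictive state that drives both the explicit topology decisions and the label prediction. This is an illustrative reconstruction, not the authors' code; the use of GRU cells for the two recurrences and all layer and variable names are assumptions.

```python
import torch
import torch.nn as nn

class DRNNCell(nn.Module):
    """Hedged sketch of a doubly-recurrent decoder cell (not the authors' code)."""

    def __init__(self, input_size, hidden_size, vocab_size):
        super().__init__()
        self.g_a = nn.GRUCell(input_size, hidden_size)  # ancestral (depth-wise) recurrence
        self.g_f = nn.GRUCell(input_size, hidden_size)  # fraternal (width-wise) recurrence
        self.U_a = nn.Linear(hidden_size, hidden_size, bias=False)
        self.U_f = nn.Linear(hidden_size, hidden_size, bias=False)
        self.u_a = nn.Linear(hidden_size, 1)            # scores P(node produces children)
        self.u_f = nn.Linear(hidden_size, 1)            # scores P(node has a next sibling)
        self.W = nn.Linear(hidden_size, vocab_size)     # label logits

    def forward(self, x_parent, x_sibling, h_a_parent, h_f_sibling):
        h_a = self.g_a(x_parent, h_a_parent)            # update ancestral state
        h_f = self.g_f(x_sibling, h_f_sibling)          # update fraternal state
        h_pred = torch.tanh(self.U_a(h_a) + self.U_f(h_f))  # predictive state
        p_child = torch.sigmoid(self.u_a(h_pred))       # explicit topology: grow down?
        p_sibling = torch.sigmoid(self.u_f(h_pred))     # explicit topology: grow right?
        label_logits = self.W(h_pred)                   # the paper also conditions label
                                                        # prediction on the topology decisions
        return h_a, h_f, p_child, p_sibling, label_logits
```

Making the stop/grow decisions explicit model outputs, rather than padding the vocabulary with special tokens, is the paper's central architectural point; the sketch keeps only that skeleton.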
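The 80%/10%/10% split quoted in the Dataset Splits row is straightforward to reproduce in spirit. The snippet below is a generic sketch: `trees` stands in for the 5,000 generated trees, and the seed is an arbitrary illustrative choice, not one reported in the paper.

```python
import random

trees = list(range(5000))            # placeholder for the 5,000 generated trees
random.Random(0).shuffle(trees)      # arbitrary seed, not reported in the paper

n_train = int(0.8 * len(trees))      # 4,000 training trees
n_val = int(0.1 * len(trees))        # 500 validation trees
train = trees[:n_train]
val = trees[n_train:n_train + n_val]
test = trees[n_train + n_val:]       # remaining 500 test trees
```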
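The Experiment Setup row mentions early stopping based on validation loss; a generic sketch of that loop follows. The `step_fn`/`eval_fn` callables and the patience value are hypothetical, since the paper's Appendix is the authoritative source for its training details.

```python
def train_with_early_stopping(step_fn, eval_fn, max_epochs=100, patience=5):
    """Run training epochs until the validation loss stops improving.

    step_fn: runs one training epoch (caller-supplied).
    eval_fn: returns the current validation loss (caller-supplied).
    """
    best_val_loss = float("inf")
    bad_epochs = 0
    for _ in range(max_epochs):
        step_fn()
        val_loss = eval_fn()
        if val_loss < best_val_loss:
            best_val_loss, bad_epochs = val_loss, 0  # new best: reset patience
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break                                # no recent improvement: stop
    return best_val_loss
```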