Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
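The validation step the notice describes — checking automated LLM labels against a manually labeled gold set — amounts to a simple label-agreement computation. A minimal illustrative sketch (the variable names and label values below are hypothetical examples, not the actual pipeline or data from [1]):

```python
def label_accuracy(llm_labels: dict, gold_labels: dict) -> float:
    """Fraction of reproducibility variables where the automated LLM
    label matches the manual (gold) label for the same variable."""
    assert llm_labels.keys() == gold_labels.keys(), "label sets must align"
    matches = sum(llm_labels[k] == gold_labels[k] for k in gold_labels)
    return matches / len(gold_labels)

# Hypothetical labels for three variables of one paper.
llm = {"Open Source Code": "Yes", "Dataset Splits": "No", "Pseudocode": "No"}
gold = {"Open Source Code": "Yes", "Dataset Splits": "Yes", "Pseudocode": "No"}

print(label_accuracy(llm, gold))  # 2 of 3 labels agree
```

In practice a per-variable accuracy (one score per column of the table below, aggregated over many papers) is more informative than a single pooled number, since some variables are harder to classify than others; [1] reports the full metrics.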
Hierarchical Decision Making by Generating and Following Natural Language Instructions
Authors: Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We gather a dataset of 76 thousand pairs of instructions and executions from human play, and train instructor and executor models. Experiments show that models that generate intermediate plans in natural language significantly outperform models that directly imitate human actions. |
| Researcher Affiliation | Collaboration | Hengyuan Hu (Facebook AI Research), Denis Yarats (New York University & Facebook AI Research), Qucheng Gong (Facebook AI Research), Yuandong Tian (Facebook AI Research), Mike Lewis (Facebook AI Research) |
| Pseudocode | No | The paper describes the models and methods in text and through diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is open-sourced at www.github.com/facebookresearch/minirts |
| Open Datasets | Yes | We gather a dataset of 76 thousand pairs of instructions and executions from human play, and train instructor and executor models. The paper states that the dataset is available, with further details in the Appendix, and that it is released together with the code and models. |
| Dataset Splits | No | The paper mentions the total dataset size and a frame sampling strategy for training, but does not provide specific percentages, counts, or references to predefined splits for training, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or types of computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions using 'ParlAI [12]' but does not provide specific version numbers for ParlAI or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | For each unit, we consider a history of recent N instructions (N = 5 in all our experiments). We use the RNN DISCRIMINATIVE instructor with 500 instructions (Table 2 caption). Table 3 reports negative log likelihood and win/lose/draw rate (%) for instructor models with N = 50, 250, and 500 instructions. |