Neural Program Synthesis from Diverse Demonstration Videos

Authors: Shao-Hua Sun, Hyeonwoo Noh, Sriram Somasundaram, Joseph Lim

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We extensively evaluate our model in two environments: a fully observable, third-person environment (Karel) and a partially observable, egocentric game (ViZDoom). Our experiments in both environments with a variety of settings demonstrate the strength of explicitly modeling programs for reasoning about underlying conditions and the necessity of the proposed components (the summarizer module and the auxiliary tasks)." |
| Researcher Affiliation | Academia | ¹Department of Computer Science, University of Southern California, California, USA; ²Department of Computer Science and Engineering, POSTECH, Pohang, Korea. Correspondence to: Shao-Hua Sun <shaohuas@usc.edu>. |
| Pseudocode | No | The paper defines a domain-specific language (Figure 2) but does not include pseudocode or an algorithm block for its proposed method. |
| Open Source Code | Yes | The code is available at https://shaohua0116.github.io/demo2program. |
| Open Datasets | No | The paper generates its own datasets for the Karel and ViZDoom environments, stating "We randomly generate 35,000 unique programs..." and "We generate 80,000 training programs...", but does not provide concrete access (link, DOI, repository, or citation) to these generated datasets. |
| Dataset Splits | Yes | "We randomly generate 35,000 unique programs and split them into a training set with 25,000 programs, a validation set with 5,000 programs, and a testing set with 5,000 programs." (A split of this shape is sketched below the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models, memory, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions neural network architectures such as LSTMs and CNNs but does not name the software dependencies (e.g., PyTorch, TensorFlow, or specific library versions) needed for replication. |
| Experiment Setup | No | The paper states "The training details are described in the supplementary material," so specific setup details such as hyperparameters are absent from the main text. Only the loss weighting α = β = 1 is given explicitly (see the loss sketch below the table), which is insufficient for a full setup. |
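For concreteness, here is a minimal sketch of the 25,000/5,000/5,000 split quoted in the Dataset Splits row. The paper only states the split sizes, so the function name, fixed seed, and shuffling procedure below are illustrative assumptions, not the authors' procedure.

```python
import random

def split_programs(programs, seed=0):
    """Shuffle 35,000 unique programs and split them 25k/5k/5k.

    `split_programs` and the fixed seed are hypothetical; the paper
    reports only the split sizes, not how the split was performed.
    """
    assert len(programs) == 35_000, "paper reports 35,000 unique programs"
    rng = random.Random(seed)
    shuffled = list(programs)
    rng.shuffle(shuffled)
    return {
        "train": shuffled[:25_000],        # 25,000 programs
        "val":   shuffled[25_000:30_000],  # 5,000 programs
        "test":  shuffled[30_000:],        # 5,000 programs
    }
```

Similarly, the only training detail reported in the main text is the loss weighting α = β = 1. Below is a minimal sketch of such a weighted objective, assuming the total loss sums the program-decoding loss with the two auxiliary-task losses; the exact loss terms are defined in the paper's supplementary material, so the argument names here are illustrative.

```python
def total_loss(program_loss, aux_loss_a, aux_loss_b, alpha=1.0, beta=1.0):
    """Weighted sum of the main and auxiliary losses.

    The main text reports only alpha = beta = 1; how the auxiliary
    losses are computed is deferred to the supplementary material.
    """
    return program_loss + alpha * aux_loss_a + beta * aux_loss_b
```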