Parametrized Hierarchical Procedures for Neural Programming

Authors: Roy Fox, Richard Shin, Sanjay Krishnan, Ken Goldberg, Dawn Song, Ion Stoica

ICLR 2018

Reproducibility assessment (variable: result, with the LLM's supporting response):

Research Type: Experimental
    "We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations."

Researcher Affiliation: Academia
    Roy Fox, Richard Shin, Sanjay Krishnan, Ken Goldberg, Dawn Song, and Ion Stoica, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. {royf,ricshin,sanjaykrishnan,goldberg,dawnsong,istoica}@berkeley.edu

Pseudocode: No
    The paper describes its algorithms in text and mathematical formulations but provides no structured pseudocode or algorithm blocks.

Open Source Code: No
    The paper contains no explicit statement about releasing source code and no link to a code repository for the described methodology.

Open Datasets: Yes
    "We evaluate our proposed method on the two settings studied by Li et al. (2017): NanoCraft, which involves an agent interacting in a grid world, and long-hand addition, which was also considered by Reed & de Freitas (2016) and Cai et al. (2017). Following Li et al. (2017), we trained our model on execution traces for inputs of each length 1 to 10. We used 16 traces for each input length, for a total of 160 traces. The dataset was generated randomly, but constrained to contain at least 1 example of each column of digits."

Dataset Splits: No
    The paper mentions "test performance" and "test accuracy" but gives no percentages or counts for training, validation, and test splits, and refers to no predefined validation splits.

Hardware Specification: No
    The paper provides no specific hardware details (e.g., exact GPU/CPU models or processor types) used for running its experiments.

Software Dependencies: No
    The paper does not list software dependencies with version numbers.

Experiment Setup: Yes
    "We trained each level for 2000 iterations, iteratively from the lowest level to the highest. The results are averaged over 5 trials with independent datasets."
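The dataset protocol quoted under Open Datasets (16 randomly generated addition traces per input length 1 through 10, for 160 total, with every column of digits represented at least once) can be sketched as below. The paper's actual generator is not public, so the sampling details here, including the rejection-sampling loop used to enforce the column-coverage constraint, are assumptions.

```python
import random


def generate_addition_inputs(traces_per_length=16, max_len=10, seed=0):
    """Sample operand pairs for long-hand addition: 16 pairs per input
    length 1..10 (160 total), re-sampling the whole dataset until every
    digit column, i.e. every ordered digit pair (0,0)..(9,9), appears
    at least once somewhere in the dataset."""
    rng = random.Random(seed)
    while True:
        dataset = []
        for length in range(1, max_len + 1):
            lo = 10 ** (length - 1) if length > 1 else 0
            hi = 10 ** length - 1
            for _ in range(traces_per_length):
                dataset.append((rng.randint(lo, hi), rng.randint(lo, hi)))
        # Collect every column of digits encountered across the dataset.
        columns = set()
        for a, b in dataset:
            da, db = str(a), str(b)
            width = max(len(da), len(db))
            columns.update(zip(da.zfill(width), db.zfill(width)))
        # Constraint from the paper: at least 1 example of each column.
        if len(columns) == 100:
            return dataset


data = generate_addition_inputs()
print(len(data))  # 160 operand pairs, 16 per input length
```

Rejection sampling over the whole dataset is one simple way to satisfy the coverage constraint; the 160 examples contribute several hundred digit columns in total, so a valid draw is found after few attempts.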