Learning to Combine Per-Example Solutions for Neural Program Synthesis

Authors: Disha Shrivastava, Hugo Larochelle, Daniel Tarlow

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation across programs of different lengths and under two different experimental settings reveal that when given the same time budget, our technique significantly improves the success rate over PCCoder [30] and other ablation baselines. The code, data and trained models for our work can be found at: https://github.com/shrivastavadisha/N-PEPS.
Researcher Affiliation | Collaboration | Disha Shrivastava (Mila, Université de Montréal; Google Research), Hugo Larochelle (Mila, Université de Montréal; Google Research; CIFAR Fellow), Daniel Tarlow (Mila, McGill University; Google Research)
Pseudocode | No | The paper describes algorithms and processes (e.g., PCCoder, CAB) but does not contain a formal 'Pseudocode' or 'Algorithm' block/figure.
Open Source Code | Yes | The code, data and trained models for our work can be found at: https://github.com/shrivastavadisha/N-PEPS.
Open Datasets | Yes | The code, data and trained models for our work can be found at: https://github.com/shrivastavadisha/N-PEPS.
Dataset Splits | Yes | 10% of the training data was used for validation. (A minimal sketch of such a split follows this table.)
Hardware Specification | Yes | To account for variability across machines, we chose to run a test split on a machine chosen randomly from a collection of 7 machines of similar configuration (Google Cloud instances with 120GB RAM each).
Software Dependencies | No | The paper mentions using the PCCoder implementation but does not list specific software dependencies (e.g., Python, PyTorch, CUDA) with version numbers in the main text.
Experiment Setup | No | The paper states that 'Complete details of hyperparameters for all methods can be found in Appendix D.' but does not list specific hyperparameters or system-level training settings in the main text.
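
For concreteness, the Dataset Splits row reports only that 10% of the training data was held out for validation. The sketch below shows one plausible way to reproduce such a 90/10 split; the seeded-shuffle strategy and the function name split_train_val are illustrative assumptions, not the paper's actual procedure (the released code at https://github.com/shrivastavadisha/N-PEPS is authoritative).

    # Illustrative sketch (assumed, not taken from the N-PEPS repository)
    # of holding out 10% of the training data for validation.
    import random

    def split_train_val(examples, val_fraction=0.1, seed=0):
        """Hold out val_fraction of the examples for validation."""
        rng = random.Random(seed)  # fixed seed so the split is reproducible
        indices = list(range(len(examples)))
        rng.shuffle(indices)
        n_val = int(len(examples) * val_fraction)
        val_idx = set(indices[:n_val])
        train = [ex for i, ex in enumerate(examples) if i not in val_idx]
        val = [ex for i, ex in enumerate(examples) if i in val_idx]
        return train, val

    # Usage: train, val = split_train_val(training_programs)  # 90% / 10%

A fixed-seed shuffle is one common way to make such a split reproducible across machines; whether the authors shuffled, and with what seed, is not stated in the main text.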