Automatic Program Synthesis of Long Programs with a Learned Garbage Collector

Authors: Amit Zohar, Lior Wolf

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We ran a series of experiments testing the required runtime to reach a certain level of success. The results are reported in Tab. 1.
Researcher Affiliation | Collaboration | 1 The School of Computer Science, Tel Aviv University; 2 Facebook AI Research
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code, including an implementation of various literature baselines, is publicly available at https://github.com/amitz25/PCCoder
Open Datasets | No | We generate our train and test data similarly to [1]. First, we generate random programs from the DSL. Then, we prune away programs that contain redundant variables, and programs for which an equivalent (possibly shorter) program exists in the dataset. No concrete access information (link, DOI, repository, or citation with authors/year) is provided for this generated dataset.
Dataset Splits | No | The paper mentions 'train' and 'test' data, but does not explicitly specify a 'validation' split or provide any details about it.
Hardware Specification | Yes | All our experiments were performed using F64s Azure cloud instances.
Software Dependencies | No | The paper mentions a 'Cython program' and the Adam optimizer [11], but does not provide version numbers for any software dependencies or libraries (e.g., Python, PyTorch).
Experiment Setup | Yes | For optimization, we use Adam [11] with a learning rate of 0.001 and batch size of 100. We employ α = 100, β = 10, and c = 10.
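The reported optimizer configuration (Adam with learning rate 0.001) can be illustrated with a minimal, self-contained sketch. The update rule below is the standard Adam algorithm; the toy quadratic objective is purely illustrative and is not the paper's model, and all variable names here are hypothetical.

```python
# Minimal pure-Python sketch of one Adam update step, using the paper's
# reported learning rate of 0.001. Standard defaults are assumed for the
# moment coefficients (b1=0.9, b2=0.999), which the paper does not state.
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for m
    v_hat = v / (1 - b2 ** t)             # bias correction for v
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 from x = 1.0 (illustrative only).
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 1001):
    x, m, v = adam_step(x, 2 * x, m, v, t)  # gradient of x^2 is 2x
```

With constant-sign gradients the bias-corrected update magnitude stays near the learning rate, so after 1000 steps x has moved most of the way from 1.0 toward the minimum at 0.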