Neural Programming by Example

Authors: Chengxun Shu, Hongyu Zhang

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We have experimentally evaluated NPBE on a large number of input-output strings for 45 string manipulation tasks and the results are encouraging." From the Experimental Design section: "The NPBE model is required to induce a program consisting of a sequence of functions based on only one input-output string pair. In this section, we describe our evaluation of the NPBE model." |
| Researcher Affiliation | Academia | Chengxun Shu, Beihang University, Beijing 100191, China (shuchengxun@163.com); Hongyu Zhang, The University of Newcastle, Callaghan, NSW 2308, Australia (hongyu.zhang@newcastle.edu.au) |
| Pseudocode | No | The paper describes the model architecture and its components through text and diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | "To obtain training data, we first generate programs at various levels of complexity according to 45 predefined tasks... For all the tasks, we generate a total number of around 69,000 programs for training. For each program P_i we generate a random input string X_i, which should be a valid input to the program P_i. Next, we apply the program P_i on X_i by actually running the Python program implementing P_i and get the output string Y_i. We constrain X_i and Y_i to be at most 62 characters long." The paper describes a process for generating synthetic data but does not provide access to that data (a sketch of the described generation loop appears below the table). |
| Dataset Splits | No | The paper mentions generating 69,000 programs for training and separate test sets ("1000 times" for RQ1, "19,000 programs" for RQ2), but does not specify a distinct validation split or its size. |
| Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using RMSProp as the optimizer but does not specify any software dependencies with version numbers (e.g., Python or deep learning framework versions). |
| Experiment Setup | Yes | "To train NPBE, we choose RMSProp (Tieleman and Hinton 2012) as the optimizer and set the mini-batch size to 200. We set the dimensionality of the transformation embedding t and the history embedding h to 256, the function embedding f to 16, and the arguments embedding a to 64. In our implementation, T = 5, M = 5." (A configuration sketch with these values appears below the table.) |
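
The Open Datasets row quotes a fully procedural description: sample a program P_i, sample a random input X_i, execute the Python implementation of P_i to obtain Y_i, and keep both strings within 62 characters. The sketch below illustrates that loop under stated assumptions; it is not the authors' code. The two example tasks, the input alphabet, and the sampling choices are invented here, since the paper's 45 task definitions and its generator were not released.

```python
import random
import string

MAX_LEN = 62  # the paper constrains X_i and Y_i to at most 62 characters

# Hypothetical stand-ins for two of the 45 predefined tasks; the actual
# task set is described only at a high level in the paper.
def swap_case(s):
    return s.swapcase()

def get_first_token(s):
    return s.split("-")[0]

TASKS = [swap_case, get_first_token]

def random_input():
    """Sample a random string to serve as a program input X_i."""
    alphabet = string.ascii_letters + string.digits + "-."
    length = random.randint(1, MAX_LEN)
    return "".join(random.choice(alphabet) for _ in range(length))

def generate_examples(n):
    """Yield (task name, X_i, Y_i) triples by actually running each
    sampled program on a random input, as the paper describes."""
    for _ in range(n):
        program = random.choice(TASKS)
        x = random_input()
        y = program(x)          # run the Python implementation of P_i
        if len(y) <= MAX_LEN:   # enforce the output length bound
            yield program.__name__, x, y
```

Calling `list(generate_examples(69000))` would mirror the scale reported in the paper, although the real generator also varies program complexity across the 45 tasks.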
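The Experiment Setup row is the one fully specified item, so its values can be recorded directly in code. The sketch below is a minimal illustration under stated assumptions: the `NPBEModel` class and its layers are hypothetical placeholders (the architecture is given only in prose and diagrams), and the learning rate is an assumption because the paper does not report one. Only the optimizer choice, batch size, embedding dimensionalities, and the T and M limits come from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper.
BATCH_SIZE = 200
DIM_TRANSFORM = 256   # transformation embedding t
DIM_HISTORY = 256     # history embedding h
DIM_FUNCTION = 16     # function embedding f
DIM_ARGUMENTS = 64    # arguments embedding a
MAX_STEPS = 5         # T: maximum number of functions in a program
MAX_ARGS = 5          # M: maximum number of arguments per function

class NPBEModel(nn.Module):
    """Hypothetical placeholder; the paper does not release its model code,
    so only the embedding sizes wired in here are grounded in the text."""
    def __init__(self):
        super().__init__()
        self.history_proj = nn.Linear(DIM_TRANSFORM, DIM_HISTORY)
        self.function_head = nn.Linear(DIM_HISTORY, DIM_FUNCTION)
        self.argument_head = nn.Linear(DIM_HISTORY, MAX_ARGS * DIM_ARGUMENTS)

model = NPBEModel()
# RMSProp (Tieleman and Hinton 2012) is the optimizer named in the paper;
# the learning rate below is an assumed value, not a reported one.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
```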