Neural Program Synthesis with Query
Authors: Di Huang, Rui Zhang, Xing Hu, Xishan Zhang, Pengwei Jin, Nan Li, Zidong Du, Qi Guo, Yunji Chen
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness and generalization of the proposed query-based framework on the Karel task and the list processing task. Experimental results show that the query-based framework can generate informative input-output examples which match and even outperform well-designed input-output examples. |
| Researcher Affiliation | Collaboration | Di Huang1,2,4, Rui Zhang1,4, Xing Hu1,4, Xishan Zhang1,4, Pengwei Jin1,2,4, Nan Li1,3,4, Zidong Du1,4, Qi Guo1 & Yunji Chen1,2. 1SKL of Computer Architecture, Institute of Computing Technology, CAS; 2University of Chinese Academy of Sciences; 3University of Science and Technology of China; 4Cambricon Technologies |
| Pseudocode | Yes | Algorithm 1 Training process |
| Open Source Code | No | The paper does not contain any statement about making the source code publicly available or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate our method on the Karel task and the list processing task which have large input spaces. For example, in Karel (Devlin et al., 2017b; Bunel et al., 2018)... Following PCCoder (Zohar & Wolf, 2018), we generate two datasets with program length 4 as dataset D1 and program length up to 12 as dataset D2. |
| Dataset Splits | No | The paper mentions training on datasets and refers to 'Karel’s validation set' but does not provide specific percentages or counts for training, validation, and test splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or specific cloud computing instances used for running experiments. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' but does not specify any software dependencies (e.g., libraries, frameworks) with version numbers required to replicate the experiments. |
| Experiment Setup | Yes | For training, the learning rate of the query network is set to 10^-4 with the Adam optimizer (Kingma & Ba, 2014), while the learning rate of the synthesis network stays the same as in the original methods. The batch size is 128, and the random seed is set to 100. ... The learning rate of the query network is set to 10^-4 with a 0.1 decay every 40 epochs and the Adam optimizer (Kingma & Ba, 2014). The learning rate of the synthesis network is 10^-3, which stays the same as in the original methods, with a 0.1 decay every 4 epochs. The batch size of the query process is 64; the batch size of synthesis is 32 for D1 and 100 for D2. The random seed is set to 100. (A hedged configuration sketch follows the table.) |
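
To make the reported hyperparameters concrete, here is a minimal PyTorch sketch of the optimizer and schedule configuration described in the Experiment Setup row. Only the learning rates, decay schedules, batch sizes, and random seed come from the paper; the module definitions, the epoch count, and the use of Adam for the synthesis network are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(100)  # paper: random seed fixed to 100

# Hypothetical stand-ins for the paper's query and synthesis networks.
query_net = nn.Linear(512, 512)   # placeholder module, not the authors' architecture
synth_net = nn.Linear(512, 512)   # placeholder module, not the authors' architecture

# Query network: Adam, lr = 1e-4, decayed by 0.1 every 40 epochs (from the paper).
query_opt = torch.optim.Adam(query_net.parameters(), lr=1e-4)
query_sched = torch.optim.lr_scheduler.StepLR(query_opt, step_size=40, gamma=0.1)

# Synthesis network: lr = 1e-3 with a 0.1 decay every 4 epochs (from the paper).
# The paper keeps the original methods' optimizer; Adam here is an assumption.
synth_opt = torch.optim.Adam(synth_net.parameters(), lr=1e-3)
synth_sched = torch.optim.lr_scheduler.StepLR(synth_opt, step_size=4, gamma=0.1)

# Batch sizes reported in the paper.
QUERY_BATCH_SIZE = 64
SYNTH_BATCH_SIZE = {"D1": 32, "D2": 100}

NUM_EPOCHS = 80  # illustrative only; the total epoch count is not reported

for epoch in range(NUM_EPOCHS):
    # ... one epoch of query generation and program synthesis training ...
    query_sched.step()
    synth_sched.step()
```

The first quoted setup in the row also mentions a batch size of 128 with the same 10^-4 query learning rate; the sketch above follows the second, more detailed configuration (the one referencing datasets D1 and D2).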