Representing Partial Programs with Blended Abstract Semantics
Authors: Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama
ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our new approach with program synthesis experiments in three domains: tower construction, list processing, and string editing. We show that our approach outperforms neural synthesis baselines, solving at least 5% more programs in each domain. |
| Researcher Affiliation | Academia | Maxwell Nye Yewen Pu Matthew Bowers Jacob Andreas Joshua B. Tenenbaum Armando Solar-Lezama Massachusetts Institute of Technology |
| Pseudocode | No | The paper describes conceptual steps and formalisms, but does not present any pseudocode or algorithm blocks with formal labeling such as 'Algorithm 1'. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | Data for this domain was generated by modifying the DeepCoder dataset (Balog et al., 2016). |
| Dataset Splits | No | The paper mentions training data sizes (e.g., '480,000 programs', '500,000 programs', '2 million programs') and discusses testing, but does not specify explicit train/validation/test dataset splits or their proportions. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers (AMSGrad variant of Adam) and network architectures (RNNs, GRUs, CNNs, LSTMs) but does not provide specific version numbers for software libraries or frameworks (e.g., PyTorch, TensorFlow) used in the implementation. |
| Experiment Setup | Yes | All models are trained with the AMSGrad (Reddi et al., 2018) variant of the Adam optimizer with a learning rate of 0.001. All RNNs are 1-layer bidirectional GRUs, where the final hidden state is used as the output representation. All neural modules consist of a single linear layer (input dimension 512 × nargs and output dimension 512) followed by ReLU activation. |
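
The "Experiment Setup" row above quotes the paper's training configuration. Below is a minimal sketch of what that configuration could look like, assuming a PyTorch implementation (the paper does not name its framework); the input dimension, the module arity `nargs`, and all class and variable names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

# Hedged sketch of the quoted setup: 1-layer bidirectional GRU encoder whose final
# hidden state is the output representation, neural modules that are a single linear
# layer (input 512 * nargs -> output 512) followed by ReLU, and the AMSGrad variant
# of Adam with learning rate 0.001. Everything else here is an assumption.

HIDDEN = 512
NARGS = 2  # assumed module arity for illustration; the paper ties this to each DSL operator


class SpecEncoder(nn.Module):
    """1-layer bidirectional GRU; the concatenated final hidden states form the output."""

    def __init__(self, input_dim: int, hidden: int = HIDDEN):
        super().__init__()
        # hidden // 2 per direction so the concatenated final state has size `hidden`
        self.gru = nn.GRU(input_dim, hidden // 2, num_layers=1,
                          bidirectional=True, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(x)                         # h_n: (2, batch, hidden // 2)
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # (batch, hidden)


class NeuralModule(nn.Module):
    """Single linear layer (input 512 * nargs, output 512) followed by ReLU."""

    def __init__(self, nargs: int = NARGS, hidden: int = HIDDEN):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(hidden * nargs, hidden), nn.ReLU())

    def forward(self, *args: torch.Tensor) -> torch.Tensor:
        return self.layer(torch.cat(args, dim=-1))   # (batch, hidden)


# AMSGrad variant of Adam with learning rate 0.001, as quoted in the table.
encoder = SpecEncoder(input_dim=64)  # input_dim is an assumption
module = NeuralModule()
params = list(encoder.parameters()) + list(module.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, amsgrad=True)
```

Only the 512-dimensional hidden size, the GRU/linear/ReLU structure, and the optimizer settings come from the quoted text; batch handling, losses, and data pipelines are left out because the paper's excerpt does not specify them.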