Bayesian Program Learning by Decompiling Amortized Knowledge
Authors: Alessandro B. Palmarini, Christopher G. Lucas, Siddharth N
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the two dream decompiler variants outlined in Section 4 to DREAMCODER across 6 program synthesis domains. Each domain was used as part of the evaluation of Ellis et al. (2021), where DREAMCODER was found to solve at least as many tasks as the best alternative tested for that domain, and did so (mostly) in the least amount of time. |
| Researcher Affiliation | Academia | 1 Santa Fe Institute, Santa Fe, NM, USA 2 School of Informatics, University of Edinburgh, Edinburgh, UK 3 The Alan Turing Institute, UK. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We use the same implementation, architecture and settings for the main DREAMCODER model presented in (Ellis et al., 2021), unless explicitly stated.' and provides a footnote link: 'https://github.com/ellisk42/ec'. This links to the base DREAMCODER system, not the specific code for the novel 'dream decompiling' variants developed in this paper. |
| Open Datasets | Yes | Text editing: ...from the SyGuS competition (Alur et al., 2017). ...Regexes: ...originally sourced from Hewitt et al. (2020). |
| Dataset Splits | No | The paper consistently mentions 'trained' and 'tested' tasks, and 'held-out tasks' for evaluation, but it does not explicitly define or use a separate 'validation' dataset split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | Evaluation time varies across domains, with most taking roughly half a day using 1 NVIDIA A40 and 20 CPUs (40 for LOGO graphics). |
| Software Dependencies | No | The paper mentions using the DREAMCODER system, but does not provide specific version numbers for software dependencies like Python, PyTorch, or other libraries used. |
| Experiment Setup | Yes | Table 2 shows the hyperparameter values (relevant to all systems) that differ in at least one domain compared to those used in Ellis et al. (2021). ... Additionally, DREAMCODER employs a hyperparameter, denoted as λ in Ellis et al. (2021), which controls the prior distribution over libraries. In their work, λ was consistently set to 1.5 for all domains... In our experiments, we maintain uniformity by using the same DREAMCODER model, setting λ to 1.5 for all domains. |
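
For readers unfamiliar with the role of λ in the Experiment Setup row above, the following minimal Python sketch illustrates the λ-weighted library prior as described in Ellis et al. (2021), under which a library's prior probability decays exponentially with its total size. This is an illustration only, not the actual DREAMCODER code: the nested-tuple representation of library routines and all names here are assumptions made for the sketch.

```python
# Illustrative sketch (NOT DreamCoder's implementation) of the library
# prior log P(L) = -lambda * total_size(L), as described in Ellis et al. (2021).
# Library routines are modeled here as nested tuples standing in for syntax trees.

def size(expr):
    """Count the nodes in a routine's syntax tree."""
    if isinstance(expr, tuple):
        return 1 + sum(size(sub) for sub in expr)
    return 1  # leaf: a primitive or a variable

def library_log_prior(library, lam=1.5):
    """Unnormalized log-prior of a library: larger libraries are
    exponentially less probable, and lam (set to 1.5 for all domains
    in this paper, following Ellis et al., 2021) controls how strongly
    library growth is penalized."""
    return -lam * sum(size(routine) for routine in library)

# Example: a toy two-routine library; raising lam lowers its log-prior.
toy_library = [("fold", "cons", "nil"), ("map", ("lambda", "f"), "xs")]
print(library_log_prior(toy_library, lam=1.5))  # -> -15.0
```

Because the same λ = 1.5 is used for every domain, any difference in the libraries learned by the dream decompiler variants versus DREAMCODER cannot be attributed to differently tuned size penalties.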