Learning a Meta-Solver for Syntax-Guided Program Synthesis
Authors: Xujie Si, Yuan Yang, Hanjun Dai, Mayur Naik, Le Song
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the framework on 214 cryptographic circuit synthesis tasks. It solves 141 of them in the out-of-box solver setting, significantly outperforming a similar search-based approach but without learning, which solves only 31. The result is comparable to two state-of-the-art classical synthesis engines, which solve 129 and 153 respectively. |
| Researcher Affiliation | Academia | Xujie Si¹, Yuan Yang², Hanjun Dai², Mayur Naik¹ & Le Song² (¹University of Pennsylvania, ²Georgia Institute of Technology) |
| Pseudocode | No | The paper describes the model architecture and training process in text and with diagrams, but it does not include formal pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our code and data are available on GitHub: https://github.com/PL-ML/metal |
| Open Datasets | Yes | We evaluate our framework on cryptographic circuit synthesis tasks (Eldib et al., 2016), which constitute a challenging benchmark suite from the general track of the SyGuS Competition (2017). |
| Dataset Splits | No | The paper mentions splitting data into training and testing sets ('150 tasks for training and the rest 64 tasks for testing'), but does not explicitly mention a separate validation set or its split details. |
| Hardware Specification | Yes | The SyGuS 2017 competition gives each solver 4-core 2.4GHz Intel processors with 128 GB memory and a wallclock time limit of 1 hour; our evaluation uses an AMD Opteron 6220 processor and assigns each solver a single core with 32 GB memory and a wallclock time limit of 6 hours. |
| Software Dependencies | No | The paper mentions software like CVC4 and EUSolver as comparison tools, and states the framework is 'implemented in Python' and uses the 'Adam optimizer', but it does not specify version numbers for Python, its libraries, or the optimizer. |
| Experiment Setup | Yes | The meta-solver is then trained on the training set for 35000 epochs using methods introduced in Sec 4.1. For each epoch, a batch of 10 tasks are sampled. The gradients of each task are averaged and applied to the model parameters using Adam optimizer. |
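The Experiment Setup row describes a per-epoch loop: sample a batch of 10 tasks, average the per-task gradients, and apply an Adam update. A minimal sketch of that loop is below; the toy `task_gradient` objective and the Adam hyperparameters are assumptions (the paper does not list them), standing in for the meta-solver's actual policy-gradient objective.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(4)                  # model parameters (toy stand-in)

# Adam state and hyperparameters (standard defaults; not stated in the paper)
m, v = np.zeros_like(theta), np.zeros_like(theta)
beta1, beta2, lr, eps = 0.9, 0.999, 1e-3, 1e-8

def task_gradient(theta, task):
    """Hypothetical per-task gradient: pulls theta toward the task vector."""
    return theta - task

tasks = rng.normal(size=(150, 4))    # 150 training tasks (toy stand-ins)

for epoch in range(1, 1001):         # the paper trains for 35000 epochs
    # Sample a batch of 10 tasks and average their gradients
    batch = tasks[rng.choice(len(tasks), size=10, replace=False)]
    grad = np.mean([task_gradient(theta, t) for t in batch], axis=0)
    # Adam update with bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** epoch)
    v_hat = v / (1 - beta2 ** epoch)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
```

The averaging step is what distinguishes this from per-task sequential updates: one parameter update is applied per epoch, driven by the mean gradient over the sampled batch.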