Neural Sketch Learning for Conditional Program Generation
Authors: Vijayaraghavan Murali, Letao Qi, Swarat Chaudhuri, Chris Jermaine
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Now we present an empirical evaluation of the effectiveness of our method. The experiments we describe utilize data from an online repository of about 1500 Android apps (Androiddrawer, 2017). ... Figure 6 shows the collated results of this evaluation, where each entry computes the average of the corresponding metric over the 10000 test programs. |
| Researcher Affiliation | Academia | Vijayaraghavan Murali, Letao Qi, Swarat Chaudhuri, and Chris Jermaine Department of Computer Science Rice University Houston, TX 77005, USA. {vijay, letao.qi, swarat, cmj4}@rice.edu |
| Pseudocode | No | Figure 3: Grammar for sketches, Figure 8: Grammar for AML, Figure 9: The abstraction function α., Figure 11: Computing the hidden state and output of the decoder |
| Open Source Code | Yes | BAYOU is publicly available at https://github.com/capergroup/bayou. |
| Open Datasets | Yes | The experiments we describe utilize data from an online repository of about 1500 Android apps (Androiddrawer, 2017). ... Androiddrawer. http://www.androiddrawer.com, 2017. |
| Dataset Splits | Yes | From the extracted data, we randomly selected 10,000 programs to be in the testing and validation data each. |
| Hardware Specification | Yes | The training was performed on an AWS p2.xlarge machine with an NVIDIA K80 GPU with 12GB GPU memory. |
| Software Dependencies | No | We implemented our approach in our tool called BAYOU, using TensorFlow (Abadi et al., 2015) to implement the GED neural model, and the Eclipse IDE for the abstraction from Java to the language of sketches and the combinatorial concretization. |
| Experiment Setup | Yes | Our hyper-parameters for training the model are as follows. We used 64, 32 and 64 units in the encoder for API calls, types and keywords, respectively, and 128 units in the decoder. The latent space was 32-dimensional. We used a mini-batch size of 50, a learning rate of 0.0006 for the Adam gradient-descent optimizer (Kingma & Ba, 2014), and ran the training for 50 epochs. |
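
For reference, the training and evaluation settings quoted above can be collected into a single configuration sketch. This is a hypothetical summary for readability only; the variable names and structure are illustrative and are not taken from the BAYOU codebase.

```python
# Hypothetical summary of the setup reported in the paper (ICLR 2018).
# Names are illustrative; they do not correspond to identifiers in the
# public BAYOU repository (https://github.com/capergroup/bayou).
config = {
    # Encoder units per evidence type (API calls, types, keywords)
    "encoder_units": {"api_calls": 64, "types": 32, "keywords": 64},
    "decoder_units": 128,
    "latent_dim": 32,            # dimensionality of the latent space
    "batch_size": 50,
    "learning_rate": 0.0006,     # Adam optimizer (Kingma & Ba, 2014)
    "epochs": 50,
    # Data splits reported in the paper
    "test_programs": 10_000,
    "validation_programs": 10_000,
}
```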