Learning to Learn Programs from Examples: Going Beyond Program Structure
Authors: Kevin Ellis, Sumit Gulwani
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These techniques are evaluated in two programming-by-example domains, improving the accuracy of program learners. |
| Researcher Affiliation | Collaboration | Kevin Ellis (MIT, ellisk@mit.edu); Sumit Gulwani (Microsoft, sumitg@microsoft.com) |
| Pseudocode | No | Figure 1 shows a DSL grammar, which is a formal language definition, not a pseudocode block for an algorithm. The paper describes processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to the PROSE library (https://microsoft.github.io/prose/), which is a framework they integrated with, but it does not explicitly state that the code for *their specific methodology* is open-source or provide a link to it. |
| Open Datasets | No | The paper states, 'We used a dataset of 447 string transformation and 488 text extraction problems. The specific problems in the experiments are the standard benchmarks maintained by the PROSE team at Microsoft.', but it does not provide concrete access information (e.g., URL, DOI, specific citation with authors and year) for these datasets. |
| Dataset Splits | Yes | Test accuracies determined by 10-fold cross validation. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used in the implementation. |
| Experiment Setup | No | The paper mentions aspects of the experimental setup, such as using RMSProp for optimization, but it does not provide specific hyperparameter values (e.g., learning rate, batch size) or detailed system-level training configurations. |
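The only evaluation protocol the paper pins down is the 10-fold cross-validation noted in the Dataset Splits row. As a minimal sketch of that protocol, the following stand-in partitions a problem set into ten folds and averages held-out accuracy; the `solve` callable is a hypothetical placeholder, not the authors' program learner.

```python
import random

def ten_fold_accuracy(problems, solve, k=10, seed=0):
    """Average held-out accuracy over k cross-validation folds.

    `solve(train, problem)` is a hypothetical stand-in: it should return
    True if a learner trained on `train` solves `problem`.
    """
    idx = list(range(len(problems)))
    random.Random(seed).shuffle(idx)          # shuffle once, then slice into folds
    folds = [idx[i::k] for i in range(k)]     # k disjoint index folds
    accs = []
    for held_out in folds:
        held = set(held_out)
        train = [problems[i] for i in idx if i not in held]
        correct = sum(bool(solve(train, problems[i])) for i in held_out)
        accs.append(correct / len(held_out))
    return sum(accs) / k

# Toy usage: a "solver" that succeeds exactly on even-numbered problems,
# so the cross-validated accuracy comes out to the base rate of 0.5.
problems = list(range(100))
acc = ten_fold_accuracy(problems, lambda train, p: p % 2 == 0)
```

This is only an illustration of the evaluation split; the paper does not describe fold construction, shuffling, or seeding, so those details here are assumptions.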