Logic-Based Inductive Synthesis of Efficient Programs
Authors: Andrew Cropper
IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results agree with the theoretical optimal predictions and show, for instance, that when learning to sort lists, Metagol_O learns an efficient quick sort strategy, rather than an inefficient bubble sort strategy. (A standard Prolog sketch of the two contrasted sorting strategies appears below the table.) |
| Researcher Affiliation | Academia | Andrew Cropper, Imperial College London, United Kingdom |
| Pseudocode | No | The paper includes Prolog program examples in Figure 2, but it does not present pseudocode or an algorithm block for the learning method itself. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | No | The paper mentions learning from 'initial/final state examples', 'a set of positive examples', and tasks such as 'learning to sort lists', but it does not name a publicly available dataset or provide a link, citation, or repository for accessing any data used. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or references to standard splits). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions 'Prolog programs' and 'Metagol_O', but it does not provide specific version numbers for any software, libraries, or dependencies used in the experiments. |
| Experiment Setup | No | The paper describes the Metagol_O implementation and its iterative-descent search, but it does not provide specific experimental setup details such as hyperparameter values, training configurations, or system-level settings. |
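For readers unfamiliar with the two sorting strategies contrasted in the Research Type row, the following is a minimal, hypothetical Prolog sketch of quick sort and bubble sort. It is written from scratch for illustration only; it is not the paper's Figure 2 and not a program actually learned by Metagol_O.

```prolog
% Illustrative sketch only (not from the paper): textbook Prolog versions of
% the efficient and inefficient sorting strategies the experiments contrast.

% Quick sort: O(n log n) on average. Uses the standard append/3.
quick_sort([], []).
quick_sort([Pivot|Rest], Sorted) :-
    partition_list(Pivot, Rest, Smaller, Larger),
    quick_sort(Smaller, SortedSmaller),
    quick_sort(Larger, SortedLarger),
    append(SortedSmaller, [Pivot|SortedLarger], Sorted).

% Split a list into elements =< Pivot and elements > Pivot.
partition_list(_, [], [], []).
partition_list(Pivot, [X|Xs], [X|Smaller], Larger) :-
    X =< Pivot, partition_list(Pivot, Xs, Smaller, Larger).
partition_list(Pivot, [X|Xs], Smaller, [X|Larger]) :-
    X > Pivot, partition_list(Pivot, Xs, Smaller, Larger).

% Bubble sort: O(n^2) in the worst case. Repeatedly swaps one out-of-order
% adjacent pair until no swap is possible.
bubble_sort(List, Sorted) :-
    swap_once(List, Swapped), !,
    bubble_sort(Swapped, Sorted).
bubble_sort(Sorted, Sorted).

swap_once([X,Y|Rest], [Y,X|Rest]) :- X > Y.
swap_once([X|Rest], [X|Swapped]) :- swap_once(Rest, Swapped).
```

For example, the query `?- quick_sort([3,1,2], S).` yields `S = [1, 2, 3]`. Both predicates compute the same sorting relation, but quick sort avoids the repeated adjacent-swap passes that make bubble sort quadratic, which is the efficiency distinction the paper's experiments target.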