Learning Higher-Order Logic Programs through Abstraction and Invention
Authors: Andrew Cropper, Stephen H. Muggleton
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'Our experiments demonstrate increased accuracy and reduced learning times in all cases.' |
| Researcher Affiliation | Academia | Andrew Cropper and Stephen H. Muggleton, Imperial College London, United Kingdom ({a.cropper13,s.muggleton}@imperial.ac.uk). |
| Pseudocode | Yes | 'Figure 5: Prolog code for the Metagol AI meta-interpreter.' |
| Open Source Code | No | Section 4 states 'Metagol AI extends Metagol, an existing MIL implementation, to support Abstractions and Invention by learning with interpreted BK' and footnotes Metagol with the link https://github.com/metagol/metagol. That link is for the *existing* Metagol system, and it is not explicitly stated that the code for the paper's extensions (Metagol AI) is also available there. |
| Open Datasets | Yes | 'Experimental data are available at http://ilp.doc.ic.ac.uk/ijcai16metagolai'. |
| Dataset Splits | No | The paper states 'We train using m randomly chosen positive examples for each m in the set {1,2,3,4,5}. We test using 40 examples, half positive and half negative'. It does not specify a distinct validation split or detailed splitting methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Metagol AI' and 'Metagol' but does not specify version numbers for either system, nor any ancillary dependencies such as the underlying Prolog implementation or libraries. |
| Experiment Setup | Yes | 'We train using m randomly chosen positive examples for each m in the set {1,2,3,4,5}. We test using 40 examples, half positive and half negative, so the default accuracy is 50%. We average predictive accuracies and learning times over 20 trials. For each learning task, we enforce a 10-minute timeout.' (This protocol is sketched in code below.) |
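The 'Experiment Setup' row quotes a protocol concrete enough to express as code. The following is a minimal Python sketch of that evaluation loop, not the authors' actual harness: `learn` and `predict` are hypothetical stand-ins for calls into a Metagol AI process, and the per-task timeout is assumed to be handled inside `learn`.

```python
import random
import statistics

M_VALUES = [1, 2, 3, 4, 5]  # train on m randomly chosen positive examples
NUM_TRIALS = 20             # accuracies and times are averaged over 20 trials
NUM_TEST = 40               # 20 positive + 20 negative, so default accuracy is 50%
TIMEOUT_SECS = 600          # 10-minute timeout per learning task

def run_protocol(positives, negatives, learn, predict):
    """Run the quoted protocol for one learning task.

    `learn(train_pos, timeout)` and `predict(prog, example)` are
    hypothetical callables wrapping the learner; on timeout or
    failure, `learn` is assumed to return a program that entails
    nothing, so `predict` rejects every example.
    """
    mean_accuracy = {}
    for m in M_VALUES:
        accuracies = []
        for _ in range(NUM_TRIALS):
            train_pos = random.sample(positives, m)
            held_out = [p for p in positives if p not in train_pos]
            test_pos = random.sample(held_out, NUM_TEST // 2)
            test_neg = random.sample(negatives, NUM_TEST // 2)
            prog = learn(train_pos, timeout=TIMEOUT_SECS)
            correct = sum(predict(prog, e) for e in test_pos)
            correct += sum(not predict(prog, e) for e in test_neg)
            accuracies.append(correct / NUM_TEST)
        mean_accuracy[m] = statistics.mean(accuracies)
    return mean_accuracy
```

Under these assumptions, a timed-out run misclassifies the 20 positive test examples and correctly rejects the 20 negative ones, which recovers the 50% default accuracy the row mentions.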