Learning Logic Programs by Discovering Where Not to Search

Authors: Andrew Cropper, Céline Hocquette

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on multiple domains (including program synthesis and game playing) show that our approach can (i) substantially reduce learning times by up to 97%, and (ii) scale to domains with millions of facts.
Researcher Affiliation | Academia | Andrew Cropper and Céline Hocquette, University of Oxford, andrew.cropper@cs.ox.ac.uk, celine.hocquette@cs.ox.ac.uk
Pseudocode | Yes | The appendix includes all the ASP programs we consider.
Open Source Code | Yes | The experimental code and data are available at https://github.com/logic-and-learning-lab/aaai23-disco.
Open Datasets | Yes | We use six domains... Michalski trains (Larson and Michalski 1977)... IMDB. This real-world dataset (Mihalkova, Huynh, and Mooney 2007)... Chess... Zendo... IGGP (Cropper, Evans, and Law 2020)... Program synthesis. We use a standard synthesis dataset (Cropper and Morel 2021).
Dataset Splits | No | The paper mentions 'training examples' and 'testing' hypotheses, but does not state how each dataset was split into training, validation, and test sets, nor give percentages or sample counts for the splits.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as CPU/GPU models or memory.
Software Dependencies | Yes | We use Popper 2.0.0 (Cropper 2022).
Experiment Setup | Yes | We enforce a timeout of 20 minutes per task. We measure the mean and standard error over 10 trials. We round times over one second to the nearest second. The appendix includes all the experimental details and example solutions.
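
The Experiment Setup row reports means and standard errors over 10 trials, a 20-minute timeout per task, and rounding of times over one second to the nearest second. As a minimal sketch of that reporting convention, not the authors' evaluation code, the following Python snippet shows how such summary statistics could be computed; the trial times, function names, and timeout handling are illustrative assumptions.

```python
import math

TIMEOUT_SECONDS = 20 * 60  # 20-minute timeout per task, as stated in the table above


def mean_and_standard_error(times):
    """Return the mean and standard error of per-trial learning times (seconds)."""
    capped = [min(t, TIMEOUT_SECONDS) for t in times]  # assume timed-out trials count as 20 minutes
    n = len(capped)
    mean = sum(capped) / n
    variance = sum((t - mean) ** 2 for t in capped) / (n - 1)  # sample variance
    return mean, math.sqrt(variance / n)  # standard error of the mean


def report(seconds):
    """Round times over one second to the nearest second, mirroring the stated convention."""
    return f"{round(seconds)}s" if seconds > 1 else f"{seconds:.2f}s"


# Hypothetical learning times (seconds) for one task over 10 trials.
trial_times = [12.4, 11.8, 13.1, 12.9, 12.2, 11.5, 12.7, 13.4, 12.0, 12.6]
mean, sem = mean_and_standard_error(trial_times)
print(f"mean: {report(mean)}, standard error: {report(sem)}")
```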