Learning Logic Programs Though Divide, Constrain, and Conquer

Authors: Andrew Cropper (pp. 6446-6453)

AAAI 2022

Reproducibility assessment. Each entry below gives the variable, the result, and the supporting LLM response:
Research Type: Experimental
  "Our experiments on three domains (classification, inductive general game playing, and program synthesis) show that our approach can increase predictive accuracies and reduce learning times." From Section 5 (Experiments): "We claim that DCC can reduce search complexity and thus improve learning performance. To evaluate this claim, our experiments aim to answer the question: Q1 Can DCC improve predictive accuracies and reduce learning times?"
Researcher Affiliation: Academia
  "Andrew Cropper, University of Oxford, andrew.cropper@cs.ox.ac.uk"
Pseudocode: Yes
  "Algorithm 1 shows the POPPER algorithm, which solves the LFF problem (Definition 1). Algorithm 2 shows the DCC algorithm."
Open Source Code: Yes
  "The experimental code and data are available at https://github.com/logic-and-learning-lab/aaai22-dcc."
Open Datasets: Yes
  "Michalski Trains (Larson and Michalski 1977) is a classical problem. In inductive general game playing (IGGP) (Cropper, Evans, and Law 2020) agents are given game traces... We use the program synthesis dataset introduced by Cropper and Morel (2021a)."
Dataset Splits: No
  "We randomly sample the examples and split them into 80/20 train/test partitions."
Hardware Specification: Yes
  "We use a 3.8 GHz 8-Core Intel Core i7 with 32GB of ram."
Software Dependencies: No
  The paper mentions "Clingo (Gebser et al. 2014), an ASP system" as a component, but provides neither version numbers for the software needed to reproduce the experiments nor a list of key software components with versions.
Experiment Setup: Yes
  "We enforce a timeout of five minutes per task. We repeat all the experiments 20 times and measure the mean and standard deviation. We use a 3.8 GHz 8-Core Intel Core i7 with 32GB of ram. All the systems use a single CPU."
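The paper's central claim, quoted under Research Type above, is that learning from failures with constraints (POPPER/DCC) prunes the hypothesis space. The following toy sketch illustrates that idea only; it is an assumption-laden simplification, not the paper's algorithm (real hypotheses are logic programs, and `powerset`, `learn`, and the set-based hypothesis encoding are invented here for illustration):

```python
from itertools import combinations

def powerset(items):
    """All subsets of items, as frozensets, smallest first."""
    s = sorted(items)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def learn(universe, pos, neg):
    """Toy generate-constrain-test loop. A hypothesis is a set of covered
    items. Each failure prunes a whole region of the search space:
      - misses a positive example -> every subset also misses it
      - covers a negative example -> every superset also covers it
    Returns the first hypothesis that covers all of pos and none of neg."""
    space = sorted(powerset(universe), key=len)
    pruned = set()
    for h in space:
        if h in pruned:
            continue
        if not pos <= h:                            # too specific
            pruned.update(g for g in space if g <= h)
        elif h & neg:                               # too general
            pruned.update(g for g in space if g >= h)
        else:
            return h                                # complete and consistent
    return None
```

For example, `learn({1, 2, 3, 4}, pos={1, 2}, neg={3})` returns `frozenset({1, 2})` after pruning every subset of each failed candidate, which is the same divide-and-constrain intuition the paper evaluates at the scale of logic programs.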
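The Dataset Splits row quotes an 80/20 train/test split but the assessment is "No" because the exact partitions are not released. A minimal sketch of such a split, assuming a seeded shuffle (the seed and shuffling procedure are assumptions, not from the paper):

```python
import random

def split_80_20(examples, seed=0):
    """Randomly sample the examples into 80/20 train/test partitions.
    The seed is a hypothetical choice for reproducibility; the paper
    does not specify one."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```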