Abductive Knowledge Induction from Raw Data

Authors: Wang-Zhou Dai, Stephen Muggleton

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that Meta_Abd not only outperforms the compared systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. The experiments learn recursive arithmetic and sorting algorithms from images of handwritten digits, aiming to address two questions: (1) Can Meta_Abd learn first-order logic programs and train perceptual neural networks jointly? (2) Given the same or less domain knowledge, is hybrid modelling, which directly leverages background knowledge in symbolic form, better than end-to-end learning?
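As a concrete illustration of the two tasks (not the authors' code): a sequence of images depicting the digits 3, 5 and 2 is labelled 10 in the MNIST-sum task and 30 in the MNIST-product task. A minimal Python check:

    from functools import reduce
    import operator

    digits = [3, 5, 2]  # ground-truth digits behind three handwritten images
    assert sum(digits) == 10                       # MNIST-sum label
    assert reduce(operator.mul, digits, 1) == 30   # MNIST-product label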
Researcher Affiliation | Academia | Wang-Zhou Dai, Stephen Muggleton, Department of Computing, Imperial College London, London, UK. {w.dai, s.muggleton}@imperial.ac.uk
Pseudocode | Yes | Figure 2: Prolog code for Meta_Abd.
Open Source Code | Yes | Code & data: https://github.com/AbductiveLearning/Meta_Abd
Open Datasets | Yes | The inputs of the two tasks are sequences of randomly chosen MNIST digits; the numerical outputs are the sum and product of the digits, respectively. Code & data: https://github.com/AbductiveLearning/Meta_Abd
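A minimal sketch of how such examples could be assembled from the public MNIST data, assuming torchvision is installed; make_example and the sequence length are illustrative choices, not the authors' pipeline:

    import random
    import torch
    from torchvision import datasets, transforms

    mnist = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())

    def make_example(seq_len=3):
        # Draw seq_len random MNIST images; the label is the sum of their digits.
        idxs = random.sample(range(len(mnist)), seq_len)
        imgs = torch.stack([mnist[i][0] for i in idxs])   # (seq_len, 1, 28, 28)
        label = sum(mnist[i][1] for i in idxs)
        return imgs, label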
Dataset Splits | Yes | The dataset contains 3000 and 1000 examples for training and validation, respectively; the test data of each length has 10,000 examples.
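Under the same assumptions as the sketch above, the reported split sizes would look like this; the test-sequence lengths shown are placeholders, only the count of 10,000 examples per length comes from the paper:

    train = [make_example(seq_len=3) for _ in range(3000)]
    val   = [make_example(seq_len=3) for _ in range(1000)]
    test  = {n: [make_example(seq_len=n) for _ in range(10_000)]
             for n in (5, 10, 100)}   # hypothetical test lengths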
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions Prolog and notes that 'CLP(Z) is a constraint logic programming package accessible at https://github.com/triska/clpz', but does not provide version numbers for its key software dependencies.
Experiment Setup | Yes | A convnet processes the input images for the recurrent networks and ProbLog programs, as described by [Trask et al., 2018] and [Manhaeve et al., 2018]; it also serves as the perception model of Meta_Abd to output the probabilistic facts. Each experiment is carried out five times, and the averaged results are reported. The two tasks are trained sequentially as a curriculum: Meta_Abd learns the sub-task in the first five epochs and then re-uses the learned models to learn bogosort. Meta_Abd uses an MLP attached to the same untrained convnet as the other models to produce the dyadic probabilistic facts nn_pred.
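A rough PyTorch sketch of the shared perception stack described above: a small convnet maps each image to a distribution over digits (the monadic probabilistic facts), and an MLP over the same convnet's features scores image pairs for the dyadic nn_pred facts. The layer sizes are assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class ConvNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten())
            self.digit_head = nn.Linear(32 * 7 * 7, 10)

        def forward(self, x):  # x: (B, 1, 28, 28)
            # Distribution over the 10 digit classes for each image.
            return torch.softmax(self.digit_head(self.features(x)), dim=-1)

    class PairMLP(nn.Module):
        # Scores an image pair for a dyadic nn_pred probabilistic fact.
        def __init__(self, convnet):
            super().__init__()
            self.convnet = convnet  # shared with the monadic model, initially untrained
            self.mlp = nn.Sequential(
                nn.Linear(2 * 32 * 7 * 7, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, a, b):
            za = self.convnet.features(a)
            zb = self.convnet.features(b)
            return torch.sigmoid(self.mlp(torch.cat([za, zb], dim=-1)))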