Fast Abductive Learning by Similarity-based Consistency Optimization

Authors: Yu-Xuan Huang, Wang-Zhou Dai, Le-Wen Cai, Stephen H. Muggleton, Yuan Jiang

NeurIPS 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments show that the efficiency of ABLSim is significantly higher than that of state-of-the-art neuro-symbolic methods, allowing it to achieve better performance with less labeled data and weaker domain knowledge. This section presents the experimental results on four neuro-symbolic tasks, including two benchmark datasets and two hard tasks with increased perception difficulty, to demonstrate that ABLSim can perform more efficient and effective abduction than previous state-of-the-art methods by leveraging the similarity among samples.
Researcher Affiliation Academia 1National Key Laboratory for Novel Software Technology Nanjing University, Nanjing 210023, China {huangyx, cailw, jiangy}@lamda.nju.edu.cn 2Department of Computing, Imperial College London, London SW7 2AZ, UK {w.dai, s.muggleton}@imperial.ac.uk
Pseudocode Yes Algorithm 1 ABLSim Learning
Open Source Code Yes The code is available for download at https://github.com/AbductiveLearning/ABLSim
Open Datasets Yes This task was first introduced in [21]; the inputs are pairs of MNIST [17] images and the outputs are their sums. We also prepare a hard version of this task by replacing the MNIST images with CIFAR-10 [16] images. The HWF dataset [19] is also used.
Dataset Splits No The paper does not provide specific details on dataset splits (e.g., percentages or counts) for training, validation, and test sets. It mentions 'training images' but not how they are partitioned for validation.
Hardware Specification Yes All experiments are repeated five times on a server with Intel Xeon Gold 6248R CPU and Nvidia Tesla V100S GPU.
Software Dependencies No The paper mentions models and frameworks like 'LeNet [17]', 'ResNet-50 [12]', and the 'BERT model [7]', but does not provide specific version numbers for software dependencies such as Python, PyTorch, TensorFlow, or CUDA.
Experiment Setup Yes All methods share the same knowledge base and perception model (LeNet [17] for MNIST and ResNet-50 [12] for CIFAR-10), which are initialized randomly. For the non-pre-trained methods, including ABLSim, the perception model, a ResNet-50 [12], is initialized by self-supervised learning [3] on training images. We use a beam width of 600 in our implementation to achieve the balance between convergence rate and time complexity. (A minimal sketch of this setup follows the table.)
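
The experiment-setup row above mentions beam search over abduced labelings and a beam width of 600, and the paper's core idea is to rank candidate labelings by similarity among samples. The sketch below is a minimal, hypothetical illustration of that idea, assuming a toy MNIST-addition knowledge base and cosine similarity of feature embeddings; names such as `score_candidates` and `knowledge_base_consistent` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: rank knowledge-base-consistent labelings by how tightly
# samples sharing a label cluster in embedding space, then keep a beam of 600.
import numpy as np

BEAM_WIDTH = 600  # beam width reported in the experiment setup


def knowledge_base_consistent(labels, target_sum):
    """Toy domain knowledge for MNIST addition: the two digit labels must sum to the target."""
    return labels[0] + labels[1] == target_sum


def score_candidates(embeddings, candidate_labelings):
    """Score each candidate labeling by average intra-class cosine similarity (higher is better)."""
    scores = []
    for labeling in candidate_labelings:
        sims = []
        for label in set(labeling):
            idx = [i for i, l in enumerate(labeling) if l == label]
            if len(idx) < 2:
                continue  # a label used once contributes no pairwise similarity
            vecs = embeddings[idx]
            vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
            sim = vecs @ vecs.T
            sims.append(sim[np.triu_indices(len(idx), k=1)].mean())
        scores.append(np.mean(sims) if sims else 0.0)
    return np.asarray(scores)


# Example: two MNIST-addition pairs whose four digit images are abduced jointly.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 16))  # 4 digit images, 16-dim toy features
target_sums = [7, 7]                   # observed sums for the two pairs

# Enumerate labelings consistent with the knowledge base, then keep the
# highest-scoring ones up to the beam width.
candidates = [
    (a, b, c, d)
    for a in range(10) for b in range(10)
    for c in range(10) for d in range(10)
    if knowledge_base_consistent((a, b), target_sums[0])
    and knowledge_base_consistent((c, d), target_sums[1])
]
scores = score_candidates(embeddings, candidates)
beam = [candidates[i] for i in np.argsort(-scores)[:BEAM_WIDTH]]
print(f"kept {len(beam)} candidates; best labeling: {beam[0]}")
```

As the setup row notes, the beam width trades convergence rate against time complexity: a wider beam keeps more consistent labelings for pseudo-labeling the perception model but raises the cost of scoring them.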