Compositional Neural Logic Programming

Authors: Son N. Tran

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the experiments, we demonstrate the advantages of CNLP in discriminative tasks and generative tasks." Section 4 (Experiments) contains subsections such as 4.1 Comparison KB, 4.2 Addition, and 4.3 Semantic Image Interpretation, where performance metrics and comparisons are provided.
Researcher Affiliation | Academia | "Son N. Tran, University of Tasmania, sn.tran@utas.edu.au"
Pseudocode | Yes | "Algorithm 1 Voting Backward-Forward Chaining (sketch)"
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | "10000 digit images and their labels are extracted from the MNIST dataset as the facts for the digit predicate." (http://yann.lecun.com/exdb/mnist/) A data-preparation sketch follows the table.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | Yes | "We run 1000 queries for the addition(*,*,?) task using a computer with a quad-core 3.6 GHz CPU and 16 GB of RAM."
Software Dependencies | No | The paper mentions the Adam optimizer but does not name supporting libraries or frameworks with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "All models were trained by Adam optimizer with the batch size 64." and "For a fair comparison, we also use the RMSProp optimiser for training as in [Donadello et al., 2017]." A training-configuration sketch follows the table.
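
The Open Datasets row quotes the paper's use of 10,000 MNIST digit images and labels as facts for the digit predicate. The sketch below shows one way such facts could be assembled; the use of torchvision and the (image tensor, label) fact representation are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: extract 10,000 MNIST digit images and labels as ground facts
# for a digit(image, label) predicate. torchvision and this fact format are
# assumptions, not details from the paper.
from torchvision import datasets, transforms

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

# Take the first 10,000 (image, label) pairs as facts for the digit predicate.
digit_facts = [(image, int(label)) for image, label in
               (mnist[i] for i in range(10_000))]

print(len(digit_facts))          # 10000
print(digit_facts[0][0].shape)   # torch.Size([1, 28, 28])
print(digit_facts[0][1])         # e.g. 5
```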
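
The Experiment Setup row quotes training with the Adam optimizer at batch size 64, and RMSProp for the comparison with [Donadello et al., 2017]. The following PyTorch sketch reproduces that configuration only; the network, learning rate, and data are placeholders, since the paper's quoted text does not specify them.

```python
# Sketch of the reported training configuration: Adam optimizer, batch size 64.
# The model, learning rate, and dataset below are placeholder assumptions,
# not values taken from the paper.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # paper: Adam, batch size 64
# For the semantic image interpretation comparison the paper instead uses RMSProp,
# as in Donadello et al. (2017):
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

dataset = TensorDataset(torch.randn(640, 1, 28, 28), torch.randint(0, 10, (640,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```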