Composing Neural Learning and Symbolic Reasoning with an Application to Visual Discrimination

Authors: Adithya Murali, Atharva Sehgal, Paul Krogmeier, P. Madhusudan

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement and evaluate our framework on the real-world datasets and show that it is effective and robust. It solves 68% and 80% of the puzzles in the two datasets and gives sensible discriminators. We perform ablation studies that examine the effectiveness of the domain-specific logic FO-SL as well as the synthesis algorithm. We also compare our framework with purely neural baselines based on image similarity models [Wang et al., 2014] and prototypical networks [Snell et al., 2017], and we show that they perform poorly (<40%).
Researcher Affiliation | Academia | 1 Department of Computer Science, University of Illinois at Urbana-Champaign; 2 Department of Computer Science, University of Texas at Austin
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | Yes | The datasets, code, and the website of VDPs can be found at: https://github.com/muraliadithya/vdp
Open Datasets | Yes | We use YOLOv4 [Wang et al., 2021], a CNN-based object detector trained on the ImageNet and COCO datasets, to predict multiple objects with bounding boxes and class labels. The GQA VDP dataset is created automatically using the GQA dataset [Hudson and Manning, 2019]. The CLEVR domain [Johnson et al., 2017]... We create 11,600 puzzles across four datasets.
Dataset Splits | No | We fine-tune a ResNet18 [He et al., 2016] + MLP architecture pretrained on CIFAR10 using 6 concept classes, validated against a held-out set of classes. (This statement refers to a baseline model, not to a training/validation split for the authors' main proposed framework or datasets.)
Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, etc.) used for running experiments were mentioned.
Software Dependencies | No | We use a pretrained model of YOLOv4 [Wang et al., 2021]... and uses the SAT-solver Z3 [de Moura and Bjørner, 2008]. (Specific versions of this software are not provided.)
Experiment Setup | No | No specific experimental setup details, such as concrete hyperparameter values, training configurations, or system-level settings, were explicitly provided for training the main framework.