NeurASP: Embracing Neural Networks into Answer Set Programming

Authors: Zhun Yang, Adam Ishay, Joohyung Lee

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1 compares Acc_identify of M_identify alone, of the NeurASP program Π_sudoku \ {r} with M_identify, and of Π_sudoku with M_identify, as well as Acc_sol of Π_sudoku with M_identify.
Researcher Affiliation | Collaboration | Zhun Yang (1), Adam Ishay (1) and Joohyung Lee (1,2); (1) Arizona State University, Tempe, AZ, USA; (2) Samsung Research, Seoul, South Korea
Pseudocode | No | The paper provides examples of ASP rules but does not contain a dedicated pseudocode or algorithm block.
Open Source Code | Yes | The implementation of NeurASP, as well as the code used for the experiments, is publicly available online at https://github.com/azreasoners/NeurASP.
Open Datasets | Yes | For comparison, the same dataset and the same neural network structure as in [Manhaeve et al., 2018] are used to train the digit classifier M_digit in Π_digit. The network M_identify is pretrained on (image, label) pairs, where each image is a Sudoku board image generated by the OpenSky Sudoku Generator (http://www.opensky.ca/jdhildeb/software/sudokugen/), and the dataset from [Xu et al., 2018] is used.
Dataset Splits | Yes | The dataset is divided into 60/20/20 train/validation/test examples.
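The 60/20/20 train/validation/test split described above can be sketched as follows. This is an illustration only, not the authors' code; the function name and the fixed seed are assumptions introduced here for reproducibility of the example.

```python
import random

def split_60_20_20(examples, seed=0):
    """Shuffle and partition examples into 60% train, 20% validation, 20% test.

    Illustrative sketch (not from the paper): the seed and proportions
    follow the 60/20/20 split reported in the Dataset Splits row.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_60_20_20(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```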
Hardware Specification | Yes | All experiments in Section 4 were done on Ubuntu 18.04.2 LTS with two 10-core Intel(R) Xeon(R) E5-2640 v4 CPUs @ 2.40GHz and four GP104 [GeForce GTX 1080] GPUs.
Software Dependencies | No | The paper mentions using PyTorch and CLINGO but does not specify their version numbers.
Experiment Setup | No | The paper states the number of training epochs (e.g., '63 epochs of training', '500 epochs of training') and general neural network architectures (e.g., '5-layer Multi-Layer Perceptron'), but the main text does not provide specific hyperparameters such as learning rate, batch size, or optimizer settings.