Neural-Symbolic Integration: A Compositional Perspective

Authors: Efthymia Tsamoura, Timothy Hospedales, Loizos Michael

AAAI 2021, pp. 5051-5060

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate, in what we believe to be a more comprehensive manner than typically found in the relevant literature, the performance of our framework against three frameworks that share the same goals as ours: DEEPPROBLOG (Manhaeve et al. 2018), NEURASP (Yang, Ishay, and Lee 2020), and ABL (Dai et al. 2019). We demonstrate the superior performance of our framework, both in terms of training efficiency and accuracy, over a wide range of scenarios exhibiting the features described above.
Researcher Affiliation | Collaboration | Efthymia Tsamoura (1), Timothy Hospedales (1), Loizos Michael (2,3); (1) Samsung AI Research, (2) Open University of Cyprus, (3) CYENS Center of Excellence
Pseudocode | Yes | Algorithm 1 TRAIN(x, f(x), n_t) (a hedged Python sketch of this loop appears after the table):
  1: ω = n_t(x)
  2: ϕ = abduce(T, f(x))  (basic form)  or  ϕ = abduce(T_r(ω), f(x))  (NGA form)
  3: ℓ = loss(ϕ, r, ω)  (using WMC)
  4: n_{t+1} = backpropagate(n_t, ℓ)
  5: return n_{t+1}
Open Source Code | Yes | The code and data to reproduce the experiments are available at: https://bitbucket.org/tsamoura/neurolog/src/master/.
Open Datasets | Yes | Benchmark datasets have been used to provide inputs to the neural module as follows: MNIST (LeCun et al. 1998) for images of digits; HASY (Thoma 2017) for images of math operators; GTSRB (Stallkamp et al. 2011) for images of road signs. (A data-loading sketch appears after the table.)
Dataset Splits | No | The paper mentions a 'training set' and 'testing accuracy' but does not explicitly describe a validation set or validation split.
Hardware Specification | Yes | Experiments were run on an Ubuntu 16.04 Linux PC with an Intel i7 64-bit CPU and 94.1 GiB of RAM.
Software Dependencies | Yes | Abductive feedback in NEUROLOG was computed using the A-system (Nuffelen and Kakas 2001) running over SICStus Prolog 4.5.1.
Experiment Setup | Yes | Each system was trained on a training set of 3000 samples and was run independently 10 times per scenario to account for the random initialization of the neural module and other system stochasticity. Training was performed over 3 epochs for NEUROLOG, DEEPPROBLOG, and NEURASP, while the training loop of ABL was invoked 3000 times. In all systems, the neural module was trained using the Adam algorithm with a learning rate of 0.001. (An optimizer-configuration sketch appears after the table.)
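
To make the TRAIN loop in the Pseudocode row concrete, the following is a minimal sketch of one training step. It assumes a PyTorch neural module, an externally supplied abduce function that returns abductive proofs as lists of (input index, label index) assignments, and a WMC-style loss that sums the probabilities of mutually exclusive proofs. These names and representations are illustrative assumptions, not the authors' implementation, which delegates abduction to the A-system and computes the loss by weighted model counting over the abduced formula.

    import torch

    def train_step(x, target_output, net, optimizer, abduce):
        # One TRAIN step (illustrative sketch, not the authors' code).
        # `abduce` is assumed to return a non-empty list of proofs, each proof
        # being a list of (input_index, label_index) pairs that make the
        # symbolic theory T entail the observed output f(x).
        omega = torch.softmax(net(x), dim=-1)      # step 1: omega = n_t(x)
        proofs = abduce(target_output)             # step 2: abductive feedback phi

        # Step 3: WMC-style loss, assuming the proofs are mutually exclusive
        # conjunctions over the neural outputs.
        prob = omega.new_zeros(())
        for proof in proofs:
            p = omega.new_ones(())
            for input_idx, label_idx in proof:
                p = p * omega[input_idx, label_idx]
            prob = prob + p
        loss = -torch.log(prob + 1e-12)

        optimizer.zero_grad()
        loss.backward()                            # step 4: backpropagate
        optimizer.step()                           # update, yielding n_{t+1}
        return loss.item()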
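
The benchmark datasets listed in the Open Datasets row can be obtained, for example, through torchvision; this is a sketch under that assumption and is not necessarily how the released repository loads its data.

    import torch
    from torchvision import datasets, transforms

    to_tensor = transforms.ToTensor()

    # MNIST digits (LeCun et al. 1998); downloaded automatically by torchvision.
    mnist = datasets.MNIST(root="data", train=True, transform=to_tensor, download=True)

    # GTSRB road signs (Stallkamp et al. 2011); available in torchvision >= 0.12.
    gtsrb = datasets.GTSRB(root="data", split="train", transform=to_tensor, download=True)

    # HASY math operators (Thoma 2017) is not bundled with torchvision and would
    # require a custom Dataset reading the HASY image/CSV distribution.

    mnist_loader = torch.utils.data.DataLoader(mnist, batch_size=32, shuffle=True)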
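
The reported settings in the Experiment Setup row correspond to the configuration below. The network architecture is a placeholder assumption, since the excerpt only specifies the optimizer, learning rate, sample count, run count, and epoch count.

    import torch
    from torch import nn

    # Placeholder neural module (assumption): a small classifier over 28x28 inputs.
    net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

    # As reported: Adam with a learning rate of 0.001.
    optimizer = torch.optim.Adam(net.parameters(), lr=0.001)

    EPOCHS = 3            # NEUROLOG, DEEPPROBLOG, NEURASP
    TRAIN_SAMPLES = 3000  # training-set size per scenario
    RUNS = 10             # independent runs per scenario (random initialization)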