Parallel Neurosymbolic Integration with Concordia

Authors: Jonathan Feldstein, Modestas Jurčius, Efthymia Tsamoura

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Concordia has been successfully applied to tasks beyond NLP and data classification, improving the accuracy of the state-of-the-art on collective activity detection, entity linking and recommendation tasks.
Researcher Affiliation | Collaboration | University of Edinburgh, Edinburgh, United Kingdom; BENNU.AI, Edinburgh, United Kingdom; Mintis AI, Kaunas, Lithuania; Samsung AI, Cambridge, United Kingdom.
Pseudocode | Yes | Algorithm 1 INFERC(x, ν, τ, θ, λ, γ) ... Algorithm 2 UPDATEC(x, y, ν, τ, θ_t, λ_t, γ_t) ... Algorithm 3 TRANSLATE(x, ν, θ, L) ...
Open Source Code | Yes | Available on https://github.com/jonathanfeldstein/Concordia
Open Datasets | Yes | We used the 2020 Yelp and MovieLens-100k datasets (Harper & Konstan, 2015)... We used the Collective Activity Augmented Dataset (CAAD) (Choi et al., 2011)... We used the PubMed Parsed dataset from (Moen & Ananiadou, 2013)
Dataset Splits | Yes | We used the train and test splits proposed in (Qi et al., 2018), namely choosing 2/3 of the video sequences for training and the rest for testing. ... We used a 90%/10% training/test split for both Yelp and MovieLens-100k.
Hardware Specification | Yes | All experiments ran on a Linux machine with an NVidia GeForce GTX 1080 Ti GPU, 64 Intel(R) Xeon(R) Gold 6130 CPUs, and 256GB of RAM.
Software Dependencies | Yes | Concordia has been developed in PyTorch 1.10. The logic component of each task was implemented using the pslpython library. (See the sketch below.)
Experiment Setup | Yes | To train MobileNet and Inception-v3 we used a minibatch of size 1 and set the learning rate to 0.00001. ... The batch size used for NNMF was 32, the learning rate 0.001, and the L2 norm set to 0.01 for regularization, and for NeuMF the batch size was 16, the learning rate 0.001, and the L2 norm was 0.01. The parameters for GraphRec were as follows: batch size = 1000, learning rate was set to 0.00003, the L2 norm was set to 0.05 for the user features and 0.02 for the item features. In all three baseline models the optimizer was set to Adam with loss function set to RMSE. ... In the case of the BiLSTM, ... We used a learning rate of 0.001 and batch size 64. In the case of BERT, ... We used a learning rate of 0.00003 and batch size 16. The loss function used in all experiments was cross-entropy. (See the configuration sketch below.)
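
The Software Dependencies row states that each task's logic component was implemented with the pslpython library. As a minimal sketch only — the model name, predicates, rule, and data files below are hypothetical illustrations, not Concordia's actual PSL programs — a pslpython model is typically assembled along these lines:

```python
from pslpython.model import Model
from pslpython.partition import Partition
from pslpython.predicate import Predicate
from pslpython.rule import Rule

# Hypothetical toy program; Concordia's real PSL programs ship with its repository.
model = Model('toy-recommendation')

# Declare predicates (names and arities here are illustrative assumptions).
similar = Predicate('Similar', closed=True, size=2)   # observed user similarity
likes = Predicate('Likes', closed=False, size=2)      # user-item preference to infer
model.add_predicate(similar)
model.add_predicate(likes)

# One weighted first-order rule: similar users tend to like the same items.
model.add_rule(Rule('1.0: Similar(U1, U2) & Likes(U1, I) -> Likes(U2, I) ^2'))

# Ground atoms are loaded from tab-separated files (placeholder paths).
similar.add_data_file(Partition.OBSERVATIONS, 'similar_obs.txt')
likes.add_data_file(Partition.OBSERVATIONS, 'likes_obs.txt')
likes.add_data_file(Partition.TARGETS, 'likes_targets.txt')

# MAP inference (requires the PSL Java runtime); the result maps each open
# predicate to a pandas DataFrame of inferred truth values.
results = model.infer()
```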
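
The Experiment Setup row lists, per model, the optimizer (Adam), batch sizes, learning rates, and L2 regularization strengths, with RMSE loss for the recommendation baselines and cross-entropy elsewhere. A minimal PyTorch sketch of one such configuration (NNMF: batch size 32, learning rate 0.001, L2 norm 0.01) is given below; the network itself is a placeholder, and mapping the reported "L2 norm" onto Adam's weight_decay is an assumption rather than something the paper specifies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network standing in for the NNMF baseline; the real
# architectures are defined in the Concordia repository.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# Reported hyperparameters for NNMF: batch size 32, learning rate 0.001,
# L2 norm 0.01 (interpreted here as weight_decay -- an assumption).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-2)

def rmse_loss(pred, target):
    # "loss function set to RMSE": square root of the mean squared error.
    return torch.sqrt(F.mse_loss(pred, target))

# One illustrative optimization step on random stand-in data.
x, y = torch.randn(32, 32), torch.randn(32, 1)
loss = rmse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```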