Semantic Probabilistic Layers for Neuro-Symbolic Learning

Authors: Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van den Broeck, Antonio Vergari

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate that SPLs outperform these competitors in terms of accuracy on challenging SOP tasks including hierarchical multi-label classification, pathfinding and preference learning, while retaining perfect constraint satisfaction.
Researcher Affiliation | Academia | Kareem Ahmed, CS Department, UCLA (ahmedk@cs.ucla.edu); Stefano Teso, CIMeC and DISI, University of Trento (stefano.teso@unitn.it); Kai-Wei Chang, CS Department, UCLA (kwchang@cs.ucla.edu); Guy Van den Broeck, CS Department, UCLA (guyvdb@cs.ucla.edu); Antonio Vergari, School of Informatics, University of Edinburgh (avergari@ed.ac.uk)
Pseudocode | No | No pseudocode or algorithm block was found.
Open Source Code | Yes | Our code is made publicly available on GitHub at github.com/KareemYousrii/SPL.
Open Datasets | Yes | We use preference ranking data over 10 types of sushi for 5,000 individuals, taken from [49], and a 60/20/20 split.
Dataset Splits | Yes | We use preference ranking data over 10 types of sushi for 5,000 individuals, taken from [49], and a 60/20/20 split.
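The 60/20/20 train/validation/test split quoted above could be reproduced along these lines; this is a hypothetical sketch, not the authors' code, and the seed and helper name are assumptions.

```python
# Hypothetical sketch of a 60/20/20 split over the 5,000 sushi preference
# rankings described above. Not taken from the paper's repository.
import random

def split_60_20_20(examples, seed=0):
    """Shuffle examples and partition them into train/validation/test (60/20/20)."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n_train = int(0.6 * len(examples))
    n_val = int(0.2 * len(examples))
    train = [examples[i] for i in idx[:n_train]]
    val = [examples[i] for i in idx[n_train:n_train + n_val]]
    test = [examples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_60_20_20(list(range(5000)))
# 3000 / 1000 / 1000 examples
```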
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, clock speeds, or memory amounts) were provided for running the experiments.
Software Dependencies | No | No software dependencies with version numbers were listed; the paper mentions only 'PyTorch [54]' without a version.
Experiment Setup | Yes | We used the validation splits to determine the number of layers in the gating function as well as the overparameterization, keeping all other hyperparameters fixed. The final models were obtained by training using a batch size of 128 and early stopping on the validation set.
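The training procedure quoted above (fixed batch size, early stopping on the validation set) can be sketched as follows. This is a generic illustration, assuming hypothetical `train_epoch` and `eval_loss` callables; the patience value is an assumption, as the paper does not report one.

```python
# Hypothetical early-stopping loop matching the setup quoted above.
# `train_epoch` and `eval_loss` are placeholder callables, not from the
# paper's code; `patience` is an assumed hyperparameter.

def train_with_early_stopping(train_epoch, eval_loss, max_epochs=100, patience=10):
    """Train until the validation loss fails to improve for `patience` epochs."""
    best_loss, best_epoch = float("inf"), -1
    for epoch in range(max_epochs):
        train_epoch()          # one pass over minibatches (batch size 128)
        loss = eval_loss()     # loss on the validation split
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break              # early stop: no improvement for `patience` epochs
    return best_loss
```

In practice one would also checkpoint the model at each new best validation loss and restore that checkpoint after stopping.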