Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning

Authors: Charles Andrew Dickens, Changyu Gao, Connor Pryor, Stephen Wright, Lise Getoor

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we provide extensive empirical evaluations across 8 datasets covering a range of tasks and demonstrate our learning framework achieves up to a 16% point prediction performance improvement over alternative learning methods.
Researcher Affiliation | Academia | 1) Department of Computer Science and Engineering, University of California, Santa Cruz, CA 95060; 2) Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706.
Pseudocode | Yes | Algorithm 1 NeSy-EBM Learning Framework (Page 4), Algorithm 2 Full NeSy-EBM Learning Framework (Page 10), and Algorithm 3 Dual LCQP Block Coordinate Descent (Page 18).
Open Source Code | Yes | Code for the NeuPSL implementation of our proposed learning framework and inference algorithms is available at: https://github.com/linqs/psl.
Open Datasets | Yes | All code and data is available at https://github.com/linqs/dickens-icml24. ... The 5 data splits and the NeuPSL model we evaluate in this paper originated from Sridhar et al. (2015). ... Citeseer and Cora are citation networks introduced by Sen et al. (2008). ... The data and NeuPSL model are available at: https://github.com/linqs/psl-examples/tree/main/epinions. ... The data and NeuPSL models are available at: https://github.com/linqs/psl-examples/tree/main/drug-drug-interaction. ... The data and NeuPSL model are available at: https://github.com/linqs/psl-examples/tree/main/yelp. ... MNIST Addition is a canonical NeSy image classification dataset first introduced by Manhaeve et al. (2018).
Dataset Splits | Yes | Specifically, for each of the 10 folds, we randomly sample 5% of the node labels for training, 5% of the node labels for validation, and 1,000 for testing. ... This process is repeated to create five corresponding validation and test splits, with 10,000 MNIST examples being sampled per test split from the original MNIST dataset. (A hedged sketch of this split procedure is given after the table.)
Hardware Specification | Yes | All timing experiments were performed on an Ubuntu 22.04.1 Linux machine with Intel Xeon Processor E5-2630 v4 at 3.10GHz and 128 GB of RAM.
Software Dependencies | No | The paper mentions software like 'Gurobi' and optimizers like 'Adam', and the operating system 'Ubuntu 22.04.1 Linux machine', but it does not provide specific version numbers for software dependencies such as Python, PyTorch, or the exact version of Gurobi used.
Experiment Setup | Yes | Details on the datasets, hardware specifications, hyperparameter searches, and model architectures are provided in Appendix F. ... We run a hyperparameter search, detailed in Appendix F.4, for each algorithm. ... Hyperparameters used for SP and MSE learning are reported in Appendix F.5. A hyperparameter search (detailed in Appendix F.6) is performed over learning steplengths, regularizations, and parameters for Algorithm 1.
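The Dataset Splits row describes a per-fold sampling protocol: for each of 10 folds, 5% of node labels are drawn for training, 5% for validation, and 1,000 for testing. The following minimal Python sketch illustrates one way such splits could be generated; the function and parameter names (make_fold_splits, train_frac, etc.) are hypothetical and are not taken from the paper or its released code.

```python
# Hypothetical sketch of the per-fold node-label split protocol described
# in the "Dataset Splits" row. Names and defaults are illustrative only.
import numpy as np

def make_fold_splits(node_ids, num_folds=10, train_frac=0.05,
                     valid_frac=0.05, test_size=1000, seed=0):
    """Return a list of (train, valid, test) index arrays, one per fold."""
    rng = np.random.default_rng(seed)
    node_ids = np.asarray(node_ids)
    n = len(node_ids)
    n_train = int(train_frac * n)
    n_valid = int(valid_frac * n)

    splits = []
    for _ in range(num_folds):
        # Shuffle once per fold, then carve out disjoint train/valid/test sets.
        perm = rng.permutation(node_ids)
        train = perm[:n_train]
        valid = perm[n_train:n_train + n_valid]
        test = perm[n_train + n_valid:n_train + n_valid + test_size]
        splits.append((train, valid, test))
    return splits

# Example: 10 folds over a citation network with 20,000 labeled nodes.
folds = make_fold_splits(np.arange(20000))
```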