MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks
Authors: Nick Hoernle, Rafael Michael Karampatsis, Vaishak Belle, Kobi Gal
AAAI 2022, pp. 5700–5709
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. (See the constraint-satisfaction sketch after this table.) |
| Researcher Affiliation | Academia | Nicholas Hoernle¹, Rafael Michael Karampatsis¹, Vaishak Belle¹ ², Kobi Gal¹ ³ (¹ University of Edinburgh, ² Alan Turing Institute, ³ Ben-Gurion University) |
| Pseudocode | No | An overview of the proposed architecture, with a general algorithm detailing how to incorporate domain constraints into training a network, can be found in the section "Architecture Overview of MultiplexNet" in the supplementary material. |
| Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in the paper. |
| Open Datasets | Yes | Second, we present an experiment on the popular MNIST data set (LeCun, Cortes, and Burges 2010) which combines structured data with domain knowledge. ... In our third experiment, we apply our approach to the well known image classification task on the CIFAR100 data set (Krizhevsky and Hinton 2009). |
| Dataset Splits | Yes | We selected the runs based on the loss (the ELBO) on a validation set (i.e., the labels were still not used in selecting the run). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using Z3 as an off-the-shelf solver, but does not provide version numbers for it or for any other software dependencies such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | We vary the size of the training data set with N ∈ {100, 250, 500, 1000} as the four experimental conditions. ... Additional experimental details, as well as the full loss function, can be found in Appendix A. ... We use a WideResNet 28-10 (Zagoruyko and Komodakis 2016) model in all of the experimental conditions. (See the experiment-harness sketch below.) |
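To make the constraint-satisfaction idea quoted in the Research Type row concrete, here is a minimal PyTorch sketch of a MultiplexNet-style mixture over constraint disjuncts. It is an illustration under assumptions, not the authors' implementation: the DNF constraint (0 < y < 1) ∨ (2 < y < 3), the layer sizes, and the Gaussian likelihood are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical DNF constraint on a scalar output y:
#   (0 < y < 1)  OR  (2 < y < 3)
# One head per disjunct; a sigmoid rescaled into the disjunct's
# interval guarantees that head's output satisfies its term.
INTERVALS = [(0.0, 1.0), (2.0, 3.0)]

class MultiplexHead(nn.Module):
    def __init__(self, in_dim, lo, hi):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1)
        self.lo, self.hi = lo, hi

    def forward(self, h):
        # Squash into (lo, hi): the constraint holds by construction.
        return self.lo + (self.hi - self.lo) * torch.sigmoid(self.fc(h))

class MultiplexNetSketch(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [MultiplexHead(hidden, lo, hi) for lo, hi in INTERVALS])
        # Categorical "multiplexer" that selects among the disjunct heads.
        self.selector = nn.Linear(hidden, len(INTERVALS))

    def forward(self, x):
        h = self.body(x)
        ys = torch.cat([head(h) for head in self.heads], dim=-1)  # (B, k)
        log_pi = F.log_softmax(self.selector(h), dim=-1)          # (B, k)
        return ys, log_pi

def mixture_nll(ys, log_pi, target, sigma=0.1):
    # Marginalise the selector out of a Gaussian likelihood (up to a
    # constant): log p(t|x) = logsumexp_k [log pi_k - (t - y_k)^2 / 2s^2].
    log_lik = -0.5 * ((target.view(-1, 1) - ys) / sigma) ** 2
    return -torch.logsumexp(log_pi + log_lik, dim=-1).mean()

if __name__ == "__main__":
    model = MultiplexNetSketch()
    ys, log_pi = model(torch.randn(8, 16))
    print(mixture_nll(ys, log_pi, torch.full((8,), 2.5)))
```

Whichever head the selector picks, its output lies inside one disjunct of the constraint, which is the sense in which the constraints are fully satisfied rather than merely penalised in the loss.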
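Similarly, the protocol quoted in the Dataset Splits and Experiment Setup rows (varying the training-set size N and selecting runs by validation loss without using labels) can be outlined as below. Here `train_model` and `validation_loss` are hypothetical stand-ins for the paper's training and ELBO-evaluation routines, and the seed list is an arbitrary choice for the example.

```python
import numpy as np

# Hypothetical harness mirroring the quoted protocol: vary the
# training-set size and pick the run with the best validation loss
# (the ELBO), never touching the labels during selection.
TRAIN_SIZES = [100, 250, 500, 1000]

def run_experiment(train_model, X, X_val, seeds=(0, 1, 2)):
    results = {}
    for n in TRAIN_SIZES:
        runs = []
        for seed in seeds:
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=n, replace=False)
            model = train_model(X[idx], seed=seed)  # assumed callable
            runs.append((model.validation_loss(X_val), model))
        # Label-free model selection: lowest validation ELBO loss wins.
        results[n] = min(runs, key=lambda r: r[0])
    return results
```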