Deep Differentiable Logic Gate Networks

Authors: Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To empirically validate our method, we perform an array of experiments. We start with the three MONK data sets and continue to the Adult Census and Breast Cancer data sets. For each experiment, we compare our method to other methods in terms of model memory footprint, evaluation speed, and accuracy. To demonstrate that our method also performs well on image recognition, we benchmark it on the MNIST as well as the CIFAR-10 data sets."
Researcher Affiliation | Collaboration | Felix Petersen (Stanford University; University of Konstanz), mail@felix-petersen.de; Christian Borgelt (University of Salzburg), christian@borgelt.net; Hilde Kuehne (University of Frankfurt; MIT-IBM Watson AI Lab), kuehne@uni-frankfurt.de; Oliver Deussen (University of Konstanz), oliver.deussen@uni.kn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). (A hedged sketch of the core gate relaxation is given below the table.)
Open Source Code | Yes | "The source code will be publicly available at github.com/Felix-Petersen/difflogic. [...] We will release the source code of this work to the community to foster future research on learning logic gate networks."
Open Datasets | Yes | "We start with the three MONK data sets [40] and continue to the Adult Census [41] and Breast Cancer data sets [42]. [...] We start by considering MNIST [43]. [...] In addition to MNIST, we also benchmark our method on CIFAR-10 [50]."
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) needed to reproduce the data partitioning. It uses standard datasets such as MNIST and CIFAR-10, which ship with predefined train/test splits, but does not state explicitly that these splits were used. (See the data-loading sketch below the table.)
Hardware Specification | Yes | "Times (T.) are inference times per image, the GPU is an NVIDIA A6000, and the CPU is a single thread at 2.5 GHz." (See the timing sketch below the table.)
Software Dependencies | No | The paper mentions the Adam optimizer and refers to PyTorch [56], but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We train all models with the Adam optimizer [33] at a constant learning rate of 0.01. [...] In all reported experiments, we use the same number of neurons in each layer (except for the input) and between 4 and 8 layers, which we call straight network." (See the training-setup sketch below the table.)
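
The sketches below illustrate points raised in the table; they are not the authors' code. Since the paper contains no pseudocode (Pseudocode row), the first sketch shows the core idea it describes: each logic gate neuron reads two real-valued inputs in [0, 1] and outputs a convex combination of the 16 real-valued relaxations of the two-input Boolean functions, weighted by a softmax over 16 learnable logits. The class name, the fixed random wiring, and the initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

def real_valued_ops(a, b):
    # The 16 two-input Boolean functions relaxed to real-valued logic on a, b in [0, 1],
    # e.g. AND -> a*b, OR -> a + b - a*b, XOR -> a + b - 2*a*b.
    return torch.stack([
        torch.zeros_like(a),      # FALSE
        a * b,                    # a AND b
        a - a * b,                # a AND NOT b
        a,                        # a
        b - a * b,                # NOT a AND b
        b,                        # b
        a + b - 2 * a * b,        # a XOR b
        a + b - a * b,            # a OR b
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT b
        1 - b + a * b,            # a OR NOT b
        1 - a,                    # NOT a
        1 - a + a * b,            # NOT a OR b
        1 - a * b,                # NAND
        torch.ones_like(a),       # TRUE
    ], dim=-1)

class DiffLogicLayer(nn.Module):
    """Illustrative layer of differentiable logic gate neurons (name and wiring are assumptions)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Fixed random wiring: each neuron reads two of the layer's inputs.
        self.register_buffer("idx_a", torch.randint(0, in_dim, (out_dim,)))
        self.register_buffer("idx_b", torch.randint(0, in_dim, (out_dim,)))
        # One learnable distribution over the 16 gate types per neuron.
        self.logits = nn.Parameter(torch.zeros(out_dim, 16))

    def forward(self, x):                        # x: (batch, in_dim), values in [0, 1]
        a, b = x[:, self.idx_a], x[:, self.idx_b]
        ops = real_valued_ops(a, b)              # (batch, out_dim, 16)
        w = torch.softmax(self.logits, dim=-1)   # (out_dim, 16), broadcasts over the batch
        return (ops * w).sum(dim=-1)             # (batch, out_dim)
```

After training, each neuron can be discretized to its most probable gate (argmax over the 16 logits), turning the network into a plain logic circuit for inference.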
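
The Open Datasets and Dataset Splits rows note that the benchmarks are public but that the exact partitioning is not spelled out. A minimal loading sketch using the standard torchvision train/test splits follows; that these default splits match the paper's setup is an assumption, and the batch size is a placeholder.

```python
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # maps pixels into [0, 1]

# Standard torchvision train/test splits; whether these match the paper's
# partitioning is an assumption.
mnist_train = datasets.MNIST("data", train=True,  download=True, transform=to_tensor)
mnist_test  = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
cifar_train = datasets.CIFAR10("data", train=True,  download=True, transform=to_tensor)
cifar_test  = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

train_loader = torch.utils.data.DataLoader(mnist_train, batch_size=128, shuffle=True)
test_loader  = torch.utils.data.DataLoader(mnist_test,  batch_size=128, shuffle=False)
```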
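
For the Experiment Setup row, a minimal sketch of the quoted configuration: Adam at a constant learning rate of 0.01 and a "straight" network with the same number of neurons in every layer. It reuses DiffLogicLayer and train_loader from the sketches above; the width, the depth of 6, the flattened-MNIST input size, and the group-sum readout over equal-sized groups of output neurons are assumptions.

```python
import torch
import torch.nn as nn

# "Straight" network: the same number of neurons in every layer. Width, depth,
# and the input size of 784 are placeholders, not the paper's values.
width, depth, num_classes = 1000, 6, 10
layers = [DiffLogicLayer(784, width)] + [DiffLogicLayer(width, width) for _ in range(depth - 1)]
model = nn.Sequential(*layers)

# Adam at a constant learning rate of 0.01, as quoted from the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for x, y in train_loader:
    x = x.view(x.size(0), -1)  # flatten images to [0, 1]-valued inputs
    # Aggregate output neurons into class scores by summing equal-sized groups
    # (a simplified stand-in for the paper's per-class aggregation of output gates).
    scores = model(x).view(x.size(0), num_classes, -1).sum(dim=-1)
    loss = criterion(scores, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```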
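
The Hardware Specification row reports per-image inference times on an NVIDIA A6000 GPU and a single CPU thread at 2.5 GHz. The sketch below shows one way such a number can be measured; the warm-up count, iteration count, and the use of torch.set_num_threads(1) for the single-thread CPU case are assumptions about the protocol, which the paper does not describe.

```python
import time
import torch

def per_image_latency(model, batch, device, n_iters=100):
    # Rough per-image inference latency in seconds; measurement details are assumptions.
    model = model.to(device).eval()
    batch = batch.to(device)
    with torch.no_grad():
        for _ in range(10):                      # warm-up
            model(batch)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_iters):
            model(batch)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / (n_iters * batch.size(0))

torch.set_num_threads(1)  # single CPU thread, matching the paper's CPU setting
images, _ = next(iter(test_loader))
print(per_image_latency(model, images.view(images.size(0), -1), "cpu"))
```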