Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Deep Differentiable Logic Gate Networks
Authors: Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To empirically validate our method, we perform an array of experiments. We start with the three MONK data sets and continue to the Adult Census and Breast Cancer data sets. For each experiment, we compare our method to other methods in terms of model memory footprint, evaluation speed, and accuracy. To demonstrate that our method also performs well on image recognition, we benchmark it on the MNIST as well as the CIFAR-10 data sets. |
| Researcher Affiliation | Collaboration | Felix Petersen (Stanford University; University of Konstanz), Christian Borgelt (University of Salzburg), Hilde Kuehne (University of Frankfurt; MIT-IBM Watson AI Lab), Oliver Deussen (University of Konstanz) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | The source code will be publicly available at github.com/Felix-Petersen/difflogic. [...] We will release the source code of this work to the community to foster future research on learning logic gate networks. |
| Open Datasets | Yes | We start with the three MONK data sets [40] and continue to the Adult Census [41] and Breast Cancer data sets [42]. [...] We start by considering MNIST [43]. [...] In addition to MNIST, we also benchmark our method on CIFAR-10 [50]. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. It mentions using standard datasets like MNIST and CIFAR-10, which often have predefined splits, but does not explicitly state them. |
| Hardware Specification | Yes | Times (T.) are inference times per image, the GPU is an NVIDIA A6000, and the CPU is a single thread at 2.5 GHz. |
| Software Dependencies | No | The paper mentions the Adam optimizer and refers to PyTorch [56], but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We train all models with the Adam optimizer [33] at a constant learning rate of 0.01. [...] In all reported experiments, we use the same number of neurons in each layer (except for the input) and between 4 and 8 layers, which we call straight network. |
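The Experiment Setup row pins down the optimizer: standard Adam [33] at a constant learning rate of 0.01. As a stdlib-only sketch of what that update rule computes, here is a single Adam step on a scalar parameter; this is an illustrative reimplementation of the textbook formula with Adam's usual default hyperparameters (beta1=0.9, beta2=0.999, eps=1e-8 are assumptions, not values reported in the paper), not the authors' code, and it does not model the logic-gate network itself.

```python
# Minimal, stdlib-only sketch of one Adam update (Kingma & Ba),
# using the reported constant learning rate of 0.01.
# Illustrative only -- not the authors' implementation.
import math

def adam_step(theta, grad, m, v, t, lr=0.01,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply a single Adam update to a scalar parameter.

    m, v are the running first/second moment estimates; t is the
    1-based step count (needed for bias correction).
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias-corrected mean
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# On the very first step, the bias-corrected update collapses to
# roughly lr * sign(grad), i.e. a step of about 0.01 here.
theta, m, v = adam_step(theta=0.5, grad=1.0, m=0.0, v=0.0, t=1)
```

With a constant learning rate, this update is applied unchanged for the whole run; the "straight network" detail (equal layer widths, 4 to 8 layers) concerns the architecture only and is not modeled above.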