Probabilistic Neural Circuits

Authors: Pedro Zuidberg Dos Martires

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For our experimental evaluation we used the MNIST family of datasets. That is, the original MNIST (Deng 2012), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), and also EMNIST (Cohen et al. 2017). We implemented PNCs (and also SPQNs) in PyTorch and Lightning, and ran all our experiments on a DGX-2 machine with V100 Nvidia cards.
Researcher Affiliation | Academia | Pedro Zuidberg Dos Martires, Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden
Pseudocode | Yes | Algorithm 1: Layer-wise circuit evaluation
Open Source Code | Yes | https://github.com/pedrozudo/ProbabilisticNeuralCircuits.git
Open Datasets | Yes | For our experimental evaluation we used the MNIST family of datasets. That is, the original MNIST (Deng 2012), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), and also EMNIST (Cohen et al. 2017).
Dataset Splits | Yes | The best model was selected using a 90/10 train-validation data split where we monitored the negative log-likelihood on the validation set.
Hardware Specification | Yes | We implemented PNCs (and also SPQNs) in PyTorch and Lightning, and ran all our experiments on a DGX-2 machine with V100 Nvidia cards.
Software Dependencies | No | The paper mentions the software used: "We implemented PNCs (and also SPQNs) in PyTorch and Lightning". However, it does not provide specific version numbers for PyTorch or Lightning, which are necessary for reproducible software dependencies.
Experiment Setup | Yes | All three models were trained for 100 epochs using Adam (Kingma and Ba 2014) with a learning rate of 0.001 and a batch size of 50.
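
The Open Datasets, Dataset Splits, and Experiment Setup rows together describe a concrete training configuration. The following is a minimal sketch, assuming torchvision and PyTorch Lightning, of how that configuration could be wired up: MNIST-family data, a 90/10 train-validation split, Adam with learning rate 0.001, batch size 50, 100 epochs, and model selection by validation negative log-likelihood. The `PlaceholderDensityModel` is a hypothetical stand-in (an independent-Bernoulli baseline), not the probabilistic neural circuit from the paper.

```python
# Hypothetical sketch of the reported experimental setup; the model is a
# placeholder and does NOT implement the paper's probabilistic neural circuits.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms


class PlaceholderDensityModel(pl.LightningModule):
    """Stand-in density model: independent Bernoulli over binarised pixels."""

    def __init__(self, num_pixels: int = 28 * 28, lr: float = 1e-3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_pixels))
        self.lr = lr

    def nll(self, batch):
        x, _ = batch                               # class labels are ignored
        x = (x.view(x.size(0), -1) > 0.5).float()  # binarise pixels
        # Per-example negative log-likelihood under the Bernoulli model.
        return nn.functional.binary_cross_entropy_with_logits(
            self.logits.expand_as(x), x, reduction="none"
        ).sum(dim=1).mean()

    def training_step(self, batch, batch_idx):
        return self.nll(batch)

    def validation_step(self, batch, batch_idx):
        self.log("val_nll", self.nll(batch), prog_bar=True)

    def configure_optimizers(self):
        # Adam with learning rate 0.001, as reported in the paper.
        return torch.optim.Adam(self.parameters(), lr=self.lr)


def main():
    full_train = datasets.MNIST(
        "data", train=True, download=True, transform=transforms.ToTensor()
    )
    # 90/10 train-validation split, as reported.
    n_val = len(full_train) // 10
    train_set, val_set = random_split(full_train, [len(full_train) - n_val, n_val])

    train_loader = DataLoader(train_set, batch_size=50, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=50)

    # Select the best model by validation negative log-likelihood.
    checkpoint = pl.callbacks.ModelCheckpoint(monitor="val_nll", mode="min")
    trainer = pl.Trainer(max_epochs=100, callbacks=[checkpoint])
    trainer.fit(PlaceholderDensityModel(), train_loader, val_loader)


if __name__ == "__main__":
    main()
```

Swapping Fashion MNIST or EMNIST in place of `datasets.MNIST` covers the other two datasets named in the paper; everything else in the reported configuration stays the same.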
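Since the Software Dependencies row flags missing version numbers, a small hypothetical snippet like the one below (assuming the `pytorch_lightning` package name) could be used to record the library versions an experiment actually ran with.

```python
# Hypothetical snippet: log library versions, since the paper does not pin them.
import torch
import pytorch_lightning as pl

print("torch:", torch.__version__)
print("pytorch_lightning:", pl.__version__)
```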